Color channel statistics are output from a verbose "identify".
An average color is simply a -scale 1x1\!, then output to a "txt:" image. http://www.imagemagick.org/Usage/compare/#metrics
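For example (assuming ImageMagick is installed; "logo:" is IM's built-in sample image, standing in for your own file):

```shell
# Scale the whole image down to a single pixel (its average color),
# then print that pixel in the human-readable "txt:" format.
convert logo: -scale 1x1\! txt:-
```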
I have in the last couple of days been wrestling with finding metrics (image fingerprints) for reducing the number of images being compared.
First, the bigger problem in comparing images isn't changes in hue or color, or even changing a few pixels. It is matching images that have been cropped or trimmed!
From my own experiments, an 'average color' to 16-bit values is only a bare minimum fingerprint, and rather useless. Averaging colors is a smoothing process, so changing a few pixels does not change the value much. A hue change, however, will.
A better metric I am finding is to use a 3x3 matrix of average colors (after the outside 10% border is removed).
If hue change is a big problem, you can subtract the overall average color from the 3x3 color matrix, leaving a pattern of color differences that represents the image. That will remove the effect of the overall image color, though not a contrast change.
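As a sketch of that subtraction step in awk (each cell is treated as a single grayscale value for simplicity; for RGB you would repeat this per channel, and the nine input values here are made up):

```shell
# Subtract the overall average from each cell of a 3x3 metric,
# leaving a pattern of relative differences.
# The nine input values are illustrative only.
echo "4433 7672 3951 541 10105 5031 0 4083 6323" |
awk '{
  for (i = 1; i <= NF; i++) sum += $i
  mean = sum / NF
  for (i = 1; i <= NF; i++) printf "%s%d", (i > 1 ? " " : ""), $i - mean
  print ""
}'
```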
I have added this to the above 'compare' examples page, but it may be a day or so before it appears.
My own personal image metric tests have been with a 'colorless' edge-detected metric. This should be basically independent of image color/brightness/contrast changes. But then most of the images I compare are cartoon-like.
See the new section at the bottom of the compare page...
I create an 'edge' image such as...
- Code: Select all
convert logo: -scale 100x100\! -median 3 \
-quantize YIQ +dither -colors 3 -edge 1 \
-colorspace gray -blur 0x1 outline_image.gif
Which is itself a good image for 'comparing two images' without worrying about color differences, though I am still working out the best way to do this.
The metric itself is a 3x3 matrix of the greyscale image...
- Code: Select all
convert outline_image.gif -scale 3x3\! -compress none pgm:- | tail -1
4433 7672 3951 541 10105 5031 0 4083 6323
I wrote scripts to generate and cache these metrics, so they only need to be computed once per image, as needed.
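A minimal sketch of that caching (assuming ImageMagick; the cache-file-beside-the-image scheme and the function names are my own invention here, not the actual scripts):

```shell
# Sketch: cache the 3x3 metric beside each image so it is only
# computed once. compute_metric wraps the pipeline shown above
# (assumes ImageMagick is installed); cache_metric does the caching.
compute_metric() {
  convert "$1" -scale 3x3\! -compress none pgm:- | tail -1
}

cache_metric() {
  img=$1
  cache="$img.metric"
  # recompute only if there is no cache yet, or the image is newer
  if [ ! -f "$cache" ] || [ "$img" -nt "$cache" ]; then
    compute_metric "$img" > "$cache"
  fi
  cat "$cache"
}
```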
I also used -colors rather than the more logical -segment, as the latter is so SLOW! It also appears to be wrong when you compare IM's version with the results of the segmentation algorithm from Leptonica.
The -median filter is great for ignoring fine detail in images.
When I compare these metrics, I discount the 'corner' cell that has the worst difference; that way logos that may have been added to an image do not throw off the match.
This works! That simple metric seems to find all near-matching images. It seems best to compare the metrics using a 'manhattan' comparison (add the differences), rather than a multi-dimensional distance (pythagorean), or rejecting if any difference is beyond a 'threshold' (hyper-cube).
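To sketch that comparison in awk, discounting the corner cell with the worst difference (both metric strings below are made up, not real image output):

```shell
# Manhattan distance between two 3x3 metrics (nine values each),
# discounting the corner cell with the worst difference so an
# added logo in one corner does not sink an otherwise good match.
# Both metric strings are illustrative only.
m1="4433 7672 3951 541 10105 5031 0 4083 6323"
m2="4500 7600 9999 600 10000 5100 100 4000 6300"
echo "$m1 $m2" | awk '{
  for (i = 1; i <= 9; i++) {
    d[i] = $(i) - $(i + 9)
    if (d[i] < 0) d[i] = -d[i]
    sum += d[i]
  }
  # corners of the 3x3 grid are cells 1, 3, 7 and 9
  worst = 0
  split("1 3 7 9", corner, " ")
  for (j = 1; j <= 4; j++) if (d[corner[j]] > worst) worst = d[corner[j]]
  print sum - worst
}'
```

Here the large difference in cell 3 (a top-right corner) is dropped, so the two metrics still compare as close.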
NOTE: a metric will always turn up non-matching images as well as matching ones. Its job is to reduce the number of full image compares you do, not actually to find matching images. That is the job of the next stage: comparing the images with similar metrics.
If you would like to contact me personally, I would love to hear of your own ideas and problems. Anthony_AT_griffith_DOT_edu_DOT_au
Or just continue this thread.