Get perceptual hash value for image using command line

Questions and postings pertaining to the development of ImageMagick, feature enhancements, and ImageMagick internals. ImageMagick source code and algorithms are discussed here. Usage questions which are too arcane for the normal user list should also be posted here.
snibgo
Posts: 9756
Joined: 2010-01-23T23:01:33-07:00
Authentication code: 1151
Location: England, UK

Re: Get perceptual hash value for image using command line

Post by snibgo » 2016-08-23T11:12:15-07:00

As far as I can see, any sRGB image converted to either HCL or HCLp results in the same pixel values. I'd like to see any counter-examples.

Where they differ is in the handling of colours that are not in the sRGB gamut when these are converted from HCL or HCLp back to sRGB. Colours that are in the sRGB gamut get the same treatment, but colours out of gamut get different treatment.

Hence, converting from sRGB to HCL (or HCLp), tweaking colours in that colorspace, and converting back to sRGB can give different results. We can see this with:

Code:

%IM%convert hald:10 -colorspace HCL -evaluate pow 0.5 -colorspace sRGB h1.png

%IM%convert hald:10 -colorspace HCLp -evaluate pow 0.5 -colorspace sRGB h0.png

%IM%compare -metric RMSE h0.png h1.png NULL:
4973.94 (0.0758974)
snibgo's IM pages: im.snibgo.com

fmw42
Posts: 22683
Joined: 2007-07-02T17:14:51-07:00
Authentication code: 1152
Location: Sunnyvale, California, USA

Post by fmw42 » 2016-08-23T11:21:53-07:00

Thanks for that clarification. Where all this started for me was in attempting to match Photoshop colorize. See viewtopic.php?f=1&t=23803&hilit=HCLp especially the part at viewtopic.php?f=1&t=23803&hilit=HCLp#p101120 and viewtopic.php?f=1&t=23803&hilit=HCLp#p101133

Post by fmw42 » 2016-08-23T11:49:16-07:00

It seems very odd to me that combining the top two individual colorspaces does not give the best pair of colorspaces. Any thoughts on that?
snibgo wrote:For the calculation of -metric phash comparisons between images, I recommend that:

IM squares the differences and sums them, as at present.
IM then divides by the appropriate number of channels.
IM then takes the square root. Thus we have the conventional RMS calculation.
(For comparative purposes, taking the square root is not important. But taking the correct mean is important.)
Would you clarify with regard to dividing by the number of color channels? Did you test with and without this normalization, and did it really make a difference? I understand that RMSE is more common practice than just MSE, but is it worth the time to do the square root?

Post by snibgo » 2016-08-23T12:20:32-07:00

fmw42 wrote:It seems very odd to me that you do not find combining the top two individual colorspaces to get the best pair of colorspaces. Any thoughts on that?
Sorry, I don't understand what you mean.

Calculating the means correctly is important because, I hope, IM will allow us to use any number of channels from any number of colorspaces. If IM calculates the mean correctly, then the numbers that come out of xyY+IQ+SB (7 channels) will be in the same range as those from xyY+YIQ+HSB (9 channels) or whatever. If the mean isn't calculated correctly (or the simple sum is taken), then numbers from one colorspace combination can't be compared with those from another combination, so it's much harder to write software that needs thresholds.
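As a sketch of that normalisation (my own illustration, not IM's actual phash code): dividing the summed squared differences by the number of values before taking the square root keeps distances in the same range regardless of how many channels a combination uses.

```python
import math

def phash_distance(a, b):
    """RMS distance between two perceptual-hash vectors:
    square the differences, sum, divide by the count, take the root."""
    assert len(a) == len(b)
    total = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.sqrt(total / len(a))

# With the mean taken correctly, a 7-value and a 9-value combination
# with the same per-channel error score the same, so a single
# threshold works across colorspace combinations.
d7 = phash_distance([0.0] * 7, [0.4] * 7)   # e.g. a 7-channel combination
d9 = phash_distance([0.0] * 9, [0.4] * 9)   # e.g. a 9-channel combination
```

Without the division by `len(a)`, the 9-channel sum would be larger simply because it has more terms.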

I agree that the square root isn't highly important. What is the cost of the square-root, compared to the other number-crunching? I don't know.

The mean and square-root issues arise only in the second part of the calculation, which crunches the 42 (or whatever) values from two images to make a single "distance" number, which is needed in "-metric phash". Personally, I doubt that will ever be used much, because the cost of calculating the PH values of the images is so high (totally dwarfing the square-root). So I don't think what IM does here matters much. I expect IM will be used to calculate image PHs, and these will be stored in a database, and application software will do the RMS calculation.

Post by fmw42 » 2016-08-23T13:06:51-07:00

snibgo wrote:
fmw42 wrote:It seems very odd to me that you do not find combining the top two individual colorspaces to get the best pair of colorspaces. Any thoughts on that?
Sorry, I don't understand what you mean.
Just curious if you have any idea why the two best individual colorspaces do not combine to give the single best dual colorspace result?

I understand now about the need for the mean. It is not more accurate, but simply a way to make comparisons of different colorspace combinations (channels) easier. I had thought (erroneously) that you were indicating you had gotten better results by normalizing by the number of channels.

Post by snibgo » 2016-08-23T13:38:18-07:00

fmw42 wrote:Just curious if you have any idea why the two best individual colorspaces do not combine to give the single best dual colorspace result?
Oh, I see what you mean. Well, the situation is quite close to that. The top two single colorspaces are HSB and YDbDr. As a pair, they rank equal 8th out of 435, so quite close to the top.

I think that the score from every pair is better than either colorspace alone (but I haven't checked this). I also think the ranking of multiple colorspaces depends on how "different" the colorspaces are. Do they measure different things? Down at the bottom of the rankings we have pairs like (YCbCr,YCC), (CbCr,YCbCr) and (CbCr,YCC) where (I assume) the colorspaces within each pair are very similar.
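For scale: 435 is the number of unordered pairs drawn from 30 items, so the "435 combinations" above is consistent with roughly 30 candidate colorspaces being trialled (my inference from the count; the names below are placeholders, not the actual list):

```python
from itertools import combinations
from math import comb

# Placeholder names standing in for the colorspaces trialled.
colorspaces = [f"cs{i:02d}" for i in range(30)]

# Every unordered pair of distinct colorspaces.
pairs = list(combinations(colorspaces, 2))

n_pairs = len(pairs)   # C(30, 2) = 30 * 29 / 2 = 435
```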
fmw42 wrote:I understand now about the need for the mean. It is not more accurate, but simply a way to make comparisons of different combinations colorspace (channels) easier.
That's it, exactly.

Where colorspaces (or combinations of colorspaces) rank close together, I don't think we can say for certain which is best. When I adjust the attacks slightly, the ranking order also changes slightly. Which combination of attacks is most representative of our needs? I don't think we can really say.

And there are other ways of doing rankings. I've grouped results by number of colorspaces. Instead, I might group by total number of channels, or lump them all together but apply a handicap to combinations with more colorspaces.

Post by snibgo » 2016-09-30T08:27:53-07:00

I've uploaded a new page, "Perceptual hash thresholds". This uses the same images as my previous page, trialling all colorspace combinations from 1 to 4 colorspaces. It uses the Matthews Correlation Coefficient (from false positive and false negative rates) as a measure of goodness to find the ranking order of combinations.

Conclusion in a nutshell: as before, more colorspaces are better, but the difference is not as profound. With four colorspaces, the best combination is CbCr+HSB+IQ+xyY. With only two, the best is OHTA+UV.
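For reference, a minimal sketch of the Matthews Correlation Coefficient from confusion-matrix counts (the standard formula, not code from the page itself):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient: +1 is perfect classification,
    0 is no better than chance, -1 is total disagreement."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A combination with few false positives and false negatives
# scores close to 1.
score = mcc(tp=90, tn=90, fp=10, fn=10)
```

Because it folds both false-positive and false-negative rates into one number, MCC gives a single "goodness" figure to rank the colorspace combinations by.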

Post by fmw42 » 2016-09-30T09:36:14-07:00

Why do you think the results of OHTA+UV are a bit better than those of OHTA+YUV? In other words, why do you think better results are achieved by dropping Y and using only the other two channels of YUV, YIQ, YCbCr, etc?

Post by snibgo » 2016-09-30T09:49:14-07:00

The first channel of OHTA represents "lightness", which isn't identical to the Y of YUV, but is very similar. If we include YUV, we are effectively giving more weight to lightness than to the colour channels. So the score for OHTA+YUV is different to OHTA+UV. For this set of images, the score is worse.

However, it isn't much worse. YUV+OHTA came fifth (out of 435 combinations), with MCC=0.789002.

Much the same can be said about the comparison between CbCr+OHTA and OHTA+YCbCr, etc.
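A toy illustration of the weighting argument (the channel labels are my own shorthand, not IM's): including Y alongside OHTA's first channel means two of the channels track lightness, so lightness carries more weight in the per-channel mean than it does with UV alone.

```python
# "L" marks a lightness-like channel, "C" a colour channel.
ohta_uv  = list("LCC" "CC")     # OHTA (I1,I2,I3) + U,V  -> 5 channels
ohta_yuv = list("LCC" "LCC")    # OHTA + Y,U,V           -> 6 channels

def lightness_weight(channels):
    """Fraction of the mean contributed by lightness-like channels."""
    return channels.count("L") / len(channels)

w_uv  = lightness_weight(ohta_uv)    # 1/5 = 0.2
w_yuv = lightness_weight(ohta_yuv)   # 2/6 = 0.333...
```

So the OHTA+YUV distance is tilted further toward lightness differences, which for this image set happened to score slightly worse.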

Post by fmw42 » 2016-09-30T09:53:25-07:00

I had a feeling that would be the answer. Thanks

Post by fmw42 » 2016-09-30T09:57:10-07:00

Interesting that sRGB does not show up in any of your combinations.

OHTA is an approximation of PCA, which tries to make the channels orthogonal. Perhaps that is one reason it might work well.

However, when I tested originally, OHTA and RGB did not do as well as HCLp and RGB. Granted, my analysis was much simpler, looking only for the best separation of positive and negative matches. I only tested a few combinations of RGB + other colorspaces (HCLp, OHTA, YCbCr, HSI).
