Interesting problem. A general assumption in image processing is that we are dealing with real colours, those inside the CIE horseshoe. In principle, we know the transformation each camera makes from "scene-referred" to "camera-referred", so we know the inverse of each transformation, so we can transform the output from either camera to look as if it had been made by the other. Or we can take shortcuts and consider just gray balance (my page you linked), contrast, saturation and so on.
But the general assumption isn't true for you. One of your cameras captures non-real colours, colours you can't see, beyond the red end of the spectrum. It transforms those colours into visible ones, colours you can see on the computer screen or whatever. I guess that it also "shuffles up" ordinary visible red colours, to make room for the colours that were infrared, but I could be wrong.
I have no experience of IR, so I can only speculate.
Two important factors are: the amount of IR in the illumination, and the IR reflectivity of the object. About half the radiation from the sun is IR. Does this proportion vary by time of day, or moisture in the air? I don't know. What proportion of light from blue sky or cloud is IR? I don't know, but I suspect not much. How much from a sunset? Different artificial lights will have different proportions of IR (eg inefficient incandescent versus modern fluorescent or LED). Green vegetation is a good IR reflector. In addition, all objects above absolute zero emit IR radiation, but body heat is far (thermal) IR, well beyond the near-IR that camera sensors respond to; it is thermal cameras, not modified photographic cameras, that can see people when there is no light source.
You might get hold of a photographic IR filter, the type that screws to the front of a lens. Or even just a piece of IR gel. (Or improvise with a rectangular container of water!) Then you can take photos of the same scene with the same camera and lens, with and without the IR filter. If you also have an ordinary camera with a sensor IR filter, then take the same photo, and you have three images. You can create a series of tests with different illuminations (midday, before sunset, after sunset, artificial lights), and of different objects (gray card, people, foliage, landscape, skyscape). Test with auto-exposure, but also the same manual exposure for all three photos.
(We have a naming problem: the IR filter at the sensor blocks IR light ("cut-IR"), but the IR filter at the front of the lens blocks everything except IR light ("pass-IR").)
These tests will show you what the Canon 60Da does to images in the presence or absence of IR. The photos with the "pass-IR" filter measure how much IR there is. I would expect an obvious tonal shift, and probably a red-shift in the areas where there is most IR present. For example, as shown on your page, the 60Da shows the red sky as lighter and redder, but doesn't change the blue sky.
For a constant subject (eg sunsets with a black silhouette foreground) I expect a transformation from one camera to the other might be quite simple. Three steps:
1. Create a mask that is white where you have most IR, black where there is none, and gray in between. This is made from the image, according to redness and brightness. For example:
Code:
magick input.png ( +clone -fill Red -colorize 100 ) -compose Difference -composite -grayscale RMS -negate -auto-level mask.png
2. Create a transformation from "ordinary camera" to "IR camera". This might be lighter tones and a hue shift towards red, eg "-modulate 120,100,80" (the arguments are brightness, saturation, hue, as percentages).
3. Composite with the mask, eg:
Code:
magick input.png ( +clone -modulate 120,100,80 ) mask.png -compose Over -composite out.png
Sorry, I've rambled a bit.