snibgo wrote: ↑
1. When I'm editing interactively,
I'm curious, what IM-based interactive editing software are you using?
snibgo wrote: ↑
I chop and change the image between colorspaces as the need arises, but the main edit window shows a sRGB version of the image. So the knobs and sliders are labelled L,a,b or L,u,v or whatever, but I see the effect on the sRGB image. At the end of the session, the editor gives me the exact sequence of IM commands that do my actions.
Maybe a helpful way of thinking about it is that there is a thing called an "image", which ultimately represents either "the (human-vision-)relevant aspects of the physical light entering a (hypothetical) camera from the represented (hypothetical) scene" (scene-referred) or "what it should look like to a human when viewed" (observer-referred); in either case, this is irrespective of how it happens to be represented or stored at the moment.
Let's assume observer-referred for the moment for simplicity (even though HDRI images often can't really be considered observer-referred since they may exceed the dynamic range of human vision, or at least the dynamic range of any practical viewing device, until after some processing like tonemapping).
Ideally there should be no such thing as "viewing an sRGB image", only "viewing an image". No matter what colorspace the image is stored in (or any other technical aspect of how it is stored), there should be a well-defined way of displaying it on any device so that it appears as intended. That is the whole point of using a well-defined colorspace: it gives a well-defined transformation to the output device, via a well-understood, wide-gamut, standardized intermediate colorspace (typically XYZ in ICC profile color management, though perhaps sometimes Lab for explicitly observer-referred images).
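To make that pipeline concrete, here is a minimal sketch (Python with NumPy) of the standard decode step: undoing the sRGB transfer function and mapping linear RGB to the XYZ intermediate colorspace, using the D65 matrix from the sRGB specification (IEC 61966-2-1). An output device would then apply its own XYZ-to-device transform; this is only the first half of that chain.

```python
import numpy as np

def srgb_to_linear(c):
    """Invert the sRGB transfer function (IEC 61966-2-1)."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

# Linear sRGB (D65) -> XYZ matrix from the sRGB specification.
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    """Decode an sRGB triple (0..1) to CIE XYZ."""
    return M @ srgb_to_linear(rgb)
```

Sanity check: sRGB white (1, 1, 1) decodes to the D65 white point, with Y = 1.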
So I would say that what you are doing is seeing the effect on the image itself, not merely on an "sRGB image". Of course, if you were to display a non-sRGB image as if it were sRGB, you would not be viewing the represented image correctly; really, any editing software should transform the image (whatever colorspace it is stored in) so that it is viewed correctly. When you change the colorspace or "working colorspace" of the image in an image editor, it should *not* appear different on screen, aside from technical limitations such as loss of image information from passing through a colorspace that cannot represent the full fidelity of the image (e.g. a linear colorspace stored in 8-bit or even 16-bit integers, or too narrow a gamut or dynamic range for the source image).
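A quick numerical illustration of that 8-bit-linear fidelity loss (a sketch using only the standard sRGB transfer function, not any particular editor's internals): quantizing a deep-shadow value directly in linear form to 8 bits loses far more information than encoding it through the sRGB curve first.

```python
def linear_to_srgb(u):
    """Forward sRGB transfer function (IEC 61966-2-1)."""
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    """Inverse sRGB transfer function."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def quantize(v, bits=8):
    """Round to the nearest representable level at the given bit depth."""
    levels = 2 ** bits - 1
    return round(v * levels) / levels

u = 0.001  # a dark linear intensity (deep shadow)

# Stored as 8-bit linear: quantize the linear value directly.
err_linear = abs(quantize(u) - u)

# Stored as 8-bit sRGB: encode, quantize, decode.
err_srgb = abs(srgb_to_linear(quantize(linear_to_srgb(u))) - u)
```

Here the linear path rounds the shadow value all the way to zero, while the sRGB-encoded path keeps it within about a tenth of that error, which is exactly why 8-bit linear storage is a bad idea even when the image "looks the same".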
This is why, when we save an image to a file, we should think of it as saving the underlying image, not as saving "the linear image" or "the image in sRGB colorspace". The colorspace in which it is actually stored in the file should be driven only by technical and pragmatic considerations: minimizing loss of fidelity, maintaining compatibility with other software that might read it, and keeping the file size low enough. So long as we have effectively stored the image, it should look the same when decoded and displayed properly, and it can be transformed into other colorspaces for purposes like applying particular image-processing algorithms.
And, when reading an image from a file, we should *by default* convert it into a consistent default colorspace for further operations, unless the user explicitly chooses one. This way, the colorspace in which the image happens to have been *stored* does not affect all the operations we may perform on it.
[+EDIT:] And perhaps the user could optionally define a working colorspace, to which the image is implicitly re-transformed after every operation. With floating-point this shouldn't produce much loss of fidelity.
(Although even better in terms of fidelity and processing time would be if there were some lazy evaluation for colorspace transformations, so that if the logical colorspace is transformed, but then immediately transformed back by the next operation for its own purposes before the image was actually read for processing, the two transformations would cancel and there would be no transformation calculation physically performed.)
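A toy sketch of what I mean, purely hypothetical (the class and method names are mine; nothing like this exists in IM): conversions are queued rather than executed, and a conversion immediately followed by its inverse cancels out before any pixel math happens.

```python
# Pairs of colorspace conversions that are inverses of each other
# (only the two we need for the demo).
INVERSE = {("sRGB", "Lab"): ("Lab", "sRGB"),
           ("Lab", "sRGB"): ("sRGB", "Lab")}

class LazyImage:
    """Hypothetical image whose colorspace conversions are evaluated lazily."""

    def __init__(self, pixels, colorspace="sRGB"):
        self.pixels = pixels
        self.colorspace = colorspace
        self.pending = []          # queued (src, dst) conversions
        self.conversions_run = 0   # how many were physically executed

    def to(self, dst):
        """Logically change colorspace; cancel against the previous step if possible."""
        step = (self.colorspace, dst)
        if self.pending and self.pending[-1] == INVERSE.get(step):
            self.pending.pop()     # step undoes the queued conversion: drop both
        else:
            self.pending.append(step)
        self.colorspace = dst
        return self

    def realize(self):
        """Actually perform whatever conversions remain before pixels are read."""
        for src, dst in self.pending:
            self.conversions_run += 1   # real per-pixel math would go here
        self.pending.clear()
        return self.pixels
```

With this, `img.to("Lab").to("sRGB").realize()` performs zero physical conversions, while `img.to("Lab").realize()` performs one.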
Of course, this is not the way that IM works currently, and I think this is part of what makes the results unpredictable unless the user delves into the gory details of which colorspace the image is stored, processed, and saved in at each step, every time an image is read, processed, or saved. (Or they give up, disable colorspace handling completely, and do their own colorspace management, as you do, which probably isn't practical for most users.)
snibgo wrote: ↑
Increase the contrast of the image using a sigmoidal transfer function without saturating highlights or shadows.
Well, that's true when the image is RGB. But applying sigmoidal-contrast to the a or b channel of LAB affects the colour saturation, not the contrast, of the image.
Right, and this makes me think that we should, for each colorspace, label which channels scale with intensity, i.e. those related (although not always linearly) to the amount of light from the scene. For example, L in Lab and V/B in HSV/HSB are intensity-dependent, as are all three channels in RGB or sRGB, while a, b, H, and S aren't.
An operation intended to change the contrast of the image intensity should, by default, act on these channels only, although the user should of course be able to choose explicitly which channel(s) it should act on.
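For instance, a contrast-only edit in Lab would apply a sigmoidal curve to L alone and leave a and b untouched. Here is a sketch of that in Python: the curve has the same general shape as IM's -sigmoidal-contrast, normalized so 0 maps to 0 and 1 maps to 1 (the `contrast` and `midpoint` parameter names are mine, not IM's).

```python
import math

def sigmoidal(u, contrast=5.0, midpoint=0.5):
    """Sigmoidal contrast curve on [0, 1], normalized so 0 -> 0 and 1 -> 1."""
    s = lambda x: 1.0 / (1.0 + math.exp(contrast * (midpoint - x)))
    return (s(u) - s(0.0)) / (s(1.0) - s(0.0))

def boost_contrast_lab(L, a, b, contrast=5.0):
    """Apply contrast to the intensity channel (L, 0..100) only; a and b pass through."""
    return sigmoidal(L / 100.0, contrast) * 100.0, a, b
```

The curve darkens values below the midpoint and brightens values above it without ever clipping the endpoints, which is the "without saturating highlights or shadows" property; applied to L only, it changes contrast rather than colour saturation.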
snibgo wrote: ↑
I tell IM it is really in sRGB because I don't want IM to do any automatic conversions.
What about -set colorspace None instead of -set colorspace sRGB? Same idea, but it seems less sneaky.