A few quick comments, off the top of my head (TODO list is a bit too tall these days to actually try stuff):
RE: blending the result of downsampling with (tensor, that is, -resize) Mitchell and the result of downsampling with (tensor) Lanczos http://forums.dpreview.com/forums/read. ... e=41369406:
This is a nice idea. The main weakness of tensor Lanczos (and indeed of all tensor methods with negative lobes) is that its cardinal basis function has a strong "checkerboard" component, so I would not be surprised if you got even better results by blending tensor Lanczos with the result of downsampling with an EWA (-distort Resize) method, for example Robidoux, Lanczos or LanczosSharp (the current built-in, or the "improved versions" discussed in viewtopic.php?f=22&t=19636&start=120#p83719).
When I have time, I may actually figure out which blend of (tensor) Lanczos and some version of EWA Lanczos leads to the minimax (= "best worst-case") over- and undershoot.
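To make the blending concrete, here is a rough sketch (the built-in rose: thumbnail, the output filenames, and the 50/50 weight are placeholders I made up, not tuned values):

Code: Select all

```shell
# Downsample twice: once with tensor (orthogonal) Lanczos, once with EWA Robidoux.
convert rose: -filter Lanczos -resize 35x23 tensor.png
convert rose: -filter Robidoux -distort Resize 35x23 ewa.png

# Blend the two results; compose:args=50 takes 50% of the second (EWA) image.
convert tensor.png ewa.png -compose Blend -define compose:args=50 \
        -composite blend.png
```

The interesting question is of course what weight to use instead of 50, which is exactly the minimax computation mentioned above.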
RE: Resampling through linear light (the "gamma corrected" toolchain) VS resampling in sRGB http://forums.dpreview.com/forums/read. ... e=41378321:
I would not assume that resampling through linear light is ALWAYS better. Some results from the upcoming Master's thesis of my student Adam Turcotte suggest that, at least when a significant portion of the toolchain involves sRGB (e.g. conversion from RAW; I really don't know exactly what the contributing factors are), resampling directly with sRGB values may give better results. (I hope I am not putting a giant foot in my mouth here, and that my questioning of orthodoxy does not come from faulty coding or from drawing general conclusions from narrowly defined tests.)
There are very strong heuristic arguments justifying the use of linear light whenever one does any kind of convolution/filtering (this includes resizing, perspective, warping, blurring, USM...). So, "linear light is better" is a very plausible rule of thumb.
Plausible does not make true.
I wager that Eric Brasseur, in his famous post http://www.4p8.com/eric.brasseur/gamma.html, could have rigged things so that the Dalai Lama disappears when downsized through linear light, but looks fine when processed in sRGB.
In other words, I'd like to hear from people who find that resizing with "plain sRGB" gives better-looking results, as well as from people who get better results through a gamma-corrected toolchain, because I'd like to know myself whether one of the two is always better, or whether it kind of depends. I'd definitely put my money on linear light. But I would not bet at just any odds.
RE: linear light resampling in IM: The ImageMagick code given by Eric Brasseur is not a very accurate way of going through linear light. It is better to use -colorspace. However, be warned that as of IM version 6.7.5, the -colorspace command changed drastically in its use, and that bugs were detected in the process of checking that the new setup was functioning properly: viewtopic.php?f=3&t=20751. So, be careful about drawing conclusions from an IM version that either had colorspace bugs or used a meaning of -colorspace which is the opposite of what you expect. IM's -gamma only approximates sRGB (this is unavoidable because there is more to sRGB than gamma correction). Although -colorspace accurately implements the equations for converting from sRGB to linear light with sRGB primaries (and the same black and white points), it may not match the sRGB profile that was actually used to produce the image. In addition, conversion from RAW invariably involves chopping pieces off one machine and/or abstract colorspace to fit it into another. To some extent, you can't go back to a colorspace you've left, unless the conversion is really "controlled". In theory, you can go back with Perceptual intent; my impression is that in practice you generally can't. Which brings me to...
Another linear light warning: I wish I were more of a colour space expert, so I may be, once again, putting my foot in my mouth, but I believe that you need to pay attention to the rendering intent used to produce the sRGB image you are resizing, and there may be black point/white point issues in the conversion as well. There is actually a huge amount of variation between sRGB profiles. What you "think" gives you linear light may actually give you only an immensely crude approximation. Especially if anything other than Relative rendering intent was used, and you don't import back with a high-quality sRGB profile that contains an accurate inverse of the conversion for the sRGB profile and rendering intent that were used to encode the image you are resizing. To get an idea of what a mess this is, visit http://www.color.org/srgbprofiles.xalter and note that almost everybody still uses v2, and that the quality of v2 profiles is all over the place. So, start reading from the bottom, and notice that there are two different recommended v2 profiles. One advantage of v4 over v2, if I remember correctly, is more carefully specified transformations from sRGB to other colourspaces. This is from the site of the ICC, the guardian organization of sRGB. You may think that you are safe using an embedded profile, but not necessarily: it could be an old v2 profile with terrible inverse transformations, or one using a vague definition of the Perceptual rendering intent; or the software may have embedded a profile which has nothing to do with the internal conversion to sRGB it actually used; or your toolchain may have done a crude or damagingly irreversible conversion at some point. Or Murphy laid down the law. (Profile experts out there, please correct me if I'm wrong.)
Yet another linear light warning: If I understand correctly, the issue is that there is no ONE linear light (because gamuts are truncated), and that even if there were such a thing, the conversion into and out of "the one linear light" (linear RGB with sRGB primaries and the same gamut, as is done in IM using -colorspace, say) is necessarily corrupted by "linear light inaccuracies" acquired along the long road from the actual scene to the image you are resizing. If we lived in a world of perfectly reversible conversions between completely fixed and standardized colour representations with unchanging parameters, we could trust the "linear light is better" rule of thumb to actually be a law. In the real world, unfortunately, we run the risk of making things worse by incorrectly correcting. Not to mention that http://www.kenrockwell.com/tech/color-m ... -wimps.htm. Which is why I'm interested in hearing about cases where a linear light resizing toolchain makes things worse.
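Caveats aside, if anyone wants to experiment, here is a minimal sketch of the two toolchains (this assumes an IM recent enough, say 6.7.8 or later, that -colorspace RGB means linear RGB and untagged input is assumed to be sRGB; the filenames and the built-in pattern:gray50 checkerboard are just illustrative). Downsizing a one-pixel black/white checkerboard through sRGB values gives roughly gray 128; going through linear light gives roughly gray 188, because 50% linear light re-encodes to about 188 in sRGB:

Code: Select all

```shell
# Assumes IM >= 6.7.8 semantics: untagged input is treated as sRGB and
# "-colorspace RGB" means linear RGB (older versions swap the two meanings).

# pattern:gray50 is a one-pixel black/white checkerboard: exactly 50% linear light.
convert -size 64x64 pattern:gray50 -type TrueColor checks.png

# Downsize directly on the sRGB-encoded values: averages 0 and 255 to ~128.
convert checks.png -filter Triangle -resize 50% srgb_small.png

# Downsize through linear light: averages the *light*, which re-encodes to ~188.
convert checks.png -colorspace RGB -filter Triangle -resize 50% \
        -colorspace sRGB linear_small.png
```

Comparing the two outputs with identify -verbose (or -format '%[fx:mean]') makes the difference obvious: the linear-light result is markedly brighter.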
RE: Using EWA (-distort Resize) with Lagrange or Catrom http://forums.dpreview.com/forums/read. ... e=41366611:
Although I've never tried it, I would be very surprised if this consistently gave good results. (Back of the envelope...) If you want an EWA scheme sharper than Robidoux, Mitchell, Lanczos2, Lanczos2Sharp, Lanczos, or LanczosSharp, I would try the experimental EWA LanczosRadius3 method:
Code: Select all
convert INPUT.IMG \
  -filter Lanczos \
  -define filter:blur=.9264075766146068 \
  -distort Resize WIDTHxHEIGHT \
  OUTPUT.IMG