stupid pet trick?: JPEG compression with qtable of ones

Discuss digital image processing techniques and algorithms. We encourage its application to ImageMagick but you can discuss any software solutions here.
NicolasRobidoux
Posts: 1944
Joined: 2010-08-28T11:16:00-07:00
Authentication code: 8675308
Location: Montreal, Canada

stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

P.S. Using a qtable of ones is possibly pushing things too far. Reason: DCT coefficient magnitudes can reach about 1024. At quality < 95 with the standard qtables, quantization brings almost all of them within a 256 range, so cjpeg, I believe, does not bother rescaling; this is consistent with the standard. With ones (or with quality levels above 95 and the standard tables), unfortunately, extreme DCT values may get clamped back toward middle gray. Consequently, the methods proposed in this thread may have quite strong filtering built in. That would be kinda good at very low quality, but not so much at mid to high quality. I'll post the fix in another thread.
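The coefficient range is easy to verify with a brute-force DCT. The sketch below (mine, not from the post) implements the 8x8 forward DCT from the JPEG standard's informative Annex A and evaluates it at extreme level-shifted 8-bit inputs; the coefficients span roughly [-1024, 1020]:

```python
import math

def dct2(block):
    """Brute-force forward 8x8 DCT as defined in the JPEG standard."""
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * cu * cv * s
    return out

# 8-bit samples are level-shifted by -128 before the DCT.
dc_max = dct2([[255 - 128] * 8] * 8)[0][0]   # 1016
dc_min = dct2([[0 - 128] * 8] * 8)[0][0]     # -1024

# A +/- pattern matched to the (4,4) basis function pushes an AC
# coefficient to 1020.
sgn = [1, -1, -1, 1, 1, -1, -1, 1]
checker = [[127 if sgn[x] * sgn[y] > 0 else -128 for y in range(8)]
           for x in range(8)]
ac_extreme = dct2(checker)[4][4]
```

With a quantization divisor of 1 these values do not fit in the 8-bit-ish budget that the standard tables normally leave, which is the clamping worry described above.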

Usable JPEG compression with [1,1,1,...,1] as quantization table.

In http://imagemagick.org/discourse-server ... 22&t=20333, I discuss numerous attempts at improving JPEG compression quantization matrices, many of them in the -sampling-factor 1x1 case. (I'd like to avoid the 2x2 colour artifacts caused by chroma blocks larger than luma blocks, artifacts which may be quite severe at reasonable quality.)

If you've followed the above thread, it may come as a surprise that I'm quite happy with the following quantization matrix:

Code:

1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
That is: 64 ones, used for the Y (luma) as well as the Cb and Cr (chroma) channels. You read me right: I use a quantization matrix which the Independent JPEG Group absolutely does not recommend, since it corresponds to -quality 100, which may suggest to some that I've lost my mind. (Well, maybe, but that is another story ;-)) To top it off, I actually use -quality 100!
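Why quality 100 yields a table of ones: libjpeg scales the Annex K base tables by a quality-derived factor and clamps every entry to at least 1. A minimal Python sketch (mine) mirroring jpeg_quality_scaling() from libjpeg's jcparam.c; the other names are my own:

```python
# Base luminance quantization table from Annex K of the JPEG standard,
# in natural (row-major) order.
STD_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def quality_scaling(quality):
    """Mirrors libjpeg's jpeg_quality_scaling() (jcparam.c)."""
    quality = max(1, min(100, quality))
    return 5000 // quality if quality < 50 else 200 - quality * 2

def scaled_table(base, quality):
    s = quality_scaling(quality)
    # libjpeg clamps each scaled entry to the baseline range [1, 255].
    return [max(1, min(255, (v * s + 50) // 100)) for v in base]
```

At -quality 100 the scaling factor is 0, so every entry collapses to the clamp floor of 1; at -quality 50 the base table passes through unchanged.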

Key trick: Use progressive encoding to scale the DCT coefficients.

Here is a cjpeg example which works quite well with -sample 1x1, the equivalent of ImageMagick's -sampling-factor 1x1.
Save the above matrix of ones in a text file ("qtable.txt", for example). Also save what's below in another text file ("scans.txt", for example):

Code:

# Nicolas Robidoux's progressive encoding v2012.02.26
# for -sample 1x1 (too few chroma values for 2x2)
0: 0  0  0 3;
1: 0  0  0 3;
2: 0  0  0 2;
0: 1  1  0 2;
0: 2  2  0 2;
2: 1  2  0 2;
0: 3  5  0 2;
1: 1  2  0 3;
0: 6  20 0 3;
0: 21 35 0 4;
0: 36 63 0 5;
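For readers unfamiliar with libjpeg scan scripts: each statement reads `component-list: Ss Se Ah Al;`, where Ss-Se select a band of zigzag-ordered DCT coefficients and Al, the successive-approximation low bit (the "point transform"), right-shifts the band by Al bits. With a quantization table of ones, that shift is the effective quantizer: 2^Al. A Python sketch (mine) deriving the effective luma divisors from the script above:

```python
# The scan script above, minus its comment lines.
SCANS = """
0: 0  0  0 3;
1: 0  0  0 3;
2: 0  0  0 2;
0: 1  1  0 2;
0: 2  2  0 2;
2: 1  2  0 2;
0: 3  5  0 2;
1: 1  2  0 3;
0: 6  20 0 3;
0: 21 35 0 4;
0: 36 63 0 5;
"""

def parse_scans(text):
    """Yield (components, Ss, Se, Ah, Al) per statement."""
    for stmt in text.split(";"):
        stmt = stmt.strip()
        if not stmt:
            continue
        comps, params = stmt.split(":")
        yield ([int(c) for c in comps.split()],
               *(int(p) for p in params.split()))

# Effective divisor of each luma (component 0) coefficient, in zigzag
# order; Ah == 0 marks the first (non-refinement) scan of a band.
luma_divisor = [None] * 64
for comps, ss, se, ah, al in parse_scans(SCANS):
    if 0 in comps and ah == 0:
        for k in range(ss, se + 1):
            luma_divisor[k] = 2 ** al
```

So, despite the flat table, the low AC frequencies are effectively quantized by 4, the midband by 8 to 16, and the tail by 32: the scaling really happens in the scan script.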
To use this qtable/scans pair, create a ppm file (cjpeg is fairly limited in the input formats it accepts), call it "original.ppm", say, and run

Code:

cjpeg -baseline -optimize -dct float -quality 100 -sample 1x1 -scans scans.txt -qtables qtable.txt -qslots 0 -outfile fido.jpg original.ppm
This will produce a JPEG-compressed image (fido.jpg) with 8x8 blocks in all channels, roughly the same size as the one produced with the following cjpeg command, which uses the standard quantization tables and cjpeg's default -sample 2x2 (chroma blocks covering 16x16 pixels):

Code:

cjpeg -baseline -optimize -dct float -quality 78 -outfile stock.jpg original.ppm
Now, although the fido.jpg image is IMHO roughly of the same quality, it differs in character from the stock image in a number of ways (which I like), the main two being
  • Effectively, a low-pass filter is applied to it, so it shows fewer "JPEG block ripples" :-) and less fine detail :-( than the image produced with the standard tables.
    Overall, the result looks much better when enlarged than the stock image (unless you want fine detail, of course).
  • It does not show the usual wide "red/green bands/checkerboard artifacts" associated with -sampling-factor 2x2.
These differences are quite obvious if, for example, you recompress the image found at http://filmicgames.com/archives/778 after having converted to ppm with, say,

Code:

convert bedroom_huffman.jpg original.ppm
Last edited by NicolasRobidoux on 2012-03-07T05:53:09-07:00, edited 6 times in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Here is one that works reasonably well with -sample 2x2:

Code:

# Nicolas Robidoux's progressive encoding v2012.02.26 
# for -sample 2x2 (chroma values too small for 1x1)
0: 0  0  0 3;
1: 0  0  0 1;
2: 0  0  0 1;
0: 1  1  0 2;
0: 2  2  0 2;
0: 3  9  0 2;
1: 1  2  0 1;
2: 1  9  0 1;
1: 3  5  0 2;
1: 6  9  0 3;
0: 10 20 0 3;
1: 10 14 0 4;
2: 10 20 0 2;
0: 21 42 0 4;
1: 15 20 0 5;
2: 21 35 0 3;
0: 43 63 0 5;
2: 36 63 0 4;
1: 21 63 0 6;
File-size-wise, this 2x2 scan script matches about -quality 80 with the standard tables, and IMHO looks better, especially if you enlarge the image.

Neither this nor the previous scans specification is finely tuned: they are the best I could cook up in a few hours, no more and no less.
Last edited by NicolasRobidoux on 2012-02-26T23:12:29-07:00, edited 1 time in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Here is a first pass at a higher quality (roughly the same file size as -quality 89) set for -sample 2x2:

Code:

# Nicolas Robidoux's progressive encoding v2012.02.27 
# for -sample 2x2 (chroma values too large for 1x1)
0: 0  0  0 1;
1: 0  0  0 1;
2: 0  0  0 1;
0: 1  1  0 1;
0: 2  2  0 1;
0: 3  9  0 2;
1: 1  5  0 1;
2: 1  9  0 1;
1: 6  9  0 3;
0: 10 20 0 2;
1: 10 14 0 4;
2: 10 20 0 2;
0: 21 42 0 3;
1: 15 35 0 5;
2: 21 35 0 3;
0: 43 48 0 4;
2: 36 63 0 4;
1: 36 63 0 6;
0: 49 63 0 5;
Again, not a final version, but something that gives an idea of what tricks the mutt can do.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

... and here is a first pass at a set which works with -sampling-factor 1x1 and produces files with roughly the same size (with 1x1) as -quality 94 with the stock tables:

Code:

# Nicolas Robidoux's progressive encoding v2012.02.27 
# for -sample 1x1 (chroma values too numerous for 2x2)
0: 0  0  0 1;
1 2: 0  0  0 1;
0: 1  1  0 1;
0: 2  2  0 1;
0: 3  20 0 1;
0: 21 48 0 2;
0: 49 53 0 3;
0: 54 57 0 4;
0: 58 60 0 5;
0: 61 63 0 6;
2: 1  9  0 1;
2: 10 27 0 2;
2: 28 35 0 3;
2: 36 42 0 4;
2: 43 48 0 5;
2: 49 63 0 6;
1: 1  5  0 1;
1: 6  9  0 2;
1: 10 14 0 3;
1: 15 20 0 4;
1: 21 27 0 5;
1: 28 35 0 6;
1: 36 63 0 7;
Again, these are first passes, but the results are quite encouraging, esp. if one expects the images to be often viewed at 2x magnification at the receiving end. (In final versions I'd reorder entries for more pleasant progressive decoding, and I certainly would test the fade rate to 7-bit shift more carefully. This is done very quickly, based on my experience of where "the important DCT coefficients" are for each of the three bands.)
Last edited by NicolasRobidoux on 2012-02-28T06:02:55-07:00, edited 1 time in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Here is one that works really well with -sample 1x1 and which produces files with about the same size as the stock tables with -sample 2x2 at quality 70:

Code:

# Nicolas Robidoux's progressive encoding v2012.02.27.14 
# for -sample 1x1 (too many skipped chroma values for 2x2)
0: 0  0  0 3;
1 2: 0  0  0 3;
0: 1  1  0 3;
0: 2  2  0 3;
2: 1  2  0 3;
0: 3  20 0 3;
1: 1  2  0 4;
0: 21 35 0 4;
2: 4  4  0 5;
0: 36 42 0 5;
1: 4  4  0 6;
Note that the above script skips one entry in each of the chroma scans, in addition to chopping off the tail in all three channels. The following scan is higher quality, and skips several entries in the luma (in the tail: entries 44 to 47) in addition to skipping entry 3 of both chroma channels:
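The skipped entries can be verified mechanically. A quick sketch (mine), assuming the libjpeg scans syntax, that computes which zigzag coefficient indices each component actually receives in the first script above:

```python
# The v2012.02.26 scan script above, minus its comment lines.
SCANS_1X1 = """
0: 0  0  0 3;
1 2: 0  0  0 3;
0: 1  1  0 3;
0: 2  2  0 3;
2: 1  2  0 3;
0: 3  20 0 3;
1: 1  2  0 4;
0: 21 35 0 4;
2: 4  4  0 5;
0: 36 42 0 5;
1: 4  4  0 6;
"""

# For each component, collect the zigzag indices covered by any scan.
coverage = {0: set(), 1: set(), 2: set()}
for stmt in SCANS_1X1.split(";"):
    stmt = stmt.strip()
    if not stmt:
        continue
    comps, params = stmt.split(":")
    ss, se, _ah, _al = (int(p) for p in params.split())
    for c in (int(x) for x in comps.split()):
        coverage[c].update(range(ss, se + 1))
```

This matches the description: both chroma channels receive only coefficients {0, 1, 2, 4} (entry 3 is skipped), and the luma tail is chopped after coefficient 42.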

Code:

# Nicolas Robidoux's progressive encoding v2012.02.27.16 
# for -sample 1x1 (chroma values too large for 2x2)
0: 0  0  0 3;
1 2: 0  0  0 3;
0: 1  1  0 2;
0: 2  2  0 2;
2: 1  2  0 3;
0: 3  9  0 2;
0: 10 27 0 3;
1: 1  2  0 4;
0: 28 35 0 4;
2: 4  4  0 5;
0: 36 43 0 5;
0: 48 48 0 5;
1: 4  4  0 6;
2: 3  3  0 6;
2: 5  5  0 6;
Last edited by NicolasRobidoux on 2012-02-28T06:05:11-07:00, edited 1 time in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

I'm going to move this material to another thread, because my tricks really work. (No stupid pet trick here.)

For example, here is a progressive scan spec tuned for -sample 2x2, which produces files about the same size as the standard setup (with -sample 2x2) at quality about 25:

Code:

# Nicolas Robidoux's progressive encoding v2012.02.27.16.23
# for -sample 2x2 (slightly wasteful for 1x1)
0: 0 0 0 4;
1 2: 0 0 0 3;
0: 1 9 0 4;
2: 1 5 0 4;
1: 1 9 0 5;
0: 10 27 0 5;
2: 6 9 0 5;
0: 28 63 0 6;
Unless you are dead set on detail preservation, it's simply no contest. (Again, it's not fully optimized: It took me about half an hour to put together given what I now understand. I did not put time into tweaking it. Like all the above, it's meant to be used with a quantization matrix of ones.)

Master's supervision is calling. Back to this later.
Last edited by NicolasRobidoux on 2012-03-05T09:25:42-07:00, edited 1 time in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Finally, a mature version of the progressive scan script:

Code:

# Nicolas Robidoux's JPEG progressive encoding v2012.02.28
#
# To be used with cjpeg -quality 100 and a single quantization table
# filled with ones, e.g. with
#
#   cjpeg -scans ThisFile.txt \
#         -qtables SixtyFourOnes.txt \
#         -qslots 0 -quality 100 -sample 1x1 \
#         -dct float -baseline -optimize \
#         -outfile OutputImage.jpg InputImage.ppm
#
# The chroma entries are meant for -sample 1x1 (too many skipped
# chroma values for 2x2).
#
# This progressive scan script was optimized for medium to large
# images. Thumbnails are another story.
#
# In terms of file size, the results correspond more or less to what's
# produced with cjpeg -sample 1x1 -quality 70 with the standard
# quantization tables. However, there are fewer JPEG artifacts. This
# is particularly obvious if the image is zoomed in: enlargements look
# noticeably "cleaner" than with the standard qtables and without
# progressive encoding.
#
# 1x1 subsampling was chosen over 2x2 subsampling because, with
# -sample 2x2, sharp interfaces occasionally produce dreadful colour
# artifacts pretty much regardless of the quality of the chroma
# tracking.
#
# In addition, the emphasis is on looking good, not on preserving
# small detail.
#
# If you want the resulting files to be as small as they can be
# without losing quality, a matching version of Jason Garrett-Glaser's
# (Dark Shikari) jpegrescan PERL script should be used to
# systematically compare variants of this progressive encoding
# specification.
#
# The author (Nicolas Robidoux) consults for a living and can dial
# specific artifact mixes in a controlled manner, customize
# jpegrescan, etc.
#
# Email: FiRsTnAmE dOt LaStNaMe HaT gMaIl DoT cOm.
#
# Suggested improvements and comments are welcome.
#
# This progressive encoding script is public domain.
0: 0  0  0 3;
1: 0  0  0 4;
2: 0  0  0 3;
0: 1  1  0 2;
0: 2  2  0 2;
2: 1  5  0 2;
0: 3  27 0 3;
2: 6  9  0 3;
1: 1  2  0 4;
0: 28 42 0 4;
1: 3  5  0 5;
0: 43 63 0 5;
1: 6  63 0 6;
2: 10 63 0 6;
There may be a few bits I can take out here and there (I think I may have left too much chroma in there, Cb (blue) in particular, and I did not drop any mode, as is done in medical ultrasound), but this is about as good as I can make something which has about the same size as the standard -sample 1x1 at quality 70.

On the other hand, one could argue that this could use more luma fine detail. P.S. I just figured out a more sophisticated way of using progressive encoding. Update coming up!

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Here is a gorgeous one, which produces files about the same size as -sample 2x2 -quality 76 to 79:

Code:

# Nicolas Robidoux's JPEG progressive encoding v2012.03.04
#
# To be used with cjpeg -quality 100 and a single quantization table
# filled with ones, e.g. with
#
#   cjpeg -scans ThisFile.txt \
#         -qtables SixtyFourOnes.txt \
#         -qslots 0 -quality 100 -sample 2x2 \
#         -dct float -baseline -optimize \
#         -outfile OutputImage.jpg InputImage.ppm
#
# The chroma entries are meant for -sample 2x2 (too many chroma
# values for 1x1).
#
# This progressive scan script was optimized for medium to large
# images. Thumbnails are another story.
#
# In terms of file size, the results correspond more or less to what's
# produced with cjpeg -sample 2x2 -quality 76 to 79 with the standard
# quantization tables. However, there are far fewer JPEG
# artifacts. This is particularly obvious if the image is zoomed in:
# enlargements look noticeably "cleaner" than with the standard
# qtables and without progressive encoding.
#
# If you want the resulting files to be as small as they can be
# without losing quality, a matching version of Jason Garrett-Glaser's
# (Dark Shikari) jpegrescan PERL script should be used to
# systematically compare variants of this progressive encoding
# specification. Complex progressive encoding is less effective with
# very small images.
#
# The author (Nicolas Robidoux) consults for a living and can dial
# specific artifact mixes in a controlled manner, customize
# jpegrescan, etc.
#
# Email: FiRsTnAmE dOt LaStNaMe HaT gMaIl DoT cOm.
#
# Suggested improvements and comments are welcome.
#
# This progressive encoding script is public domain.
0: 0 0 0 3;
1 2: 0 0 0 3;
0: 1 63 0 5;
2: 1 63 0 5;
1: 1 63 0 5;
0: 1 53 5 4;
2: 1 14 5 4;
1: 1 14 5 4;
0: 1 27 4 3;
2: 1 9  4 3;
1: 1 9  4 3;
0: 1 9  3 2;
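The script above is the "more sophisticated" use of progressive encoding: each band's first scan (Ah = 0) is followed by refinement scans whose Ah equals the previous Al, so coefficients sharpen one bit at a time, and the refinement bands shrink (1-63, then 1-53, 1-27, 1-9 for luma). A magnitude-only Python sketch (a hypothetical helper of mine, not cjpeg code) of how the decoder's estimate of a coefficient converges along the luma path Al = 5, 4, 3, 2:

```python
def approximations(value, al_sequence):
    """Magnitude-only view of successive approximation: after a scan
    with point transform Al, the decoder knows the coefficient to
    within 2**Al (real JPEG refines sign and magnitude bit by bit)."""
    return [(abs(value) >> al) << al for al in al_sequence]

# A coefficient of 100 as seen after each luma AC scan above.
steps = approximations(100, [5, 4, 3, 2])   # [96, 96, 96, 100]
```

Coefficients outside the later, narrower bands simply stop being refined, which is how the script spends its bit budget on the low frequencies.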
Another gorgeous one, which produces files about the same size as -sample 2x2 -quality 51 to 55:

Code:

# Nicolas Robidoux's JPEG progressive encoding v2012.03.04.23
#
# To be used with cjpeg -quality 100 and a single quantization table
# filled with ones, e.g. with
#
#   cjpeg -scans ThisFile.txt \
#         -qtables SixtyFourOnes.txt \
#         -qslots 0 -quality 100 -sample 2x2 \
#         -dct float -baseline -optimize \
#         -outfile OutputImage.jpg InputImage.ppm
#
# The chroma entries are meant for -sample 2x2 (too many chroma
# values for 1x1).
#
# This progressive scan script was optimized for medium to large
# images. Thumbnails are another story.
#
# In terms of file size, the results correspond more or less to what's
# produced with cjpeg -sample 2x2 -quality 51 to 55 with the standard
# quantization tables. However, there are far fewer JPEG
# artifacts. This is particularly obvious if the image is zoomed in:
# enlargements look noticeably "cleaner" than with the standard
# qtables and without progressive encoding.
#
# If you want the resulting files to be as small as they can be
# without losing quality, a matching version of Jason Garrett-Glaser's
# (Dark Shikari) jpegrescan PERL script should be used to
# systematically compare variants of this progressive encoding
# specification. Complex progressive encoding is less effective with
# very small images.
#
# The author (Nicolas Robidoux) consults for a living and can dial
# specific artifact mixes in a controlled manner, customize
# jpegrescan, etc.
#
# Email: FiRsTnAmE dOt LaStNaMe HaT gMaIl DoT cOm.
#
# Suggested improvements and comments are welcome.
#
# This progressive encoding script is public domain.
0: 0 0 0 4;
1 2: 0 0 0 3;
0: 1 63 0 6;
2: 1 63 0 6;
1: 1 63 0 6;
0: 1 53 6 5;
2: 1 14 6 5;
1: 1 14 6 5;
0: 1 27 5 4;
2: 1 9  5 4;
1: 1 9  5 4;
0: 1 9  4 3;
2: 1 5  4 3;
1: 1 5  4 3;
In this last one, I put more chroma than in the standard version.

Yet another, which produces files corresponding to about -quality 35:

Code:

# Nicolas Robidoux's JPEG progressive encoding v2012.03.05
#
# To be used with cjpeg -quality 100 and a single quantization table
# filled with ones, e.g. with
#
#   cjpeg -scans ThisFile.txt \
#         -qtables SixtyFourOnes.txt \
#         -qslots 0 -quality 100 -sample 2x2 \
#         -dct float -baseline -optimize \
#         -outfile OutputImage.jpg InputImage.ppm
#
# The chroma entries are meant for -sample 2x2 (too many chroma
# values for 1x1).
#
# This progressive scan script was optimized for medium to large
# images. Thumbnails are another story.
#
# In terms of file size, the results correspond more or less to what's
# produced with cjpeg -sample 2x2 -quality 35 with the standard
# quantization tables. However, there are far fewer JPEG
# artifacts. This is particularly obvious if the image is zoomed in:
# enlargements look noticeably "cleaner" than with the standard
# qtables and without progressive encoding.
#
# If you want the resulting files to be as small as they can be
# without losing quality, a matching version of Jason Garrett-Glaser's
# (Dark Shikari) jpegrescan PERL script should be used to
# systematically compare variants of this progressive encoding
# specification. Complex progressive encoding is less effective with
# very small images.
#
# The author (Nicolas Robidoux) consults for a living and can dial
# specific artifact mixes in a controlled manner, customize
# jpegrescan, etc.
#
# Email: FiRsTnAmE dOt LaStNaMe HaT gMaIl DoT cOm.
#
# Suggested improvements and comments are welcome.
#
# This progressive encoding script is public domain.
0: 0 0 0 4;
1 2: 0 0 0 3;
0: 1 63 0 7;
2: 1 63 0 7;
1: 1 63 0 7;
0: 1 53 7 6;
2: 1 14 7 6;
1: 1 14 7 6;
0: 1 27 6 5;
2: 1 9  6 5;
1: 1 9  6 5;
0: 1 9  5 4;
2: 1 5  5 4;
1: 1 5  5 4;
0: 1 5  4 3;
2: 1 2  4 3;
1: 1 2  4 3;
All three progressive scan specs are fully usable (even though they have not been maximally tweaked). I think it is fair to say that they go some way toward making you forget that you are using JPEG.

NOTE: Progressive scanning does not compress thumbnails as much as larger images.
Last edited by NicolasRobidoux on 2012-03-05T09:26:14-07:00, edited 1 time in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Here is another one that has not been carefully tested, just written down in one go from what I believe I now understand about the whole JPEG thing; it produces files about the same size as the standard tables with -sample 2x2 -quality 18:

Code:

#####################################################################
# Nicolas Robidoux's JPEG progressive encoding v2012.03.05.11
#
# To be used with cjpeg -quality 100 and a single quantization table
# filled with ones, e.g. with
# 
#   cjpeg -scans ThisFile.txt \
#         -qtables SixtyFourOnes.txt \
#         -qslots 0 -quality 100 -sample 2x2 \
#         -dct float -baseline -optimize \
#         -outfile OutputImage.jpg InputImage.ppm
#
# The chroma entries are meant for -sample 2x2 (too many chroma bits
# for 1x1).
#
# This progressive scan script was optimized for reasonably large
# images. Thumbnails are an altogether different story.
#
# If you want the resulting files to be as small as they can be
# without losing quality, a matching version of Jason Garrett-Glaser's
# (Dark Shikari) jpegrescan PERL script should be used to
# systematically compare variants of this progressive encoding
# specification.
#
# The author (Nicolas Robidoux) consults for a living and can dial
# specific artifact mixes in a controlled manner, customize
# jpegrescan, etc.
#
# Email: FiRsTnAmE dOt LaStNaMe HaT gMaIl DoT cOm.
#
# Suggested improvements and comments are welcome.
#
# This progressive encoding script is public domain.
######################################################################
0: 0 0 0 5;
1 2: 0 0 0 4;
0: 1 53 0 7;
2: 1 14 0 7;
1: 1 14 0 7;
0: 1 27 7 6;
2: 1 9  7 6;
1: 1 9  7 6;
0: 1 9  6 5;
2: 1 5  6 5;
1: 1 5  6 5;
0: 1 5  5 4;
2: 1 2  5 4;
1: 1 2  5 4;
(I can't even carefully check the results: the ophthalmologist dilated my pupils this morning and I'm half blind right now. But I'm really excited about how this is turning out, so here it is. And this really shows what comes out of my head, unadulterated by visual checks and tweaking ;-))

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

I'm going to have to look into the JPEG source code: using a qtable of all ones may be pushing it, and I may need something like a qtable of all fours in order to avoid the smoothing introduced by some DCT coefficients topping or bottoming out. Specifically, I'll have to check whether the coefficients are clamped (it would seem they should be).
Last edited by NicolasRobidoux on 2012-03-07T05:51:44-07:00, edited 1 time in total.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Indeed. The fix is easy, but makes files larger. More later.

Re: stupid pet trick?: JPEG compression with qtable of ones

Post by NicolasRobidoux »

Actually, the fix is probably unnecessary: There seems to be a cjpeg mechanism that handles "possibly out of range" DCT coefficients elegantly.