Theoretical Computer Science
it.information-theory data-streams
Updated Thu, 16 Jun 2022 05:10:45 GMT

What is the limit of lossless data compression, if such a limit exists?


Lately I've been dealing with compression-related algorithms, and I was wondering about the best compression ratio that lossless data compression can achieve.

So far, the only source I could find on this topic was Wikipedia:

Lossless compression of digitized data such as video, digitized film, and audio preserves all the information, but can rarely do much better than 1:2 compression because of the intrinsic entropy of the data.

Unfortunately, Wikipedia's article doesn't contain a reference or citation to support this claim. I'm not a data-compression expert, so I'd appreciate any information you can provide on this subject, or if you could point me to a more reliable source than Wikipedia.




Solution

I am not sure if anyone has yet explained why the magical number seems to be exactly 1:2 and not, for example, 1:1.1 or 1:20.

One reason is that in many typical cases almost half of the digitised data is noise, and noise (by definition) cannot be compressed.
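As a quick sanity check of that claim, here is a minimal sketch (my own illustration, not part of the original answer; Python standard library only) showing that a general-purpose compressor gains nothing on uniformly random bytes:

    import os
    import zlib

    # 1 MB of uniformly random bytes: pure "noise" with 8 bits of
    # entropy per byte, so there is nothing for the compressor to exploit.
    data = os.urandom(1_000_000)
    packed = zlib.compress(data, level=9)

    # The output is in fact slightly LARGER than the input, because
    # incompressible data still pays for block headers and framing.
    print(len(data), "->", len(packed), "bytes")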

I did a very simple experiment:

  • I took a grey card. To a human eye, it looks like a plain, neutral piece of grey cardboard. In particular, there is no information.

  • And then I took a normal scanner, exactly the kind of device that people might use to digitise their photos.

  • I scanned the grey card. (Actually, I scanned the grey card together with a postcard. The postcard was there for sanity-checking, so that I could make sure the scanner software does not do anything strange, such as automatically adding contrast when it sees the featureless grey card.)

  • I cropped a 1000x1000 pixel part of the grey card, and converted it to greyscale (8 bits per pixel).

What we have now should be a fairly good example of what happens when you study a featureless part of a scanned black & white photo, for example, a patch of clear sky. In principle, there should be exactly nothing to see.

However, with a larger magnification, it actually looks like this:

[Image: 30x30 crop, magnified by a factor of 10]

There is no clearly visible pattern, but it does not have a uniform grey colour. Part of it is most likely caused by the imperfections of the grey card, but I would assume that most of it is simply noise produced by the scanner (thermal noise in the sensor cell, amplifier, A/D converter, etc.). It looks pretty much like Gaussian noise; here is the histogram (on a logarithmic scale):

[Image: histogram]

Now if we assume that each pixel has its shade picked i.i.d. from this distribution, how much entropy do we have? My Python script told me that we have as much as 3.3 bits of entropy per pixel. And that's a lot of noise.
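The script itself is not shown in the answer, but the number is just the Shannon entropy of the empirical pixel-value distribution. A minimal sketch of how it might look, assuming the crop is saved as an 8-bit greyscale file (the file name here is hypothetical):

    import numpy as np
    from PIL import Image

    # Load the crop as 8-bit greyscale and flatten to a 1-D array of pixels.
    pixels = np.asarray(Image.open("grey_card_crop.png").convert("L")).ravel()

    # Empirical distribution over the 256 possible shades (drop empty bins
    # so that log2 is well defined).
    counts = np.bincount(pixels, minlength=256)
    p = counts[counts > 0] / pixels.size

    # Shannon entropy, in bits per pixel, under the i.i.d. assumption.
    entropy = -(p * np.log2(p)).sum()
    print(f"{entropy:.1f} bits of entropy per pixel")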

If this really was the case, it would imply that no matter which compression algorithm we use, the 1000x1000 pixel bitmap cannot be compressed, even in the best case, below 3.3 bits × 1,000,000 pixels / 8 bits per byte = 412500 bytes. And what happens in practice? I got a 432018-byte PNG file, which is pretty close to that bound.


If we over-generalise slightly, it seems that no matter which black & white photos I scan with this scanner, I will get the sum of the following:

  • "useful" information (if any),
  • noise, approx. 3 bits per pixel.

Now even if your compression algorithm squeezes the useful information into much less than 1 bit per pixel, you will still have as much as 3 bits per pixel of incompressible noise, while the uncompressed version is 8 bits per pixel. So the compression ratio will be in the ballpark of 1:2 (the entropy bound allows at best 8/3 ≈ 1:2.7, and real compressors fall short of it), no matter what you do.
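The ballpark figure can be reproduced without a scanner. The following self-contained sketch (synthetic data, not the author's scan) models a featureless image as a constant grey level plus Gaussian noise whose discretisation carries roughly 3 bits of entropy per pixel, and then compresses it:

    import zlib
    import numpy as np

    # Constant grey level 128 plus Gaussian sensor noise; sigma = 2.0 makes
    # the discretised noise carry roughly 3 bits of entropy per pixel.
    rng = np.random.default_rng(0)
    noisy = rng.normal(loc=128.0, scale=2.0, size=1_000_000)
    pixels = np.clip(np.rint(noisy), 0, 255).astype(np.uint8)

    raw = pixels.tobytes()                  # 8 bits per pixel, uncompressed
    packed = zlib.compress(raw, level=9)
    print(f"{len(raw)} -> {len(packed)} bytes "
          f"(ratio 1:{len(raw) / len(packed):.1f})")

The entropy bound puts the best possible ratio here at about 8/3 ≈ 1:2.7; DEFLATE's Huffman coding loses a little on top of that, so the printed ratio lands somewhat below the bound, in the promised 1:2 ballpark.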


Another example, with an attempt to find over-idealised conditions:

  • A modern DSLR camera, using the lowest sensitivity setting (least noise).
  • An out-of-focus shot of a grey card (even if there was some visible information in the grey card, it would be blurred away).
  • Conversion of the RAW file into an 8-bit greyscale image, without adding any contrast. I used typical settings in a commercial RAW converter. The converter tries to reduce noise by default. Moreover, by saving the end result as an 8-bit file we are, in essence, throwing away the lowest-order bits of the raw sensor readings (a toy sketch of this step follows the list)!
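Here is the promised toy sketch of that last quantisation step (real RAW converters do far more than a bit shift, and the value below is made up, so this only illustrates the loss of the low-order bits):

    # A hypothetical 14-bit raw sensor reading. Mapping it to an 8-bit
    # pixel keeps only the top 8 bits; the 6 low-order bits, where most
    # of the fine-grained sensor noise lives, are discarded outright.
    raw_reading = 0b10110101_101101
    pixel = raw_reading >> 6
    print(f"{raw_reading:014b} -> {pixel:08b}")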

And what was the end result? It looks much better than what I got from the scanner; the noise is less pronounced, and there is exactly nothing to be seen. Nevertheless, the Gaussian noise is there:

[Images: 30x30 crop, magnified by a factor of 10; histogram]

And the entropy? 2.7 bits per pixel. File size in practice? 344923 bytes for 1M pixels, which works out to about 2.8 bits per pixel. So even in a truly best-case scenario, with some cheating, we only pushed the compression ratio to roughly 1:3.


Of course, all of this has exactly nothing to do with TCS research, but I think it is good to keep in mind what really limits the compression of real-world digitised data. Advances in the design of fancier compression algorithms and raw CPU power are not going to help; if you want to save all the noise losslessly, you cannot do much better than 1:2.





Comments (5)

  • +3 – cool! if the noise is gaussian, my feeling is that projecting on the first k singular vectors (or a similar more fancy technique) would remove a lot of the noise. a quick google scholar search revealed an article by M. Elad and M. Aharon, that uses the projection method + some Bayesian stats trickery: ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=4011956. supposedly, in 2006 it was "state of the art". of course, it's not lossless, but Jukka's data shows that if you insist on small size you need to lose at least the noise. — Jun 18, 2011 at 23:37  
  • +0 – Your examples are only about lossless compression of images. I will reluctantly grant you their generalization to any data coming from physical sensors (sound, image, video, yet probably with a distinct factor) but there are (many?) other fields where compression is applied, with a much better ratio than 1:2 (natural language comes to mind), because there is less noise. — Jun 19, 2011 at 04:14  
  • +2 – @Jukka: +1: Beautiful experiment! @Sasho: for medical images, the conventional wisdom is that you can't lose anything, even if it's very likely just noise. — Jun 19, 2011 at 14:18  
  • +2 – Very nice and clear explanation! — Jun 19, 2011 at 20:50  
  • +2 – One more comment: this is really unavoidable for medical images. If you don't use enough precision to have a substantial amount of this noise in medical images, then you're probably losing some actual relevant detail, which you would really want to keep. — Jun 22, 2011 at 12:23  

