If you attempt to compress data that is already compressed and/or encrypted, you should not get significant further compression. If you do, then the original compression or encryption was of pretty poor quality. But if you take realistic text (e.g. program code) and run it through a popular compression algorithm, you will reliably get significant compression. (Which is why people use them in the first place.)
Therefore, by taking some data and attempting to compress it, you can tell normal text from compressed or encrypted data. For instance, you might say that if you can reduce its length by 20% or more, it is normal text. Aside from very short inputs (where the compressor's overhead dominates), you are unlikely to go wrong with a test like this.
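Here is a minimal sketch of that test in Perl using Compress::Zlib; the 20% cutoff is just the rule of thumb above, and looks_like_plain_text is a name made up for illustration, not anything standard:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Compress::Zlib;    # exports compress() by default

# Rough heuristic: if deflate shrinks the data by 20% or more,
# call it "normal" text; otherwise assume it is already compressed,
# encrypted, or otherwise high-entropy.
sub looks_like_plain_text {
    my ($data) = @_;
    return 0 unless length $data;            # nothing to judge
    my $compressed = compress($data);
    my $savings    = 1 - length($compressed) / length($data);
    return $savings >= 0.20;
}

my $input = do { local $/; <> };             # slurp file from STDIN/ARGV
printf "%s\n",
    looks_like_plain_text($input) ? "probably normal text"
                                  : "probably compressed or encrypted";
```

Run it against a source file and then against its gzipped version and you should see the verdict flip, short files excepted.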
There is, however, no way upon casual inspection to distinguish compressed data, encrypted data, and white noise from one another. The reasons for this come from information theory.