in reply to Re^3: Dynamically Updating Frequency Analysis
in thread Dynamically Updating Frequency Analysis
Let's say we have a file that only contains lowercase letters (a-z), space, and newline. That means we have 28 symbols, so we choose to encode each symbol in 5 bits instead of 8. This gives us a savings of 3/8 (the output is 5/8 the size of the input). Now, setting aside variable-length encodings (Huffman, LZW, etc) - can we improve the compression?
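To make the arithmetic concrete, here is a minimal Perl sketch of the 5-bit packing. The symbol-to-code mapping and the sample input are my own assumptions, not anything fixed by the scheme:

    use strict;
    use warnings;

    my $text    = "hello world\n";              # hypothetical input
    my @symbols = ('a' .. 'z', ' ', "\n");      # the 28 symbols
    my %code;
    @code{@symbols} = 0 .. $#symbols;           # assign codes 0..27

    # Emit each symbol as a 5-bit field, then pack the bit string
    # into bytes: 8 bits in becomes 5 bits out.
    my $bits   = join '', map { sprintf '%05b', $code{$_} } split //, $text;
    my $packed = pack 'B*', $bits;
    printf "%d bytes in, %d bytes out\n", length($text), length($packed);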
Well, 2^5 = 32 and we are only using 28, so can we use the 4 leftover sequences for anything? I know, let's examine the frequency of N-character sequences and pick the 4 that give the greatest reduction in the overall file size (setting aside that there will need to be a dictionary that explains how to expand the extra 4 codes, and a way to identify where the dictionary ends and the 5-bit data begins). For memory reasons, we determine that N can't be arbitrarily long - we can only go up to sequences of 4 characters. We then have a lookup table of every 2-, 3- and 4-character sequence with the corresponding frequency count.
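A minimal sketch of building that table in Perl, assuming $text already holds the file contents; the scoring formula is just the raw bit count implied by the scheme, ignoring the dictionary overhead:

    use strict;
    use warnings;

    my $text = "where is here and there\n";    # hypothetical input

    # Count every 2-, 3- and 4-character substring of $text.
    my %freq;
    for my $n (2 .. 4) {
        for my $i (0 .. length($text) - $n) {
            $freq{ substr $text, $i, $n }++;
        }
    }

    # Replacing an n-char sequence that occurs C times with a single
    # 5-bit code saves C * (n - 1) * 5 bits, dictionary aside, so
    # rank candidates by that and take the best 4.
    my @top4 = (sort {
          $freq{$b} * (length($b) - 1)
      <=> $freq{$a} * (length($a) - 1)
    } keys %freq)[0 .. 3];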
Assuming you have followed me this far, I can now explain the problem. Let's say the sequence 'here' appears 40 times (160 bytes in the input, 100 bytes as 5-bit codes) and gives the greatest reduction in the output: those 40 occurrences shrink to a single 5-bit code each, or 25 bytes total, setting aside the dictionary. We go ahead and make that substitution. The second we decide to pick that one, the frequencies of some of the others must be altered, because they share substrings with what we just replaced - every 'her', 'ere', 'he', 'er' and 're' inside a replaced 'here' is gone.
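One way to see the interaction - a hedged sketch, not necessarily how I'd solve it - is to blank out the chosen occurrences and recount, so that every sequence overlapping a replaced 'here' drops out of the table:

    use strict;
    use warnings;

    my $text   = "here and there, her here\n"; # hypothetical input
    my $chosen = 'here';                       # the winning sequence
    (my $rest = $text) =~ s/\Q$chosen\E/\x00/g; # \x00 marks a substitution

    # Recount, skipping any sequence that crosses a substitution.
    my %freq2;
    for my $n (2 .. 4) {
        for my $i (0 .. length($rest) - $n) {
            my $gram = substr $rest, $i, $n;
            next if index($gram, "\x00") >= 0;
            $freq2{$gram}++;
        }
    }
    # $freq2{'her'} and $freq2{'ere'} are now lower than before,
    # which is exactly the dynamic-update problem described above.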
Does that make sense?
Cheers - L~R