in reply to Storing large data structures on disk
That store/nstore is slower than printing is easy to explain: store has to cope with arbitrary data, while you, with intimate knowledge of the data structure, knew there would be just columns of integers to process.
That knowledge is also your biggest advantage: you know the data, and you know the access patterns you need.
If, for example, all numbers are below 256, each number can be stored in 1 byte. If the array is sparse (i.e. mostly 0s), you could store only the non-zero numbers and their positions. Or if numbers are often repeated, compress each run to a count and the number. A compression rate of 100 suggests one of these cases, or repeated occurrences of sequences of numbers; in that case a compression module like Compress::Bzip2 should get good results.
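A minimal sketch of the 1-byte idea, assuming every value really does fit in 0..255 (the data here is made up for illustration): pack with the 'C*' template stores each integer as a single unsigned byte, and unpack reverses it.

```perl
use strict;
use warnings;

# Hypothetical row of values; assumes each one fits in 0..255.
my @numbers = (0, 17, 255, 3, 3, 3);

# pack 'C*' writes each integer as one unsigned byte,
# so the byte string is exactly scalar(@numbers) bytes long.
my $packed = pack 'C*', @numbers;

# unpack 'C*' restores the original list from the byte string.
my @restored = unpack 'C*', $packed;
```

Write $packed to a file opened in binmode and you get a compact, fixed-width format you can even seek into by index.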
What compression did you use with freeze? There is no indication in Storable's documentation that freeze does any compression at all.
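Since freeze itself does not compress, you can compress its output yourself. A sketch, using the core Compress::Zlib module in place of the Compress::Bzip2 mentioned above (the $data structure is a made-up placeholder):

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);
use Compress::Zlib qw(compress uncompress);

# Hypothetical structure; substitute your real data.
my $data = { matrix => [ [ 0, 0, 1 ], [ 0, 2, 0 ] ] };

# freeze serializes the structure into a byte string,
# then compress (zlib) shrinks that byte string.
my $frozen     = freeze($data);
my $compressed = compress($frozen);

# Reverse the two steps to get the structure back.
my $thawed = thaw(uncompress($compressed));
```

For on-disk use, write $compressed to a file in binmode; how well it shrinks depends entirely on how repetitive the frozen image is.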
Replies are listed 'Best First'.
- Re^2: Storing large data structures on disk by roibrodo (Sexton) on May 31, 2010 at 17:11 UTC
- by jethro (Monsignor) on May 31, 2010 at 23:05 UTC