in reply to Re^8: Serializing a large object
in thread Serializing a large object
Does it make sense to compress the store files?
Yes & no. :(
I generated a random set of 3,000 overlaps--positive & negative--with a maximum range of 10,000.
The nstore'd file on disk was: 26/09/2010 15:26 60,783,878 fred.bin.
gzipping that resulted in: 26/09/2010 15:26 423,984 fred.bin.gz.
It'll certainly save you large amounts of disk space. But that's not your aim.
The problem is that whilst you save time reading from disk, you spend time decompressing. And in the end, much of the time spent retrieve()ing the data is the time required to allocate the memory and reconstruct the structure.
It would certainly be worth investigating the idea with your real-world datasets; it will absolutely save huge amounts of disk space. But whether it will actually load faster depends on many factors; you'll have to try it for yourself with real data.
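If you want to experiment, compression can be layered over Storable's serialized bytes using the core IO::Compress modules. A minimal sketch (the data shape here is invented for illustration, not your real overlap structure):

```perl
use strict;
use warnings;
use Storable qw(freeze thaw);
use IO::Compress::Gzip qw(gzip $GzipError);
use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

# A stand-in structure: 100 [start, end] overlap pairs.
my $data = { overlaps => [ map { [ $_, $_ + 10 ] } 1 .. 100 ] };

# Serialize in memory, then gzip the frozen bytes.
my $frozen = freeze( $data );
gzip \$frozen => \my $compressed
    or die "gzip failed: $GzipError";

# Reverse the process: gunzip, then thaw.
gunzip \$compressed => \my $bytes
    or die "gunzip failed: $GunzipError";
my $restored = thaw( $bytes );
```

The same idea works against files on disk (gzip/gunzip accept filenames as well as scalar refs), so you can compress the nstore'd file directly; just remember that the decompression cost is paid on every load.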