in reply to Re: Read multiple text file from bz2 without extract first
in thread Read multiple text file from bz2 without extract first

For a huge bz2 file, which is more efficient in terms of speed and CPU usage: decompressing it to disk first, or reading it directly without extracting?

Re^3: Read multiple text file from bz2 without extract first
by mbethke (Hermit) on Mar 27, 2012 at 07:09 UTC

    To clarify, I guess bms might have been misunderstood:

    A bz2 file is not called an "archive" precisely because it cannot contain more than one file. bzip2 (like compress, gzip and lzma) can only compress a single file; the archiving of several files into such a compressed file is usually done with tar, which in turn cannot compress. This is different from programs like zip, lha or rar that do the archiving and compression all in one. The idea of the Unix-style approach is that any of the compressors can be used for things other than compressing archives (such as in a pipe, to compress network transfers), while when archiving you can combine tar with any of these compressors for different speed/compression tradeoffs.

    Now, do you have a tar.bz2 archive containing texts that you want to read, or are the texts individually compressed? I suppose it's the former, so you could use Archive::Tar, which transparently decompresses compressed tar archives and lets you read individual entries.
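
    For instance, a minimal sketch of that approach (the archive name texts.tar.bz2 is made up; Archive::Tar's iter() avoids loading the whole archive into memory, and its bzip2 support needs IO::Uncompress::Bunzip2, which is core in modern perls):

    use strict;
    use warnings;
    use Archive::Tar;

    # iter() hands back entries one at a time instead of slurping the
    # whole archive; the compression type is detected automatically.
    my $next = Archive::Tar->iter('texts.tar.bz2');
    while ( my $entry = $next->() ) {
        next unless $entry->is_file;
        my $text = $entry->get_content;    # entry's contents as one string
        printf "%s: %d bytes\n", $entry->name, length $text;
    }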

    Regarding efficiency, it depends. Unlike zip-style archives, which keep a table of contents at the end, a tar archive has to be read completely just to list its contents. If it contains two files of a gigabyte each, you have to decompress the full two gigs just to get the names, and then again to get the contents; in that case it might be worth decompressing to disk first. If you know the names, or know that you need everything anyway, decompressing on the fly will usually be faster. If the archive is not the British National Corpus or worse, it probably doesn't matter :)
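
    Even a bare listing pays that full decompression cost, since there is no index to consult. A quick sketch with Archive::Tar's list_archive class method, again with the made-up texts.tar.bz2:

    use Archive::Tar;

    # list_archive() decompresses and walks the entire archive just to
    # collect the entry names.
    print "$_\n" for Archive::Tar->list_archive('texts.tar.bz2');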

      Thanks for clarifying, I got a bit swirly there. Was thinking of something else.

      So, say I have test.bz2 containing test.txt, which is 1 GB in size. Does extracting test.txt to disk and then processing it take the same amount of time as reading its contents directly without extraction?

        Why don't you compare the times yourself? The answer depends on which is faster: decompressing and reading (CPU-bound), or decompressing, writing, and then reading again (I/O-bound). It also depends on whether you need to process the file more than once.

        From Perl, you can decompress and read directly by using a pipe-open:

        open my $fh, "bzip2 -cd $file |" or die "Couldn't open '$file': $!";

        That is efficient if you only need to read the data once. If you need to read it more than once and have the disk space needed, decompressing once and then reading the decompressed file is likely faster.
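
        If you take that route, the one-time decompression can also be done from within Perl; a sketch using the core module IO::Uncompress::Bunzip2 and the test.bz2/test.txt names from above:

        use strict;
        use warnings;
        use IO::Uncompress::Bunzip2 qw(bunzip2 $Bunzip2Error);

        # Decompress once to disk ...
        bunzip2 'test.bz2' => 'test.txt'
            or die "bunzip2 failed: $Bunzip2Error";

        # ... then read the plain file as often as needed.
        open my $fh, '<', 'test.txt' or die "Couldn't open 'test.txt': $!";
        while ( my $line = <$fh> ) {
            # process $line here
        }
        close $fh;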

        I would think that decompressing while reading would be faster, but as Corion said, it depends. Usually bzip2 compresses text files very well, so the I/O load is much lower if you don't write the decompressed text back to disk. If, however, you need to read the file several times or seek around in it, it may be worth writing it to disk. A gigabyte of text on a modern machine has a good chance of staying largely in the file system cache, so reading it again runs mostly at RAM speed.
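
        If in doubt, measure. A rough sketch of such a comparison, assuming test.bz2 has already been extracted to test.txt and using a plain line count to stand in for your processing:

        use strict;
        use warnings;
        use Time::HiRes qw(gettimeofday tv_interval);

        # Pass 1: decompress on the fly, never touching the disk.
        my $t0 = [gettimeofday];
        open my $bz, "bzip2 -cd test.bz2 |" or die "Couldn't run bzip2: $!";
        my $n = 0;
        $n++ while <$bz>;
        close $bz;
        printf "streamed:  %d lines, %.2fs\n", $n, tv_interval($t0);

        # Pass 2: read the already-extracted copy.
        $t0 = [gettimeofday];
        open my $fh, '<', 'test.txt' or die "Couldn't open 'test.txt': $!";
        $n = 0;
        $n++ while <$fh>;
        close $fh;
        printf "extracted: %d lines, %.2fs\n", $n, tv_interval($t0);

        Running each pass twice will also show the file system cache effect mentioned above.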