If you have at least one idle CPU, using open my $fh, "zcat $file |" lets you spread the load across two CPUs: one for the perl process and one for the zcat process. That way, all the CPU cycles needed to decompress the data are offloaded from perl. This should be at least as fast as using PerlIO::via::gzip, if not faster, since transferring the data between processes is usually quite cheap.
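
As a minimal sketch (assuming the compressed file's path is in $file and that zcat is on your PATH), the three-argument list form of open reads zcat's decompressed output while avoiding shell quoting problems with odd filenames:

    use strict;
    use warnings;

    my $file = 'big_log.gz';    # hypothetical example path

    # Start zcat as a child process and read its decompressed output.
    # The list form bypasses the shell, so filenames with spaces or
    # metacharacters are handled safely.
    open my $zcat, '-|', 'zcat', $file
        or die "Could not start zcat for '$file': $!";

    while ( my $line = <$zcat> ) {
        # ... process each decompressed line here ...
    }

    close $zcat
        or warn "zcat reported a problem for '$file': ", $! || "exit status $?";

The two-argument form from the post, open my $fh, "zcat $file |", works just as well, as long as $file contains no shell metacharacters.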
In reply to Re^2: Seeking through a large gzipped file by Corion
in thread Seeking through a large gzipped file by roysperlarnab