in reply to Parsing a large 80GB .gz file and making an output file with specific columns from this original file.
Where is the bottleneck in your program? The parsing code you've posted could probably be optimized, but your problem may well be I/O- or compression-related (and that, too, is only a possibility).
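If profiling (below) shows the time going into gunzipping rather than parsing, one common remedy is to hand decompression to a separate gzip process, so it runs on its own core and perl only reads plain text from a pipe. A minimal sketch, not your actual program: it assumes a hypothetical input file big.gz, tab-delimited records, and that you want (say) the 2nd and 4th columns:

#!/usr/bin/perl
use strict;
use warnings;

my $file = 'big.gz';   # hypothetical input file name

# List-form pipe open: an external gzip does the decompression
# concurrently, so perl sees ordinary text lines.
open my $fh, '-|', 'gzip', '-dc', $file
    or die "Cannot start gzip for '$file': $!";

while (my $line = <$fh>) {
    chomp $line;
    # The limit of 5 stops split after the fields we care about.
    my @cols = split /\t/, $line, 5;
    print join("\t", @cols[1, 3]), "\n";   # 2nd and 4th columns
}
close $fh or warn "gzip pipe exited with status $?";

The third argument to split caps the number of fields produced, which saves real work on wide records when you only need the leading columns.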
You need to profile your code to find out which subs/blocks are consuming the most time. My preference is Devel::NYTProf, which can produce very detailed HTML reports. From its SYNOPSIS:
# profile code and write database to ./nytprof.out
perl -d:NYTProf some_perl.pl

# convert database into a set of html files, e.g., ./nytprof/index.html
# and open a web browser on the nytprof/index.html file
nytprofhtml --open
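The HTML report breaks time down per statement and per subroutine, which should settle the parse-vs-I/O question quickly. If startup and cleanup noise clutters the profile, the Devel::NYTProf docs also describe run-time control: start with profiling off and switch it on around the hot loop. A minimal sketch, assuming the read loop from your program:

# run with profiling initially disabled:
#   NYTPROF=start=no perl -d:NYTProf some_perl.pl
DB::enable_profile();    # begin profiling just before the parsing loop
while (my $line = <$fh>) {
    # ... parse $line and print the selected columns ...
}
DB::disable_profile();   # stop profiling before shutdown code runs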