http://qs1969.pair.com?node_id=1036737

go9kata has asked for the wisdom of the Perl Monks concerning the following question:

A big text file, more than 2 million lines, needs to be parsed. Each line has 11 fields; the fields vary in size and are TAB-separated:

<field01><TAB><field02><TAB>...<field11><NEWLINE>

After writing a small script to read the file line by line, I noticed that calling split on each line causes a huge time delay. split is a very good tool, but in this case I feel it is not the proper thing to use. How can I optimise the reading of a TAB-delimited file?

Here is an example code snippet and some time measurements.

#!/usr/bin/perl
use warnings;
use strict;

my $file = "/Path/to/file/file_X.txt";
my $line_count = 0;

open(my $fh, '<', $file) or die("Cannot open file $file to read!\n");
while ( !eof($fh) ) {
    # read one line
    my $qry_line = <$fh>;
    $line_count++;
    # remove the newline character
    chomp($qry_line);
}
close($fh);

print "\n# of lines: " . $line_count . "\n";

With the above code I measure the baseline: the minimum time needed just to read the file line by line. It is:

# of lines: 2620024

real    0m1.379s
user    0m1.162s
sys     0m0.189s
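As an aside, the read can also be folded into the while condition; Perl implicitly wraps such an assignment in defined(), so the loop still stops correctly at end of file. A minimal equivalent of the loop above:

while (my $qry_line = <$fh>) {
    $line_count++;
    chomp($qry_line);
}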

Now, if I add a split call to parse each field into its own variable, I get a substantial delay. The following line is added after the chomp call:

# split the line into fields
my ($a, $b, $c, $d, $e, $f, $g, $h, $i, $j, $k) = split("\t", $qry_line, 11);

After running the above code with the split call on the same file, I got:

# of lines: 2620024

real    0m9.501s
user    0m9.260s
sys     0m0.239s

So parsing each line with split adds more than 8 seconds of execution time, which is quite a lot. Do you have any suggestions for optimising this as much as possible? The above code is supposed to run over 200+ such files, so the 8 seconds that split adds make a real difference. Thank you in advance for any suggestions!
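For example (a rough sketch only, not benchmarked, and the field positions are arbitrary examples): if only a few of the 11 fields are actually needed, split can be given a smaller limit so it stops early, or index/substr can extract leading fields without splitting at all:

# Sketch, assuming only fields 1 and 5 are needed (arbitrary choice).
# A limit of 6 makes split stop after producing 6 pieces, so the
# tabs between fields 6..11 are never split out individually:
my ($field01, $field05) = (split /\t/, $qry_line, 6)[0, 4];

# Or, if only the first field matters, avoid split entirely:
my $first_tab   = index($qry_line, "\t");
my $first_field = substr($qry_line, 0, $first_tab);

Would something along those lines be the right direction, or is there a better approach?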

Similar discussion: http://arstechnica.com/civis/viewtopic.php?f=20&t=378424