PerlMonks |
A big text file, more than 2 million lines, needs to be parsed. The file has 11 fields, each of possibly variable size, separated by TABs:

<field01><TAB><field02><TAB>...<field11><NEWLINE>

After writing a small script to read the file line by line, I noticed that calling the split function on each line causes a huge time delay. split is a very good tool, but in this case I feel it is not the proper thing to use. How can I optimise reading a TAB-delimited file? Here is an example code snippet and some time measurements.
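The original snippet did not survive in this copy of the post; the following is a hedged reconstruction of the baseline loop it describes (read line by line, chomp, count). For the sake of a self-contained example it reads from an in-memory string rather than the real 2M-line file:

```perl
use strict;
use warnings;

# Stand-in for the real TAB-separated file (an assumption; the actual
# input in the post is a file of 2,620,024 lines with 11 fields each).
my $data = "a1\tb1\tc1\n" . "a2\tb2\tc2\n" . "a3\tb3\tc3\n";

# Perl can open a filehandle on a scalar reference, so the loop body
# is identical to reading from a disk file.
open my $fh, '<', \$data or die "open: $!";

my $count = 0;
while ( my $line = <$fh> ) {
    chomp $line;
    $count++;
}
close $fh;

print "# of lines: $count\n";    # prints "# of lines: 3"
```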
With the above code I measured the minimum time needed just to read the file line by line:

# of lines: 2620024
real    0m1.379s
user    0m1.162s
sys     0m0.189s

Now if I add a split to parse each field into a variable, I get a substantial delay. The following line is added after the chomp:
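The added line itself is also missing from this copy of the post; presumably it was a split on TAB such as the one below (the variable names are an assumption):

```perl
use strict;
use warnings;

# A representative 11-field TAB-separated record (made-up data).
my $line = join "\t", map { "field$_" } 1 .. 11;

# The kind of per-line split the post describes adding after chomp.
my @fields = split /\t/, $line;

print scalar(@fields), "\n";    # prints 11
```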
After running the above code with the split added, on the same file, I got:

# of lines: 2620024
real    0m9.501s
user    0m9.260s
sys     0m0.239s

So parsing the line with split adds more than 8 seconds of execution time, which is quite a lot. Do you have any suggestions for optimising this as much as possible? The code is supposed to run over 200+ such files, so the 8 seconds that split adds really make a difference. Thank you in advance for any suggestions!

Similar discussion: http://arstechnica.com/civis/viewtopic.php?f=20&t=378424

In reply to Optimise file line by line parsing, substitute SPLIT by go9kata
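For anyone timing candidate replacements, the core Benchmark module's cmpthese can compare split against alternatives on a representative record. The sketch below (not the poster's code; the index/substr scan is just one commonly tried alternative) first checks that both approaches yield the same fields, then times them:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# A representative 11-field TAB-separated record (made-up data).
my $line = join "\t", map { "field$_" } 1 .. 11;

# Parse once with each approach and verify they agree before timing.
my @by_split = split /\t/, $line;

my @by_index;
my $prev = 0;
while ( ( my $pos = index( $line, "\t", $prev ) ) >= 0 ) {
    push @by_index, substr( $line, $prev, $pos - $prev );
    $prev = $pos + 1;
}
push @by_index, substr( $line, $prev );

die "field mismatch" unless "@by_split" eq "@by_index";

# Run each parser 50,000 times and print a comparison table.
cmpthese(
    50_000,
    {
        split_tab  => sub { my @f = split /\t/, $line },
        index_scan => sub {
            my @f;
            my $p = 0;
            while ( ( my $i = index( $line, "\t", $p ) ) >= 0 ) {
                push @f, substr( $line, $p, $i - $p );
                $p = $i + 1;
            }
            push @f, substr( $line, $p );
        },
    }
);
```

Whether the manual scan actually beats split depends on the Perl version and the data, which is exactly what the table shows.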