Here I want to clarify my question: can anyone help me make this code run quickly?
I've tried a plethora of techniques, and nothing seems to work. I have 27 folders, each containing 12 files that I want to process (600-2500 KB each, most around 1500 KB). I've stripped the code down to just reading in each file and running a few lines of code (details at the aforementioned node).
(I've read the file into @lines)
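For reference, the read looks something like this minimal sketch ($path stands in for the actual filename, and the plain '<' open mode is an assumption):

open my $fh, '<', $path or die "Can't open $path: $!";
my @lines = <$fh>;    # slurp the whole file into memory at once
close $fh;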
for my $curline (@lines) {
    # Look at the last non-whitespace character by reversing the line;
    # skip lines that don't end in 0 or 5.
    next unless (reverse $curline) =~ /^\s*([05])/;
    $zeroat[$i++] = $ln_num if $1 == 0;
} continue {
    $ln_num++;    # count every line, not just the matching ones
}

for my $i (@zeroat) {
    # First tab-separated field is a decimal number; the line ends in 0 or 5.
    # Guard on the match so $1/$2 can't carry stale captures from a failed match.
    next unless $lines[$i] =~ /^([0-9]+\.?[0-9]*)\t.*([05])\s*$/;
    if ($1 > 0.5 && $2 == 0) {
        splice @lines, $i, 1;                               # drop this line
        @lines1 = @lines;
        @lines2 = splice @lines1, $i, $#lines1 - $i + 1;    # split off the tail
    }
}
The code aside, though, it seems that just reading in these files takes too long. What can I do? Do I need a faster computer, should I read line by line (see the sketch below), or can the code be optimized further?
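For concreteness, this is what I mean by reading line by line; a minimal sketch, assuming each line can be handled on its own ($path and process_line() are placeholders, not my real code):

open my $fh, '<', $path or die "Can't open $path: $!";
while (my $line = <$fh>) {
    chomp $line;
    process_line($line);    # hypothetical per-line processing
}
close $fh;

This keeps only one line in memory at a time instead of holding the whole file in @lines.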
In reply to Processing large files many times over by dimmesdale