Plus, the line @file = <FILE>; slurps the entire file into RAM. A 22MB file can easily balloon to 100MB - 200MB in memory once Perl's per-line scalar overhead is counted. Add the overhead of Perl itself and anything else you're doing, and you could be using 300MB+ of RAM. On a smaller machine, you could be swapping memory like crazy, which will kill your program's runtime. (It might even kill the machine.)
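As a rough sketch of the memory-friendly alternative (the data and filehandle here are stand-ins, not the original poster's code): read one line at a time with a while loop instead of slurping, so memory use stays flat no matter how big the file gets.

```perl
use strict;
use warnings;

# Slurping ( my @file = <$fh>; ) holds every line in RAM at once.
# Reading in a while loop keeps only the current line in memory.
my $count = 0;
while ( my $line = <DATA> ) {
    chomp $line;
    $count++;    # stand-in for whatever per-line work you need
}
print "$count lines processed\n";    # prints "3 lines processed"

__DATA__
alpha
beta
gamma
```

The same pattern works with open my $fh, '<', $filename in place of the DATA filehandle.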
A much better idea would be to pre-process the file, perhaps by loading it into a database, and then do your work against that. Another idea would be to sort the file first; once the records are in order, you can short-circuit your processing and stop reading as soon as you've passed the record you want.
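Here is a minimal sketch of that short-circuit idea, assuming a hypothetical file of records sorted by a numeric ID: because the input is sorted, the loop can bail out the moment the IDs pass the one being searched for, instead of scanning to the end.

```perl
use strict;
use warnings;

# Hypothetical sorted input: "ID value" records, one per line.
my $target = 2;
my $found;
while ( my $line = <DATA> ) {
    chomp $line;
    my ( $id, $value ) = split ' ', $line;
    if ( $id == $target ) { $found = $value; last; }
    last if $id > $target;    # sorted, so nothing later can match
}
print defined $found ? "found: $found\n" : "not found\n";
# prints "found: two"

__DATA__
1 one
2 two
3 three
```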
The important thing is that you have a program that works. Optimizing for speed once you have accuracy down is good. Optimizing for accuracy once you have speed down is ... harder.
Being right does not endow the right to be rude; politeness costs nothing.
Being unknowing is not the same as being stupid.
Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.
In reply to Re: inefficient code? works on small files ok but not larger ones
by dragonchild
in thread inefficient code? works on small files ok but not larger ones
by Anonymous Monk