The former builds a frelling huge temporary list on the stack and then copies it into @array in one swell foop, while the latter pushes one line at a time.
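As a minimal sketch of the two read styles (using an in-memory filehandle and made-up data in place of a real file):

```perl
use strict;
use warnings;

# Hypothetical stand-in for a file on disk.
my $data = "line1\nline2\nline3\n";

# List context: <$fh> returns every line at once, building one big
# temporary list before the assignment copies it into the array.
open my $fh, '<', \$data or die $!;
my @slurped = <$fh>;
close $fh;

# Scalar context: one line per iteration, pushed as it's read,
# so no giant intermediate list is ever built.
open $fh, '<', \$data or die $!;
my @pushed;
while ( my $line = <$fh> ) {
    push @pushed, $line;
}
close $fh;
```

Both arrays end up with identical contents; the difference is only in the peak memory used along the way.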
Of course a better question might be: why are you loading an entire 1.6G file into RAM at all? It's more efficient to process things a line at a time (or maybe a record at a time, depending on the structure) if at all possible, rather than slurping it all in at once. You might also consider writing the data to a Berkeley DB file or an RDBMS and then processing against that instead.
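A rough sketch of the line-at-a-time approach, again with hypothetical in-memory data standing in for the big file; only one line is held in memory at any moment, no matter how large the input is:

```perl
use strict;
use warnings;

# Stand-in for the 1.6G file: one numeric value per line.
my $data = "3\n1\n4\n";
open my $fh, '<', \$data or die $!;

# Accumulate whatever summary you actually need instead of
# keeping every line around.
my ( $count, $sum ) = ( 0, 0 );
while ( my $line = <$fh> ) {
    chomp $line;
    $count++;
    $sum += $line;
}
close $fh;

# $count is 3, $sum is 8
```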
But that would take more information about exactly what you're trying to do with your 1.6G.
In reply to Re: Large file efficiency by Fletch
in thread Large file efficiency by Anonymous Monk