in reply to Large file efficiency
Leaving aside why you are loading the entire file into memory, some algorithms do require that.
Based purely upon observation of my system's behaviour:
With method 1, the file's entire contents are first placed on Perl's stack. Then the array is allocated and the data is copied into it. Then the stack is freed.
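
For reference, a minimal sketch of what I take method 1 to be (the filename big.txt is hypothetical): a single list assignment that slurps every line at once, which is what puts the whole file on the stack:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Method 1 (assumed): slurp all lines in list context.
    # The whole file passes through Perl's stack before the
    # array is allocated and filled.
    open my $fh, '<', 'big.txt' or die "open big.txt: $!";
    my @lines = <$fh>;
    close $fh;
    print scalar(@lines), " lines loaded\n";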
With method 2, the stack only ever holds one line at a time. The array will still be grown in stages, with some copying required as it is reallocated, but ultimately it uses far less memory.
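
And a sketch of what I take method 2 to be: reading line by line and pushing onto the array, so only the current line ever sits on the stack:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Method 2 (assumed): read one line at a time and push it.
    # Only a single line is ever on the stack; the array grows
    # incrementally as lines are appended.
    open my $fh, '<', 'big.txt' or die "open big.txt: $!";
    my @lines;
    while ( my $line = <$fh> ) {
        push @lines, $line;
    }
    close $fh;
    print scalar(@lines), " lines loaded\n";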
The upshot on my system is that loading a 1 million line / 10 MB file using method 1 requires nearly 9 seconds and 125 MB of RAM, whereas using method 2 requires under 1.5 seconds and 47 MB of RAM.
Not definitive, and if your algorithm requires loading the whole file it's worth running a simple test on your own system for confirmation (see the sketch below), but it seems to me the latter method has no downsides.
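
A rough timing harness along these lines would do for that test; Time::HiRes is a core module, and peak memory is easiest checked externally (e.g. with top). The default filename is again hypothetical:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $file = shift // 'big.txt';    # hypothetical test file

    my $t0 = time;
    open my $fh, '<', $file or die "open $file: $!";
    my @lines = <$fh>;                # method 1; swap in the while/push loop to time method 2
    close $fh;
    printf "%d lines in %.2f seconds\n", scalar @lines, time - $t0;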