Er, hi.
It is true that I will weep with joy when the day arrives that I can answer more PerlMonk questions than I post. I'll gnash teeth, rend clothes, and some real Bible-style celebrating will go on. But alas, that day is not today. So...
I've seen benchmarks that indicate line-at-a-time file reading is not as efficient as slurping up the whole dad-gum thing into an array and stepping through it... depending on file size. Not hard to believe. So does anyone care to venture a guess (or happen to know) at approximately what file size method A will become more efficient (and quicker, naturally) than method B? Or, am I dead wrong, and one method is ALWAYS more efficient?
The idea is to find a listing in a newline-delimited file of colon-separated person:wisdom pairs, pretty standard stuff, really (forgive the sloppy example code):
# METHOD A:
# slurp the whole file into an array of lines, then step through it
$him = "";
$enlightenment = "joy";
$trying = "oops";
open(TEST, "test.txt") or die $trying;
@fileContents = <TEST>;   # list context reads every line at once
close(TEST);
foreach (@fileContents) {
    chomp;                # drop the trailing newline so the eq test can match
    my ($person, $wisdom) = split /:/, $_, 2;
    if ($wisdom eq $enlightenment) {
        $him = $person;
        last;
    }
}
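There's also a third variant I've toyed with: undef $/ so <TEST> returns the whole file as ONE scalar, then split it on newlines yourself. A rough, untested sketch, using the same file and variables as above:
# METHOD A2 (sketch): slurp into one scalar, then split on newlines
open(TEST, "test.txt") or die $trying;
my $wholeFile;
{
    local $/;             # undef $/ makes <TEST> return the entire file
    $wholeFile = <TEST>;
}
close(TEST);
foreach (split /\n/, $wholeFile) {
    # no chomp needed; the split already ate the newlines
    my ($person, $wisdom) = split /:/, $_, 2;
    if ($wisdom eq $enlightenment) {
        $him = $person;
        last;
    }
}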
# METHOD B:
# read and test one line at a time
open(TEST, "test.txt") or die $trying;
while (<TEST>) {          # $/ is "\n" by default, no need to set it
    chomp;                # drop the trailing newline here too
    my ($person, $wisdom) = split /:/, $_, 2;
    if ($wisdom eq $enlightenment) {
        $him = $person;
        last;
    }
}
close(TEST);
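I suppose rather than asking you monks to guess, I could measure the crossover myself with the core Benchmark module. Something like this rough sketch (the sub names, file, and iteration count are placeholders I made up), run against test files of increasing size, ought to show where one method overtakes the other:
# Rough Benchmark sketch: wrap each method in a sub, then compare.
# "test.txt", the count of 500, and the sub names are arbitrary.
use Benchmark qw(cmpthese);

$enlightenment = "joy";

sub method_a {                    # slurp into an array, then scan
    open(TEST, "test.txt") or die "oops";
    my @lines = <TEST>;
    close(TEST);
    foreach (@lines) {
        chomp;
        my ($person, $wisdom) = split /:/, $_, 2;
        return $person if $wisdom eq $enlightenment;
    }
    return "";
}

sub method_b {                    # read and scan one line at a time
    open(TEST, "test.txt") or die "oops";
    while (<TEST>) {
        chomp;
        my ($person, $wisdom) = split /:/, $_, 2;
        if ($wisdom eq $enlightenment) {
            close(TEST);
            return $person;
        }
    }
    close(TEST);
    return "";
}

cmpthese(500, { methodA => \&method_a, methodB => \&method_b });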
Thanks for any info you've got.
Alan "Hot Pastrami" Bellows
-Sitting calmly with scissors-