"Significantly faster" here is on the order of a few tens of seconds saved if the script is run several times a day over a long period, or about half an hour if it is run just once. For almost all practical use cases the difference you demonstrate is just that: trivial. A little extra juice, maybe useful, can be squeezed out by stopping the split early rather than just slicing the result, which avoids copying a few extra list elements:
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $kFName = 'delme.txt';

test();

sub test {
    my $entry = 'a' x 18;

    open my $fOut, '>', $kFName or die "Can't create $kFName: $!\n";
    print $fOut "$entry\t" x 19, "\n" for 1 .. 10000;
    close $fOut;

    cmpthese(
        -5,
        {
            splitAll   => sub {splitAll()},
            splitLimit => sub {splitLimit()},
            splitSlice => sub {splitSlice()},
        }
    );
}

sub splitAll {
    open my $fIn, '<', $kFName or die "Can't open $kFName: $!\n";
    while (<$fIn>) {
        my @columns = split /\t/;
    }
    close $fIn;
}

sub splitSlice {
    open my $fIn, '<', $kFName or die "Can't open $kFName: $!\n";
    while (<$fIn>) {
        my @columns = (split /\t/)[1 .. 2];
    }
    close $fIn;
}

sub splitLimit {
    open my $fIn, '<', $kFName or die "Can't open $kFName: $!\n";
    while (<$fIn>) {
        my @columns = (split /\t/, $_, 4)[1 .. 2];
    }
    close $fIn;
}
Prints:
             Rate   splitAll splitSlice splitLimit
splitAll   5.60/s         --       -36%       -73%
splitSlice 8.75/s        56%         --       -59%
splitLimit 21.1/s       276%       141%         --
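The splitLimit win comes from split's third argument: a limit of 4 tells split to stop after producing four fields, so the rest of the line is never scanned or copied. A minimal illustration of the semantics, using a short literal line in place of the benchmark data:

```perl
use strict;
use warnings;

my $line = "a\tb\tc\td\te\tf";

# Without a limit, split scans the whole line and builds all six fields.
my @all = split /\t/, $line;        # ('a', 'b', 'c', 'd', 'e', 'f')

# With a limit of 4, split stops early; the fourth field keeps the remainder.
my @some = split /\t/, $line, 4;    # ('a', 'b', 'c', "d\te\tf")

# Slicing [1 .. 2] then keeps only the wanted columns.
my @wanted = (split /\t/, $line, 4)[1 .. 2];

print "@wanted\n";    # prints "b c"
```

Note that the limit must be at least one greater than the index of the last column you want, otherwise the final wanted column would be fused with the unsplit remainder of the line.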
However, even the worst performing variant is still so fast that it is simply not worth worrying about, even if you were running it several thousand times a day every day of the year. And none of these solutions is actually useful for parsing CSV. To do that in a reasonably robust way you should really use something like Text::CSV, which is about ten times slower than any of the benchmarked solutions, but has the huge advantage that it may actually give correct results for anything other than the trivial test data used by this test.
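To show the kind of input that defeats a plain split, here is a sketch of the Text::CSV approach (Text::CSV is a CPAN module, not core Perl; the in-memory sample line stands in for a real input file, and the quoted field containing an embedded tab is exactly what split /\t/ would mangle):

```perl
use strict;
use warnings;
use Text::CSV;    # CPAN module; install from CPAN if not present

my $csv = Text::CSV->new({ sep_char => "\t", binary => 1 })
    or die "Can't create Text::CSV parser\n";

# A quoted field with an embedded tab: one logical column, two "split" columns.
my $data = qq{a\t"b\tstill b"\tc\n};
open my $fIn, '<', \$data or die "Can't open in-memory handle: $!\n";

my $row = $csv->getline($fIn);    # arrayref of correctly parsed fields
close $fIn;

# The same [1 .. 2] slice as the split variants, but field 1 survives intact.
my @columns = @{$row}[1 .. 2];
```

With real files you would loop with `while (my $row = $csv->getline($fIn)) {...}` over a filehandle opened on the file, just as in the split variants above.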
True laziness is hard work