What is actually slower: checking each whitespace on a line for both conditions, or parsing the entire line a third time? I guess it depends on how long the line is compared to how many whitespace characters there are. If you had a standard text file where each line started at the margin and ended with a CR/LF or just LF, I would say that parsing the line a third time would be slower than checking each whitespace against both conditions, but this is just off the cuff. I have no benchmarks to prove it and am really just asking in the first place.
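To be concrete about what I mean, the two alternatives look something like this (just a sketch on a placeholder $line variable):

# one pass, where each whitespace run is checked against both anchors
$line =~ s/^\s+|\s+$//g;

# versus separate passes for the leading and the trailing end
$line =~ s/^\s+//;
$line =~ s/\s+$//;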
Edit:
OK, it appears that japhy's suggestion is considerably faster. I would not have thought it would make that much of a difference, but the gap is substantial.
Time taken on brian 42 wallclock secs (41.70 usr 0.23 sys + 0.00 cusr 0.00 csys = 41.93 CPU) seconds
Time taken on japhy 17 wallclock secs (16.52 usr 0.21 sys + 0.00 cusr 0.00 csys = 16.73 CPU) seconds
Here is the code I used to benchmark both these methods.
#!/usr/bin/perl -w
use strict;
use Benchmark;
my $file = shift;
my ($start, $end);
$start = new Benchmark;
brian($file);
$end = new Benchmark;
calc($start, $end, 'brian');
$start = new Benchmark;
japhy($file);
$end = new Benchmark;
calc($start, $end, 'japhy');
sub calc {
    my ($start, $end, $test) = @_;
    my $diff = timediff($end, $start);
    print "Time taken on ", $test, " ", timestr($diff, 'all'), " seconds\n";
}
sub brian {
    my $file = shift;
    open(my $in, '<', $file) or die "error: open $file: $!";
    for (1..1000) {
        seek($in, 0, 0);    # rewind to the start of the file for each pass
        # strip comments, then trim leading and trailing whitespace in one
        # combined substitution, keeping only the non-empty lines
        my @lines = grep length(), map { s/#.*//; s/^\s+|\s+$//g; $_ } <$in>;
    }
    close($in);
}
sub japhy {
    my $file = shift;
    open(my $in, '<', $file) or die "error: open $file: $!";
    for (1..1000) {
        seek($in, 0, 0);    # rewind to the start of the file for each pass
        # strip comments, then trim the leading and trailing whitespace in
        # separate substitutions, keeping only the non-empty lines
        my @lines = grep length(), map { s/#.*//; s/^\s+//g; s/\s$//g; $_ } <$in>;
    }
    close($in);
}
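The script takes the test file as its only argument (shifted off @ARGV), so it is run as something like perl bench.pl somefile; the script and file names here are just examples.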