The harddrive I/O for sure is an issue, but it shouldn't cause the script to be slower. If anything, at least it should be able to perform as well.
Sorry, but that simply is not the case.
The program below reads two large files:
02/02/2015 15:42 10,737,418,241 big.csv
02/02/2015 15:47 12,300,000,001 big.tsv
It reads them first concurrently and then sequentially. The timings after the __END__ token show that each file takes roughly 5 times longer to read concurrently than it does sequentially.
#! perl -slw
use strict;
use threads;
use Time::HiRes qw[ sleep time ];

# Count the lines in $file, waiting until the agreed $start time so that
# all worker threads hit the disk at the same instant.
sub worker {
    my( $file, $start ) = @_;
    open my $in, '<', $file or die $!;
    sleep 0.0001 while time() < $start;    # nap until the common start time
    my $count = 0;
    ++$count while <$in>;
    my $stop = time;
    return sprintf "$file:[%u] %.9f", $count, $stop - $start;
}

# Concurrent pass: one thread per file, all released together.
my $start = time + 1;
my @workers = map threads->create( \&worker, $_, $start ), @ARGV;
print $_->join for @workers;

# Sequential pass: read the same files one after the other.
for my $file ( @ARGV ) {
    open my $in, '<', $file or die $!;
    my( $start, $count ) = ( time(), 0 );
    ++$count while <$in>;
    printf "$file:[%u] %.9f\n", $count, time() - $start;
}
__END__
[15:49:22.32] E:\test>c:piotest.pl big.csv big.tsv
big.csv:[167772161] 407.047676086
big.tsv:[100000001] 417.717574120
big.csv:[167772161] 82.103285074
big.tsv:[100000001] 81.984734058
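
For a rough sense of the aggregate cost, here is a back-of-envelope throughput calculation (a sketch only, using the file sizes from the DIR listing and the timings above; MB here means 10^6 bytes):

#! perl -slw
use strict;

# Byte counts taken from the DIR listing above.
my %bytes = ( 'big.csv' => 10_737_418_241, 'big.tsv' => 12_300_000_001 );
my $total = $bytes{'big.csv'} + $bytes{'big.tsv'};

# Concurrent: both reads finish within the slower thread's ~417.7s wall time.
# Sequential: the two passes run back to back, 82.1s + 82.0s.
printf "concurrent: %.0f MB/s  sequential: %.0f MB/s\n",
    $total / 417.7 / 1e6, $total / ( 82.1 + 82.0 ) / 1e6;

That works out to roughly 55 MB/s aggregate when the two threads are interleaving reads from the same drive, versus roughly 140 MB/s when the reads run one after the other.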
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.