Hi all,
I recently came across a speed problem while reading big files (around 1 million lines). The whole process takes about 100 seconds, and I'd like to know whether I can speed it up.
I'm using the code below, where:
- mergeLogs is used to retrieve only the relevant lines (76 sec)
- filterLog is used to filter lines (23 sec)
- openLogFile returns a handle on the file (0.2 sec)
- The map/sort/map combo is used to sort the lines by time
Is there any way to speed up the process?
sub mergeLogs {
    my $day   = shift;
    my @files = @_;
    my @lines;

    foreach my $file (@files) {
        my $fh = &openLogFile($file);
        if (! defined $fh) {
            warn "$0: ignoring file $file\n";
            next;
        }
        warn "-> processing $file\n" if $opts{'verbose'} > 0;

        # Keep only the interesting lines, then clean them up.
        push @lines, map  { &filterLog($_) }
                     grep { /Running|Dump|FromCB|Update/o } <$fh>;
        close $fh;
    }

    # Sort by timestamp (HH:MM:SS.mmm) via a map/sort/map.
    @lines = map  { $_->[0] }
             sort { $a->[1] cmp $b->[1] }
             map  { [ $_, /(\d{2}:\d{2}:\d{2}\.\d{3})/o ] }
             @lines;

    return \@lines;
}

sub filterLog {
    my $line = shift;

    # Normalise whitespace and strip noise from the line.
    $line =~ s/ {2,}/ /g;
    $line =~ s/^((?:\S+ ){3}).+?\[?I\]?:/$1/;
    $line =~ s/ ?: / /g;
    $line =~ s/ ACK (\w) / $1 ACK /;

    # Skip lines that don't match the requested day or user.
    return unless ! exists $opts{'day'}  || /^$opts{'day'}/o;
    return unless ! exists $opts{'user'} || /[\(\[]\s*(?:$opts{'user'})/o;

    # Skip lines outside the requested time window.
    if (exists $opts{'start-time'} || exists $opts{'stop-time'}) {
        if ($line =~ /(\d{2}:\d{2}:\d{2}\.\d{3})/o) {
            return unless ! exists $opts{'start-time'} || $opts{'start-time'} lt $1;
            return unless ! exists $opts{'stop-time'}  || $opts{'stop-time'}  gt $1;
        }
    }

    warn $line if $opts{'verbose'} > 3;
    return $line;
}
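For context, the map/sort/map combo at the end of mergeLogs is just a Schwartzian transform keyed on the timestamp, so the key is extracted once per line instead of once per comparison. Here is a simplified, standalone sketch of that step alone (with made-up sample lines; unlike the real code it falls back to an empty key when no timestamp is found):

use strict;
use warnings;

my @lines = (
    "foo 10:15:02.500 Update ...\n",
    "bar 09:59:58.001 Dump ...\n",
);

my @sorted = map  { $_->[0] }                                         # 3. drop the key, keep the line
             sort { $a->[1] cmp $b->[1] }                             # 2. string-compare the timestamps
             map  { [ $_, /(\d{2}:\d{2}:\d{2}\.\d{3})/ ? $1 : '' ] }  # 1. extract the key once
             @lines;

print @sorted;   # prints the "bar" line, then the "foo" line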