Gives:

    FILE: for my $file (@files) {
        my ($stamp) = $file =~ /dump\.(.*)/;
        next FILE if $stamp > ($now - $start);
        $numfiles++;
        $/ = "\nMcDarren\t";
        open IN, '<', "$dir/$file" or die "Ack!: $!";
        my $data = <IN>;
        $/ = "\n";
        $data = <IN>;
        my ($level) = (split /\t/, $data)[2];
        next FILE if $level <= $currlevel;
        print "$file - $level\n";
        print "Processed $numfiles files (total files: $totalfiles)\n";
        exit;
    }
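The $/ trick in the loop is worth spelling out: setting Perl's input record separator to a marker string makes the first read consume everything up to and including that marker, so the very next line-mode read returns exactly the record you want, without scanning the file line by line yourself. A minimal, self-contained sketch of the same idea (the file name and contents here are invented for illustration, not the real dump format):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Fake dump file: a couple of header lines, then the "McDarren" record.
my $file = "/tmp/dump.demo.$$";
open my $out, '>', $file or die "Ack!: $!";
print $out "header line\nsomething else\nMcDarren\t100\tfoo\t71\tbar\n";
close $out;

my $level;
{
    open my $in, '<', $file or die "Ack!: $!";
    local $/ = "\nMcDarren\t";          # first read swallows everything
    my $skip = <$in>;                   #   up to and including the marker
    local $/ = "\n";                    # back to normal line mode
    my $data = <$in>;                   # the record after the marker
    ($level) = (split /\t/, $data)[2];  # third tab-separated field
    close $in;
}
print "level: $level\n";                # prints "level: 71"
unlink $file;
```

The win over a line-by-line loop is that Perl does the marker search in C inside the readline call, which is why this style of scan holds up well over tens of thousands of files.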
Note that I needed to make it skip the most recent 2 days of files, as the game was restored yesterday.

    $ time perl ysth.pl
    dump.1167332700 - 71
    Processed 8764 files (total files:58154)
        39.55 real         2.35 user          0.80 sys
Regarding the xargs/egrep solution: that also works (with a bit of minor tweaking), but it isn't any faster than any of the Perl solutions.
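For the record, the xargs/grep variant looked roughly like the sketch below. The directory, marker line, and field positions are assumptions for illustration, not the exact command run against the real dumps:

```shell
# Scratch directory with two fake dump files (contents invented).
dir=$(mktemp -d)
printf 'header\nMcDarren\t100\tfoo\t71\tbar\n' > "$dir/dump.1167332700"
printf 'header\nMcDarren\t100\tfoo\t30\tbar\n' > "$dir/dump.1167246300"

# Pull the McDarren line out of each file (grep -H forces the filename
# prefix), then let awk keep only files whose level field - the 4th
# tab-separated field in this fake layout - beats the current level.
hits=$(ls "$dir"/dump.* | xargs grep -H '^McDarren' \
  | awk -F'\t' '$4 > 70 { sub(/:McDarren$/, "", $1); print $1 }')
echo "$hits"
rm -r "$dir"
```

Both this and the Perl versions end up I/O-bound on ~58k files, which is presumably why the pipeline buys no extra speed.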
Thanks,
Darren :)
In reply to Re^2: Optimising a search of several thousand files
by McDarren
in thread Optimising a search of several thousand files
by McDarren