in reply to Re: Optimising a search of several thousand files
in thread Optimising a search of several thousand files

I gave that a go.
    FILE: for my $file (@files) {
        my ($stamp) = $file =~ /dump\.(.*)/;
        next FILE if $stamp > ($now - $start);  # skip files newer than the cutoff
        $numfiles++;
        $/ = "\nMcDarren\t";                    # read up to (and including) the player's name
        open IN, '<', "$dir/$file" or die "Ack!:$!";
        my $data = <IN>;                        # discard everything before the record
        $/ = "\n";
        $data = <IN>;                           # the remainder of the player's line
        my ($level) = (split /\t/, $data)[2];
        next FILE if $level <= $currlevel;
        print "$file - $level\n";
        print "Processed $numfiles files (total files:$totalfiles)\n";
        exit;
    }
Gives:
    $ time perl ysth.pl
    dump.1167332700 - 71
    Processed 8764 files (total files:58154)
        39.55 real         2.35 user         0.80 sys
Note that I needed to make it skip the most recent two days of files, as the game was restored yesterday.
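The trick in the script above is setting $/ so the first read jumps straight to the player's record instead of looping over every line. Here's a minimal Python sketch of the same idea; the file layout (tab-separated record after the name) and the helper name are assumptions for illustration, not taken from the thread:

```python
def level_for_player(path, player="McDarren"):
    """Find the player's level in one dump file, mimicking the Perl
    $/ = "\\nMcDarren\\t" record-separator trick: locate the record
    directly, then split the rest of that line on tabs."""
    with open(path) as fh:
        text = fh.read()
    marker = "\n" + player + "\t"
    start = text.find(marker)
    if start == -1:
        return None                      # player not in this dump
    rest = text[start + len(marker):]
    record = rest.split("\n", 1)[0]      # remainder of the player's line
    fields = record.split("\t")
    return int(fields[2])                # same index as (split /\t/, $data)[2]
```

Like the Perl version, it still reads the whole file; the win is skipping the per-line bookkeeping, not the I/O.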

Regarding the xargs/egrep solution - that also works (with a bit of minor tweaking), but isn't any faster than any of the Perl solutions.
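That result makes sense: the grep-style approach still has to read every byte of every file. A rough Python sketch of what the egrep pipeline is doing (glob pattern and layout are assumptions, not from the thread) makes the equivalence clear:

```python
import glob

def levels_via_scan(pattern="dump.*", player="McDarren"):
    """Grep-like scan: read every file line by line looking for the
    player's record, roughly what the xargs/egrep pipeline does."""
    hits = {}
    for path in glob.glob(pattern):
        with open(path) as fh:
            for line in fh:
                if line.startswith(player + "\t"):
                    hits[path] = line.rstrip("\n")
                    break
    return hits
```

Both this and the $/ version are I/O-bound over the same ~58k files, so neither has much room to beat the other.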

Thanks,
Darren :)

Re^3: Optimising a search of several thousand files
by ysth (Canon) on Jan 30, 2007 at 05:48 UTC
    Regarding the xargs/egrep solution - that also works (with a bit of minor tweaking), but isn't any faster than any of the Perl solutions.
    Except in coding time :)