Which produced:

#!/usr/bin/perl -wl
use strict;

my $dir = qw(/home/idlerpg/graphdump);
opendir(DIR, $dir) or die "Cannot open $dir:$!";
my @files      = reverse sort readdir(DIR);
my $currlevel  = 68;
my $numfiles;
my $totalfiles = scalar @files;

FILE:
for my $file (@files) {
    open IN, '<', "$dir/$file" or die "Ack!:$!";
    $numfiles++;

    # slurp the whole dump, then put the record separator back
    undef $/;
    my $data = <IN>;
    my $pos  = index($data, 'McDarren');
    $/ = "\n";
    next FILE if $pos == -1;

    # jump the filehandle back to the match and read from there to end of line
    seek(IN, $pos, 0);
    chomp(my $line = <IN>);
    my ($user, $level) = (split /\t/, $line)[0, 3];
    next FILE if $level <= $currlevel;

    print "$file $user $level";
    print "Processed $numfiles files (total files:$totalfiles)";
    exit;
}
A slight improvement, but not a great deal. But I had to perldoc for index and seek as I've not used either function before, so my implementation may be a bit wonky ;)

$ time ./gfather.pl
dump.1167332700 McDarren 71
Processed 9054 files (total files:57868)
       39.15 real         0.84 user          0.75 sys
However, it gave me another approach to take, which was really the whole point of my post in the first place.
(Note that the number of files processed is slightly more, as the dumps continue to accumulate every 5 mins)
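Regarding the possibly-wonky seek above: since each dump is already slurped into $data, the matching line could instead be carved straight out of that string with rindex/index, with no second read from the filehandle. This is just a rough, untested sketch of a drop-in for the index/seek/readline part of the FILE loop, assuming the same tab-separated layout (user in field 0, level in field 3); it also makes sure the split sees the line from its real start, even when the match isn't at the beginning of a line:

    my $pos = index($data, 'McDarren');
    next FILE if $pos == -1;

    # back up to the start of the line containing the match
    my $start = rindex($data, "\n", $pos) + 1;

    # find the end of that line (or end-of-string if it's the last line)
    my $end = index($data, "\n", $pos);
    $end = length($data) if $end == -1;

    my $line = substr($data, $start, $end - $start);
    my ($user, $level) = (split /\t/, $line)[0, 3];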
In reply to Re^2: Optimising a search of several thousand files by McDarren
in thread Optimising a search of several thousand files by McDarren