in reply to Re: Optimising a search of several thousand files
in thread Optimising a search of several thousand files
Which produced:

```perl
#!/usr/bin/perl -wl
use strict;

my $dir = '/home/idlerpg/graphdump';
opendir(DIR, $dir) or die "Cannot open $dir:$!";
my @files = reverse sort readdir(DIR);    # most recent dump files first
my $currlevel  = 68;
my $numfiles;
my $totalfiles = scalar @files;

FILE: for my $file (@files) {
    open IN, '<', "$dir/$file" or die "Ack!:$!";
    $numfiles++;

    undef $/;                             # slurp the whole file
    my $data = <IN>;
    my $pos  = index( $data, 'McDarren' );
    $/ = "\n";                            # restore line-at-a-time reads
    next FILE if $pos == -1;

    seek( IN, $pos, 0 );                  # jump the handle back to the match
    chomp( my $line = <IN> );
    my ( $user, $level ) = ( split /\t/, $line )[ 0, 3 ];
    next FILE if $level <= $currlevel;

    print "$file $user $level";
    print "Processed $numfiles files (total files:$totalfiles)";
    exit;                                 # stop at the first qualifying match
}
```
A slight improvement, but not a great deal. I did have to perldoc index and seek though, as I've not used either function before, so my implementation may be a bit wonky ;) (there's a stripped-down sketch of the idiom further down).

```
$ time ./gfather.pl
dump.1167332700 McDarren 71
Processed 9054 files (total files:57868)
       39.15 real         0.84 user         0.75 sys
```
However, it gave me another approach to take, which was really the whole point of my post in the first place.
(Note that the number of files processed is slightly more, as the dumps continue to accumulate every 5 mins)
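For clarity, here is a stripped-down sketch of the slurp/index/seek idiom used above. The file name and search term are just placeholders, and the second half shows an alternative that skips the seek entirely, since the file contents are already in memory:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the slurp + index + seek idiom.
# File name and search term are placeholders, not the real dump files.
my $file   = 'graphdump.sample';
my $target = 'McDarren';

open my $in, '<', $file or die "Cannot open $file: $!";

my $data = do { local $/; <$in> };    # slurp; $/ is restored on scope exit

my $pos = index $data, $target;       # byte offset of the match, or -1
exit if $pos == -1;

# Approach 1: reposition the already-open handle and read from the match.
seek $in, $pos, 0;
chomp( my $tail = <$in> );            # text from the match to end of line
print "seek:  $tail\n";

# Approach 2: no second read - carve the full line out of the slurped string.
my $start = rindex( $data, "\n", $pos ) + 1;   # -1 + 1 == 0 on the first line
my $end   = index( $data, "\n", $pos );
$end = length $data if $end == -1;             # no trailing newline
my $line = substr $data, $start, $end - $start;
print "slurp: $line\n";
```

Note that reading a line after the seek gives everything from the match to the end of that line, so the tab-split only lines up when the target sits at the start of its record (as it evidently does in these dumps); the string-based variant recovers the full line regardless of where the match falls.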
Replies are listed 'Best First'.

- Re^3: Optimising a search of several thousand files by GrandFather (Saint) on Jan 29, 2007 at 08:50 UTC
- Re^3: Optimising a search of several thousand files by McDarren (Abbot) on Jan 29, 2007 at 09:33 UTC
- Re^3: Optimising a search of several thousand files by glasswalk3r (Friar) on Jan 29, 2007 at 13:05 UTC