in reply to Optimising a search of several thousand files
Try running this:
```perl
#!/usr/bin/perl -wl
use strict;

my $dir = '/home/idlerpg/graphdump';
opendir( DIR, $dir ) or die "Cannot open $dir: $!";
my @files = reverse sort readdir(DIR);
closedir(DIR);

my $currlevel  = 68;
my $numfiles   = 0;
my $numlines   = 0;
my $totalfiles = scalar @files;

for my $file (@files) {
    next unless -f "$dir/$file";    # skip '.', '..' and subdirectories
    open IN, '<', "$dir/$file" or die "Ack!: $!";
    $numfiles++;
    while (<IN>) {
        $numlines++;
        chomp;
        my ( $user, $level ) = ( split /\t/ )[ 0, 3 ];
        next if $user ne 'McDarren';
        last if $level <= $currlevel;
        print "$file $user $level";
    }
    close(IN);
}
print "Processed $numlines lines in $numfiles files (total files: $totalfiles)";
```
Of course, I didn't test it, so I'm not sure it will work as you expect.
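The script assumes each graphdump line is tab-separated, with the user name in field 0 and the level in field 3. A minimal sketch of that parse, using an invented sample line (the real file layout may differ):

```perl
#!/usr/bin/perl -wl
use strict;

# Hypothetical input line in the assumed tab-separated layout:
# field 0 = user, field 3 = level; middle fields are placeholders.
my $line = join "\t", 'McDarren', 'x', 'y', '70';

my ( $user, $level ) = ( split /\t/, $line )[ 0, 3 ];
print "$user $level";
```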
Re^2: Optimising a search of several thousand files
by McDarren (Abbot) on Jan 30, 2007 at 05:08 UTC
by glasswalk3r (Friar) on Jan 31, 2007 at 12:06 UTC