I would like to find a way to do this quicker, possibly by using grep and slurping the file into memory, but since the file is over 200 MB, I'm not too optimistic. Thanks for your help.

Linux, Perl, Apache, Stronghold, Unix
jhorner@knoxlug.org
http://www.knoxlug.org

```perl
#!/usr/bin/perl -w
#
# error-report.pl
#
# usage:
#   error-report.pl <dir> <error>
# where:
#   <dir>   is the directory of the application
#   <error> is the number of the error code
#
# requires a filtered copy of a log file to exist.
#
# v0.1, jh8@ornl.gov, 5/12/2000
#
use strict;

# my $file = shift;
my $dir   = shift;
my $error = shift;

# Pad the code with spaces so e.g. "404" matches only the status field.
$error = " " . $error . " ";

open(LOG, "/usr/local/apache/logs/access_log")
    || die "Can't open logfile: $!";

my %report;

while (<LOG>) {
    # Field 7 of a common-log-format line is the request URL.
    my $url = (split())[6];
    if ($url =~ /^\/$dir\// && /$error/) {
        $report{$url}++;    # autovivifies to 1 on first sight
    }
}
close LOG;

# Report URLs in descending order of error count.
foreach my $url (sort { $report{$b} <=> $report{$a} } keys %report) {
    print "$url: $report{$url}\n";
}
```
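Since the post asks about grep specifically, one sketch of the idea: pre-filter the 200 MB log with grep so the per-line Perl work only runs on candidate lines, or skip Perl entirely and count in the pipeline with awk. This is a hedged illustration, not the poster's script: the sample log, the `/app/` directory, and the `/tmp` path are made up for the demo; field 7 is the URL and field 9 the status code in common log format.

```shell
# Build a tiny sample access log so the pipeline is self-contained.
# (In real use you would point grep at /usr/local/apache/logs/access_log.)
cat > /tmp/sample_access_log <<'EOF'
1.2.3.4 - - [12/May/2000:10:00:00 -0400] "GET /app/a.html HTTP/1.0" 404 210
1.2.3.4 - - [12/May/2000:10:00:01 -0400] "GET /app/a.html HTTP/1.0" 404 210
1.2.3.4 - - [12/May/2000:10:00:02 -0400] "GET /app/b.html HTTP/1.0" 200 512
EOF

# Pre-filter on the padded status code, then count URLs under /app/
# and sort by count, descending -- same report the Perl script prints.
grep ' 404 ' /tmp/sample_access_log \
  | awk '$7 ~ /^\/app\// { count[$7]++ }
         END { for (u in count) print u ": " count[u] }' \
  | sort -t: -k2 -rn
```

The same pre-filter also composes with the original script unchanged, e.g. `grep ' 404 ' access_log | error-report.pl <dir> <error>`, reading the filtered stream instead of the whole file.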
In reply to Re: grepping for
by jjhorner
in thread grepping for
by Anonymous Monk