in reply to Re: Re: Re: Re: Obtaining Apache logfile stats?
in thread Obtaining Apache logfile stats?
would it be faster to use an array?
If you mean "faster to use an array instead of a hash for collecting the data", then no, it would not be faster. I would, however, split each line into an array during processing.
The idea is to key the hash by the filenames. Every time you come across, for instance, "/some/dir/file.html", you increment a count and add to a running sum. The code might look something like this (untested):

    while (<LOG>) {
        # In Common Log Format, field 6 is the request path
        # and field 9 is the number of bytes sent.
        my @part = (split ' ', $_)[6, 9];
        $hash{ $part[0] }->[0]++;             # increase the count
        $hash{ $part[0] }->[1] += $part[1];   # increase the sum
    }

Note that the values of the hash are arrayrefs, in order to store both the count and the sum associated with each filename. After you've munged your logs into raw data, you'll traverse the hash you created and compute the stats you want. Something like (again, untested):

    for my $key (sort keys %hash) {
        my $avg = $hash{$key}->[1] / $hash{$key}->[0];   # sum / count
        print "$key\t$avg\n";
    }
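Putting the two pieces together, here is a minimal, self-contained sketch of the same technique. It reads from an in-memory list of hypothetical Common Log Format lines (invented hosts, paths, and sizes) instead of a real LOG filehandle, purely so it can be run as-is:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sample lines in Apache Common Log Format.
my @lines = (
    '127.0.0.1 - - [26/Mar/2004:16:54:00 -0500] "GET /index.html HTTP/1.0" 200 1000',
    '127.0.0.1 - - [26/Mar/2004:16:55:00 -0500] "GET /index.html HTTP/1.0" 200 3000',
    '127.0.0.1 - - [26/Mar/2004:16:56:00 -0500] "GET /about.html HTTP/1.0" 200 500',
);

my %hash;
for (@lines) {
    # Field 6 is the request path, field 9 is the bytes sent.
    my @part = (split ' ', $_)[6, 9];
    $hash{ $part[0] }->[0]++;             # count of hits on this file
    $hash{ $part[0] }->[1] += $part[1];   # running sum of bytes
}

for my $key (sort keys %hash) {
    my $avg = $hash{$key}->[1] / $hash{$key}->[0];   # sum / count
    print "$key\t$avg\n";
}
```

For real logs, replace the @lines loop with `while (<LOG>)` as in the snippet above; the hash-building and reporting code stays the same.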
-sauoq "My two cents aren't worth a dime.";
Replies are listed 'Best First'.
Re: Re: Re: Re: Re: Re: Obtaining Apache logfile stats?
by mvam (Acolyte) on Mar 26, 2004 at 16:54 UTC
by mvam (Acolyte) on Apr 01, 2004 at 23:33 UTC