in reply to Re: Re: Re: Obtaining Apache logfile stats?
in thread Obtaining Apache logfile stats?

split is a good idea... would it be faster to use an array?

Re: Re: Re: Re: Re: Obtaining Apache logfile stats?
by sauoq (Abbot) on Mar 26, 2004 at 00:41 UTC
    would it be faster to use an array?

    If you mean "faster to use an array instead of a hash for collecting the data", then no, it would not be faster. I would split each line into an array during processing though.

    The idea is to key the hash by the filenames. So, every time you come across, for instance, "/some/dir/file.html", you increase a count and a sum. The code might look something like this (untested):

    while (<LOG>) {
        my @part = (split ' ', $_)[5,8];
        $hash{$part[0]}->[0]++;            # increase the count.
        $hash{$part[0]}->[1] += $part[1];  # increase the sum.
    }
    Note that the values of the hash are arrayrefs in order to store both the count and the sum associated with each filename. After you've munged your logs into raw data, you'll traverse the hash you created and compute the stats you want. Something like (again, untested):
    for my $key (sort keys %hash) {
        my $avg = $hash{$key}->[1] / $hash{$key}->[0];  # sum/count.
        print "$key\t$avg\n";
    }

    -sauoq
    "My two cents aren't worth a dime.";
    
      ah ok, i think i get it. thanks sauoq!
        i was finally able to create a working script... i made a loop that reads from STDIN, parses the file from the top, and splits each line into fields. worked pretty well
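
    Pulling the pieces above together, a complete version of that loop might look like the sketch below. It assumes Common Log Format, where splitting a line on whitespace puts the request path at index 6 and the response size at index 9 (the field indices in the earlier snippet were for a different layout); adjust the indices for your own log format:

        use strict;
        use warnings;

        # Tally hits and total bytes per file from access-log lines.
        sub tally {
            my (@lines) = @_;
            my %stat;
            for my $line (@lines) {
                # Common Log Format assumption: path at index 6, size at index 9.
                my ($file, $bytes) = (split ' ', $line)[6, 9];
                next unless defined $bytes && $bytes =~ /^\d+$/;  # skip "-" sizes
                $stat{$file}[0]++;          # hit count for this file
                $stat{$file}[1] += $bytes;  # byte sum for this file
            }
            return \%stat;
        }

        # Read the log from STDIN and print per-file averages.
        my $stat = tally(<STDIN>);
        for my $file (sort keys %$stat) {
            my ($count, $sum) = @{ $stat->{$file} };
            printf "%s\t%d hits\t%.1f avg bytes\n", $file, $count, $sum / $count;
        }

    Keeping the tallying in a subroutine also makes it easy to test against a few sample lines before pointing it at a real log.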