in reply to Working with large amount of data
Convert the log into a sortable format, resolving domain names if necessary:

```perl
use Socket qw( inet_aton );

while (<>) {
    my $host = extract_host($_);          # extract_host(): pull the host field out of the log line
    my $addr = inet_aton($host) or next;  # resolve to a packed 4-byte address; skip unresolvable hosts
    print unpack('H8', $addr), $_;        # prefix the line with 8 fixed-width hex digits
}
```
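Because the prefix is fixed-width hex, a plain lexical sort orders the lines numerically by IP. For illustration (a hypothetical line, assuming example.com resolves to 93.184.216.34), the transformation looks like this:

```
example.com GET /index.html             # input
5db8d822example.com GET /index.html     # output (0x5d.0xb8.0xd8.0x22 prefix)
```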
Sort the modified log with the Unix sort(1) utility (a complete pipeline is sketched at the end).
Count the duplicates in the sorted file:

```perl
my ($last, $count);

while (<>) {
    my $ip = extract_ip($_);                   # extract_ip(): recover the hex prefix added earlier
    if (defined $last && $last ne $ip) {
        print "$last: $count\n";               # flush the completed group
        $count = 0;
    }
    $last = $ip;
    ++$count;
}
print "$last: $count\n" if defined $last;      # flush the final group
```
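Given the fixed-width encoding from the first pass, extract_ip needs nothing fancier than taking the first eight characters; a minimal sketch, assuming the prefix layout above:

```perl
# Assumes the 8-hex-digit prefix produced by the first pass
sub extract_ip { substr $_[0], 0, 8 }
```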
Each Perl pass runs in O(1) memory, and Unix sort(1) falls back to efficient on-disk merge sorting when the data doesn't fit in RAM.
Bonus: the sorted file groups the log entries for each IP address together, so you get them at essentially no extra cost. If you don't need them, leave the original line out of the file to be sorted and keep only the hex-encoded IP.
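Putting it together, a minimal sketch of the full pipeline, assuming the two snippets above are saved as prefix_ips.pl and count_dups.pl and the input is access.log (all names hypothetical):

```sh
perl prefix_ips.pl access.log | sort | perl count_dups.pl
```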
Re^2: Working with large amount of data
by BrowserUk (Patriarch) on Sep 22, 2009 at 17:44 UTC