Convert the log into a sortable format, resolving domain names if necessary.
    use Socket qw( inet_aton );

    while (<>) {
        my $host = extract_host($_);        # site-specific: pull the host or IP out of the log line
        my $packed = inet_aton($host);      # resolves names via DNS; returns undef on failure
        next if !defined($packed);          # skip lines whose host can't be resolved
        print(unpack('H8', $packed), $_);   # prefix the line with its IP as 8 hex digits
    }
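For concreteness, here is one possible extract_host. It's a sketch that assumes an Apache-style common log where the remote host is the first whitespace-delimited field; adjust the pattern to your actual log format.

    # Hypothetical helper: assumes the host is the first field of the line,
    # as in Apache common log format. Adapt to your log's layout.
    sub extract_host {
        my ($line) = @_;
        my ($host) = $line =~ /^(\S+)/;
        return $host;
    }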
Sort the modified log using the unix command-line utility sort.
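For example (file names are placeholders):

    LC_ALL=C sort prefixed.log > sorted.log

Because the 8-hex-digit key sits at the start of each line and is fixed-width, a plain byte-wise lexical sort groups identical IPs together; no -k or -n options are needed. Setting LC_ALL=C just guards against locale-dependent collation.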
Count the duplicates in the sorted file:
    my $last;
    my $count = 0;

    while (<>) {
        my $ip = extract_ip($_);    # e.g. substr($_, 0, 8) for the hex key added in step 1
        if (defined($last) && $last ne $ip) {
            print("$last: $count\n");   # finished a run of identical IPs
            $count = 0;
        }
        $last = $ip;
        ++$count;
    }
    print("$last: $count\n") if defined($last);   # flush the final run
The Perl bits use O(1) memory, and the unix sort utility will efficiently use the disk when the data doesn't fit in memory.
Bonus: you get the log entries that correspond to each IP address at essentially no cost. If you don't need them, you can leave that information out of the file to be sorted.
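Putting it together, assuming the two Perl snippets above are saved as prefix_ips.pl and count_dups.pl (the script names are mine), the whole thing becomes one pipeline:

    perl prefix_ips.pl access.log | LC_ALL=C sort | perl count_dups.pl

sort still does the heavy lifting here, spilling to temporary files on disk as needed, while each Perl stage streams a line at a time.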