in reply to Working with large amount of data

Three steps:
  1. Convert the log into a sortable format, resolving domain names if necessary (the extract_host and extract_ip helpers used below are sketched after this list).

    use Socket qw( inet_aton );

    while (<>) {
        my $host = extract_host($_);                  # hostname or IP taken from the log line
        print unpack('H8', inet_aton($host)), $_;     # prefix a fixed-width hex key, keep the original line
    }
  2. Sort the modified log with the Unix sort(1) command-line utility.

  3. Count the duplicates in the sorted file.

    my ($last, $count);

    while (<>) {
        my $ip = extract_ip($_);                 # the hex key prepended in step 1
        if (defined($last) && $last ne $ip) {
            print "$last: $count\n";             # key changed: report the finished run
            $count = 0;
        }
        $last = $ip;
        ++$count;
    }
    print "$last: $count\n" if defined($last);   # report the final run
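
The extract_host and extract_ip routines above are placeholders whose exact form depends on your log format. A minimal sketch, assuming an Apache-style log whose first whitespace-separated field is the remote host, and the 8-hex-digit key written in step 1 (names and regexes are illustrative, not part of the original post):

    # Hypothetical helpers for the snippets above.
    sub extract_host {
        my ($line) = @_;
        my ($host) = $line =~ /^(\S+)/;    # first field: remote host name or IP
        return $host;
    }

    sub extract_ip {
        my ($line) = @_;
        return substr($line, 0, 8);        # the 8 hex digits prepended in step 1
    }

With helpers along those lines the three steps chain together as something like perl step1.pl access.log | sort | perl step3.pl (script names illustrative), with sort doing the heavy lifting.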

The Perl bits use O(1) memory, and the Unix sort utility will spill to disk efficiently when the data doesn't fit in memory.

Bonus: you get the log entries that correspond to each IP address at essentially no cost. If you don't need them, you can leave that information out of the file to be sorted.

Re^2: Working with large amount of data
by BrowserUk (Patriarch) on Sep 22, 2009 at 17:44 UTC

    Switch $_ for tell ARGV in the print statement and you get not only what but where, saving some disk space and speeding up the sort. (Question: how long does it take to sort a billion lines?)
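
    A minimal sketch of that variant (one reading of the suggestion, not BrowserUk's exact code), reusing the hypothetical extract_host() from step 1 and assuming a single input file:

        use Socket qw( inet_aton );

        # Record a byte offset instead of the whole log line, so the file handed
        # to sort holds only "hexkey offset" pairs.  tell(ARGV) after a read gives
        # the offset just past the current line, so the start-of-line offset is
        # captured before each read.
        my $offset = 0;                   # start of the line about to be read
        while (<>) {
            my $host = extract_host($_);
            print unpack('H8', inet_aton($host)), " $offset\n";
            $offset = tell(ARGV);         # start of the next line
        }

    To recover a log entry later, open the log and seek to the recorded offset.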

