in reply to Re: Working with large amount of data
in thread Working with large amount of data

I meant I need to count quantity of every unique IP in a log. But thank you, I think I should look in the direction of tying hash to DB files.

Replies are listed 'Best First'.
Re^3: Working with large amount of data
by salva (Canon) on Sep 20, 2009 at 21:47 UTC
    Actually, using an on-disk database will be quite inefficient!

    Every entry will require, at minimum, 4 bytes to store the IP address plus 4 bytes for the counter; at 8 bytes each, 1 billion entries comes to 8GB. Statistically, that means that with completely random access and 1GB of memory cache you get >85% misses... and we are talking about an ideal scenario; in practice it will probably be one or two orders of magnitude worse!!!
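    A quick back-of-the-envelope check of those figures (this just restates the paragraph's arithmetic, taking GB as 10**9 bytes):

    ```perl
    use strict;
    use warnings;

    my $entries    = 1_000_000_000;            # distinct IPs (worst case)
    my $entry_size = 4 + 4;                    # packed IP + 32-bit counter
    my $db_bytes   = $entries * $entry_size;   # total on-disk size
    my $cache      = 1_000_000_000;            # 1GB of memory cache
    my $miss_rate  = 1 - $cache / $db_bytes;   # fraction of probes missing cache

    printf "DB size: %d GB, random-access miss rate: %.1f%%\n",
           $db_bytes / 1e9, 100 * $miss_rate;
    # prints: DB size: 8 GB, random-access miss rate: 87.5%
    ```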

    Another (IMO better) approach:

    1. Divide the IPs into ranges that can fit in the available memory. For instance, 512MB holds 128M 32-bit counters, so divide the 2**32 IP address space into 32 ranges (32 * 128M = 2**32).
    2. Read the log file and, as you find each IP, append it to the one of the 32 files associated with its range.
    3. Process the files one at a time, using a packed array (check Tie::Array::Packed, or use vec) to count the IP occurrences.
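    The three steps could be sketched roughly like this (my sketch, not salva's code; the sample log lines are invented, and the 32 range files are replaced by in-memory scalars to keep the example self-contained):

    ```perl
    use strict;
    use warnings;

    my $RANGE_BITS = 5;                              # 2**5 = 32 ranges
    my $OFF_MASK   = (1 << (32 - $RANGE_BITS)) - 1;  # offset within a range
    my @bucket     = ('') x (1 << $RANGE_BITS);      # stand-ins for the 32 files

    # Hypothetical log lines; a real run would stream the log from disk.
    my @log = (
        '0.0.0.1 - - "GET /index.html"',
        '0.0.0.2 - - "GET /a.html"',
        '0.0.0.1 - - "GET /b.html"',
        '8.0.0.1 - - "GET /c.html"',
    );

    # Steps 1+2: pack each IP into 4 bytes and append it to its range bucket.
    for my $line (@log) {
        next unless $line =~ /(\d+)\.(\d+)\.(\d+)\.(\d+)/;
        my $ip = ($1 << 24) | ($2 << 16) | ($3 << 8) | $4;
        $bucket[$ip >> (32 - $RANGE_BITS)] .= pack 'N', $ip;
    }

    # Step 3: count each range with a packed counter array built with vec().
    my %count;    # readable "ip => count" summary, for the demo only
    for my $r (0 .. $#bucket) {
        next unless length $bucket[$r];
        my $counters = '';                 # vec() autoextends it as needed
        my @ips = unpack 'N*', $bucket[$r];
        vec($counters, $_ & $OFF_MASK, 32)++ for @ips;
        $count{ join '.', unpack 'C4', pack 'N', $_ } =
            vec($counters, $_ & $OFF_MASK, 32) for @ips;
    }
    printf "%s => %d\n", $_, $count{$_} for sort keys %count;
    # prints:
    # 0.0.0.1 => 2
    # 0.0.0.2 => 1
    # 8.0.0.1 => 1
    ```

    Only one range's counters are in memory at a time, so the peak footprint stays at roughly 512MB regardless of log size; the tradeoff is a second sequential pass over the bucketed data instead of random access to an 8GB structure.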