in reply to Working with large amount of data

Use a hash tied to a database and keep a running count when new keys are added.
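
For instance (a minimal sketch using DB_File; the log file name, the DB file name, and the leading-IP log format are assumptions):

    use strict;
    use warnings;
    use DB_File;
    use Fcntl;

    # Tie the hash to an on-disk BTREE so the counts need not fit in RAM.
    # 'ip_counts.db' and 'access.log' are hypothetical file names.
    tie my %count, 'DB_File', 'ip_counts.db', O_RDWR | O_CREAT, 0644, $DB_BTREE
        or die "Cannot tie ip_counts.db: $!";

    my $unique = 0;
    open my $log, '<', 'access.log' or die "access.log: $!";
    while (my $line = <$log>) {
        # assumes each log line starts with the client IP
        my ($ip) = $line =~ /^(\d+\.\d+\.\d+\.\d+)/ or next;
        $unique++ unless exists $count{$ip};   # running count of new keys
        $count{$ip}++;
    }
    close $log;
    untie %count;

    print "$unique unique IPs\n";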

Or perhaps convert IPs to integers and use a bit vector to keep track of unique IPs.
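
A minimal sketch of the bit vector idea with vec (note it only records which IPs were seen, not how often; the 512MB string covers the whole IPv4 space, and the file name and log format are again assumptions):

    use strict;
    use warnings;
    use Socket qw(inet_aton);

    # One bit per possible IPv4 address: 2**32 bits = 512MB.
    my $seen   = "\0" x 2**29;
    my $unique = 0;

    open my $log, '<', 'access.log' or die "access.log: $!";
    while (my $line = <$log>) {
        my ($ip) = $line =~ /^(\d+\.\d+\.\d+\.\d+)/ or next;
        my $packed = inet_aton($ip) or next;   # skip malformed addresses
        my $n = unpack 'N', $packed;           # IP as a 32-bit integer
        unless (vec($seen, $n, 1)) {           # first time we see this IP
            vec($seen, $n, 1) = 1;
            $unique++;
        }
    }
    close $log;

    print "$unique unique IPs\n";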

Re^2: Working with large amount of data
by just1fix (Novice) on Sep 20, 2009 at 19:40 UTC
    I meant I need to count the occurrences of every unique IP in the log. But thank you, I think I should look in the direction of tying a hash to DB files.
      Actually, using an on-disk database will be quite inefficient!

      Every entry requires, at minimum, 4 bytes for the IP address plus 4 bytes for the counter, so 8 bytes per entry for 1 billion entries comes to 8GB. With a 1GB memory cache over that 8GB working set, only 1/8 of the data can be cached, so completely random accesses miss more than 85% of the time... and that is the ideal scenario; in practice it will probably be one or two orders of magnitude worse!!!

      Another (IMO better) approach:

      1. Divide the IP space into ranges that fit in the available memory. For instance, 512MB holds 128M 32-bit counters, so divide the 2**32 IPv4 address space into 32 ranges (32 * 128M = 2**32).
      2. Read the log file and append each IP, as you find it, to the one of the 32 files associated with its range.
      3. Process the files one at a time, using a packed array (check Tie::Array::Packed or use vec) to count the IP occurrences (see the sketch below).
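
      Something like this (a sketch only: the file names and the leading-IP log format are assumptions, and it uses vec rather than Tie::Array::Packed):

          use strict;
          use warnings;
          use Socket qw(inet_aton);

          my $RANGES     = 32;
          my $RANGE_SIZE = 2**32 / $RANGES;    # 128M addresses per range

          # Steps 1 and 2: append each IP (as an integer) to the bucket
          # file for its range.
          my @bucket;
          for my $i (0 .. $RANGES - 1) {
              open $bucket[$i], '>', "bucket.$i" or die "bucket.$i: $!";
          }
          open my $log, '<', 'access.log' or die "access.log: $!";
          while (my $line = <$log>) {
              my ($ip) = $line =~ /^(\d+\.\d+\.\d+\.\d+)/ or next;
              my $packed = inet_aton($ip) or next;
              my $n = unpack 'N', $packed;
              print { $bucket[int($n / $RANGE_SIZE)] } "$n\n";
          }
          close $_ for $log, @bucket;

          # Step 3: count one range at a time in a 512MB string of packed
          # 32-bit counters, then print "ip count" for every IP seen.
          for my $i (0 .. $RANGES - 1) {
              my $counts = "\0" x (4 * $RANGE_SIZE);
              open my $in, '<', "bucket.$i" or die "bucket.$i: $!";
              while (my $n = <$in>) {
                  chomp $n;
                  my $off = $n % $RANGE_SIZE;
                  vec($counts, $off, 32) = vec($counts, $off, 32) + 1;
              }
              close $in;
              for my $off (0 .. $RANGE_SIZE - 1) {
                  my $c = vec($counts, $off, 32) or next;
                  my $ip = join '.', unpack 'C4', pack 'N', $i * $RANGE_SIZE + $off;
                  print "$ip $c\n";
              }
          }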