Actually, using an on-disk database will be quite inefficient!
Every entry requires, at minimum, 4 bytes to store the IP address plus 4 bytes for the counter, so 8 bytes per entry; for 1 billion entries that's 8GB. With 1GB of memory cache, only 1/8 of the data fits in RAM, so for completely random accesses you get >85% misses... and that's an ideal scenario; in practice the per-entry overhead of a real database will probably make it one or two orders of magnitude worse!
Another (IMO better) approach:
- Divide the IPs into ranges that fit in the available memory. For instance, 512MB holds 128M counters, so divide the 2**32 IP address space into 32 ranges (32 * 128M = 2**32).
- Read the log file and append each IP, as you find it, to one of 32 files, each associated with one range.
- Process the files one at a time, using a packed array (check Tie::Array::Packed or use vec) to count the IP occurrences (a sketch follows below).
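A minimal sketch of the two passes using vec for the packed counter array. The log file name (access.log), the bucket file names, and the assumption that each log line contains one extractable dotted-quad IP are all hypothetical, and the final "most frequent IP" report is just one plausible goal for the counting step:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Pass 1: partition IPs into 32 bucket files by the top 5 bits
    # of the address (2**32 / 32 = 128M addresses per range).
    my @fh;
    for my $i (0 .. 31) {
        open $fh[$i], '>', "bucket.$i" or die "bucket.$i: $!";
    }
    open my $log, '<', 'access.log' or die "access.log: $!";  # hypothetical name
    while (<$log>) {
        next unless /(\d+)\.(\d+)\.(\d+)\.(\d+)/;              # assumes an IP per line
        my $ip = ($1 << 24) | ($2 << 16) | ($3 << 8) | $4;
        print { $fh[$ip >> 27] } "$ip\n";                      # top 5 bits pick the range
    }
    close $_ for $log, @fh;

    # Pass 2: count each bucket with a packed array of 32-bit counters.
    # vec() treats a string as a flat vector, so 128M counters * 4 bytes = 512MB.
    my ($max_count, $max_ip) = (0, 0);
    for my $i (0 .. 31) {
        my $counts = '';
        vec($counts, 128 * 1024 * 1024 - 1, 32) = 0;           # pre-extend to 512MB
        open my $in, '<', "bucket.$i" or die "bucket.$i: $!";
        while (my $ip = <$in>) {
            chomp $ip;
            my $slot = $ip & 0x07FF_FFFF;                      # offset within the range
            my $n = ++vec($counts, $slot, 32);                 # vec() is an lvalue
            ($max_count, $max_ip) = ($n, $ip) if $n > $max_count;
        }
        close $in;
    }
    printf "most frequent: %vd (%d hits)\n", pack('N', $max_ip), $max_count;

Bucketing by the top 5 bits keeps each counter array at exactly 512MB, so at most one range is resident in memory at a time; any other 32-way split of the address space would work just as well.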