In this specific example my use case simplifies to this:
- File 'A' contains a set of unique IDs.
- File 'B' is a list of possibly repeating IDs.
The output needs to be a count of how many times each ID in file 'A' occurs in file 'B'.
To do this I just create a hash whose keys are all the IDs from 'A', with each value being a count of how many times that ID has been seen. File 'B' is then streamed through, updating the counters in the hash.
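A minimal sketch of that approach in Python, assuming one ID per line in each file; the filenames and layout are placeholders, not my actual data:

    counts = {}

    with open("a_ids.txt") as a:
        for line in a:
            counts[line.strip()] = 0          # every ID from 'A', counter starts at 0

    with open("b_ids.txt") as b:
        for line in b:
            key = line.strip()
            if key in counts:                 # ignore IDs that never appear in 'A'
                counts[key] += 1

    # counts now maps each ID from 'A' to its number of occurrences in 'B',
    # and iterating it preserves the order of 'A' (Python 3.7+ dicts keep insertion order).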
Now this works for me because my number of items happens to produce a hash of a reasonable size for computers these days, with peak memory usage falling somewhere between 4GB and 16GB.
I would be very interested in ideas people have for how to achieve this on a local PC without using much RAM, while also not just using disk in place of RAM. One obvious way is to load a subset of the keys from 'A', stream file 'B' to update counts for just those keys, and save the results; then load the next batch of keys, re-stream the entire file 'B', and repeat until done. That is probably what I would do if I ran out of RAM, but while I have the RAM, I want to use it.
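A rough sketch of that multi-pass idea, again assuming one ID per line; the chunk size is a hypothetical knob you would tune to however much RAM you want to spend:

    from itertools import islice

    CHUNK = 1_000_000   # number of keys held in memory per pass (tune to available RAM)

    def count_chunk(keys, b_path):
        """Stream file 'B' once and count occurrences of just this subset of keys."""
        counts = dict.fromkeys(keys, 0)
        with open(b_path) as b:
            for line in b:
                key = line.strip()
                if key in counts:
                    counts[key] += 1
        return counts

    with open("a_ids.txt") as a, open("results.txt", "w") as out:
        while True:
            keys = [line.strip() for line in islice(a, CHUNK)]
            if not keys:
                break                          # no keys left in 'A'
            for key, n in count_chunk(keys, "b_ids.txt").items():
                out.write(f"{key}\t{n}\n")     # results come out in file 'A' order

Each pass re-reads file 'B' in full, so total I/O grows with the number of chunks; that is exactly the trade-off against holding every key in RAM at once.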
If both files were sorted, I could stream them together, since I would know which IDs I had already passed. But in my case the sequence of IDs in file 'A' is important, so if I sorted 'A' I would have to reconstruct its order afterwards, and I don't see how the sort itself could be done without using just as much RAM.
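For completeness, a sketch of the merge-style scan I mean, assuming both files were already sorted with one ID per line; it holds only one ID from each file in memory at a time, but the results come out in sorted order rather than 'A's original order:

    def merge_count(a_path, b_path):
        """Walk two sorted files together, counting how often each 'A' ID appears in 'B'."""
        with open(a_path) as a, open(b_path) as b:
            b_iter = (line.strip() for line in b)
            b_id = next(b_iter, None)
            for line in a:
                a_id = line.strip()
                # skip 'B' entries that sort before the current 'A' key
                while b_id is not None and b_id < a_id:
                    b_id = next(b_iter, None)
                # count 'B' entries equal to the current 'A' key
                n = 0
                while b_id is not None and b_id == a_id:
                    n += 1
                    b_id = next(b_iter, None)
                yield a_id, n

    # usage: for id_, n in merge_count("a_sorted.txt", "b_sorted.txt"): ...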