What's happening to you is a phenomenon known as thrashing: virtual-memory page faults and the corresponding page I/O are eating you alive. It always happens abruptly ("hitting the wall"), and performance goes straight from a linear slow-down to an exponential one. The only things you can do about it are to buy more RAM or to redesign your algorithm.
Hash tables are purposely designed to scatter their keys widely through memory, and that works just fine as long as "memory is free." But once thrashing begins, that same property makes things worse: almost every hash lookup can touch a different page and trigger a page fault.
One good alternative strategy is to pull the URLs out of the file and write them to an external file, which you then sort with an on-disk sorting command. Then read the sorted file: every occurrence of the same name is now consecutive, so you simply note when the value changes. This strategy puts an explicit disk file in place of the virtual-memory backing store, and it produces predictable performance regardless of data volume. It's a technique as old as punched cards, and it still works.
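A rough sketch of that approach in Perl follows, assuming one record per line with the URL as the first whitespace-delimited field; the file names (input.log, urls.txt, urls.sorted) are placeholders, and the system's sort(1) utility does the on-disk sorting:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Placeholder file names for illustration.
    my $in   = 'input.log';
    my $keys = 'urls.txt';

    # Pass 1: pull just the URL field out of each record into a flat file.
    open my $ifh, '<', $in   or die "open $in: $!";
    open my $ofh, '>', $keys or die "open $keys: $!";
    while (<$ifh>) {
        my ($url) = split ' ', $_;   # assume URL is the first field
        print {$ofh} "$url\n" if defined $url && length $url;
    }
    close $ifh;
    close $ofh;

    # Pass 2: let the external sort utility do the heavy lifting on disk.
    system('sort', '-o', 'urls.sorted', $keys) == 0
        or die "sort failed: $?";

    # Pass 3: duplicates are now adjacent, so counting needs only
    # one line of state instead of a huge in-memory hash.
    open my $sfh, '<', 'urls.sorted' or die "open urls.sorted: $!";
    my ($prev, $count) = (undef, 0);
    while (my $url = <$sfh>) {
        chomp $url;
        if (defined $prev && $url ne $prev) {
            print "$count\t$prev\n";
            $count = 0;
        }
        $prev = $url;
        $count++;
    }
    print "$count\t$prev\n" if defined $prev;
    close $sfh;

The memory footprint stays constant no matter how many URLs there are, because at any moment the script holds only the current and previous key; the sort utility spills to temporary files on its own when the data won't fit in RAM.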