And, finally, the big one: the database will be significantly faster and better optimised for swapping intermediate values to disk and back than the kernel is. Leaving this to the kernel is an option, and it works, as you've found out, but what the kernel swaps to disk may not be what you actually want swapped to disk. A database has an entire memory-management subsystem for exactly this purpose: it reads what it needs from disk (whether tables or indexes), discards what it's done with, and saves intermediate information back to temporary space if necessary. Most likely, though, it won't even need to do that; it will read the data from disk, possibly multiple times, and simply discard anything it isn't using at the current moment. That sounds slow, but the reality is that your original code is doing the same thing, only more so: with your code, the kernel decides what gets written to disk, whereas a database will likely guess correctly far more often. Beyond that, if you have the right indexes in place at load time, the data the database needs to read may not be the table data at all, only the indexes, which means loading less data per pass, and only the relevant data. That lets it keep more in memory at a time, and possibly avoid reading any piece of data twice. Maybe. But, given the right query, the database can figure this out for you.
If the output data starts to get too big, the database can even persist it to disk in a specialised structure (that is, a database table) that is amenable to continued modification, with all of the same caching and discarding described above, until it is done producing the output, and then it can return that data more or less straight from disk.
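To make that concrete, here's a minimal sketch of the sort of query I mean, using DBI against an on-disk SQLite file. The file, table, and column names (rankings.db, items, name) are made up for illustration, since I don't know your actual data layout; the point is that the index lets the count be answered from the index alone, and SQLite spills any oversized intermediate results to its own temporary storage rather than to kernel swap. (Window functions like RANK() need SQLite 3.25 or later, which current DBD::SQLite bundles.)

    use strict;
    use warnings;
    use DBI;

    # Hypothetical file, table, and column names -- adjust to your data.
    my $dbh = DBI->connect('dbi:SQLite:dbname=rankings.db', '', '',
        { RaiseError => 1, AutoCommit => 1 });

    # With an index on the grouping column, the count can be satisfied
    # from the index alone, without re-reading full table rows each pass.
    $dbh->do('CREATE INDEX IF NOT EXISTS idx_items_name ON items (name)');

    # Count per name, then rank by that count.  Oversized intermediate
    # results go to SQLite's temporary storage, not to kernel swap.
    my $sth = $dbh->prepare(q{
        SELECT name,
               COUNT(*)                              AS cnt,
               RANK() OVER (ORDER BY COUNT(*) DESC)  AS rnk
        FROM   items
        GROUP  BY name
        ORDER  BY rnk
    });
    $sth->execute;
    while (my ($name, $cnt, $rnk) = $sth->fetchrow_array) {
        print "$rnk\t$name\t$cnt\n";
    }
    $dbh->disconnect;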
Your basic problem is that you've long since blown past your system's memory, and you're hitting kernel swap. Other than that, the rest of your supposition is probably entirely correct. Dealing with this problem is non-trivial, and it is one of the things that actual databases are good at.
A couple of jobs ago, when I worked for one of the big-three commercial databases, I had access to systems with 256GB of RAM. If I'd had to do what you're doing back then, on those machines, then yes, Perl in-memory likely would have sufficed, and would have been faster than using the commercial db I had access to (as you rightly point out, my solution comes with some overhead). But we all have to work within the constraints we're given, and if your system has a "paltry" 16GB of RAM, that's your constraint, and you have to find an algorithm that takes it into account. Such algorithms exist, and they've generally already been implemented by the database systems, so there's no need to reinvent that wheel, just re-use it.
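For what it's worth, the "load it into a database first" step is only a few lines of Perl. This is just a sketch under assumed names (a tab-separated input.txt feeding an items table); the important part is that memory use stays flat because each row lands in the on-disk table instead of in a Perl hash that outgrows RAM.

    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema and input file -- adjust the names to your data.
    my $dbh = DBI->connect('dbi:SQLite:dbname=rankings.db', '', '',
        { RaiseError => 1, AutoCommit => 0 });

    $dbh->do('CREATE TABLE IF NOT EXISTS items (name TEXT, value INTEGER)');

    # One big transaction: rows go straight to the on-disk table, so the
    # process footprint stays flat no matter how large the input file is.
    my $ins = $dbh->prepare('INSERT INTO items (name, value) VALUES (?, ?)');

    open my $fh, '<', 'input.txt' or die "Can't open input.txt: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($name, $value) = split /\t/, $line;    # assumed tab-separated
        $ins->execute($name, $value);
    }
    close $fh;

    $dbh->commit;
    $dbh->disconnect;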
Also, when your management likes the output you just produced, they're going to ask for more and more analytics. You just know it's going to happen. Throw-away code is very rarely thrown away. And then re-querying the database for more stuff is going to be entirely trivial.
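And that follow-up request really is just one more query against the same table, no reload and no new spill-to-disk code. Again with my made-up names, something like:

    use strict;
    use warnings;
    use DBI;

    # Reconnect to the same hypothetical on-disk file and ask a new question.
    my $dbh = DBI->connect('dbi:SQLite:dbname=rankings.db', '', '',
        { RaiseError => 1 });
    my ($top_name, $top_count) = $dbh->selectrow_array(
        'SELECT name, COUNT(*) AS cnt FROM items GROUP BY name ORDER BY cnt DESC LIMIT 1'
    );
    print "Most frequent: $top_name ($top_count rows)\n";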
In reply to Re^3: efficient perl code to count, rank by Tanktalus
in thread efficient perl code to count, rank by Perl_Noob2021