In reply to “PostgreSQL cursors with Perl.”

At first blush, I think I'd say “go back to your plaintext files.” Then, see if you can find a more efficient way to manage them.

First of all, we know that all of this data is static. It's never going to change, since the quotes in question have already happened. And since we've already committed 15 gigabytes of disk space to “storing it,” a job the filesystem is already happily doing, our real objective is ... finding it.

We know that “putting it all in memory” is not an option, simply because 15 gigabytes of “memory” would in fact be a disk file: the paging file. And that is definitely not a disk-file structure that's in any way conducive to what we want to do here. So... that's out.

Okay, so let's explore the alternatives and see which ones might hold some promise. We're not happy with approach #1, whether it's built on MySQL or on something else, so what else might we do?

One idea that sounds very appealing to me is to keep the individual files just as they are, and to build some kind of index structure alongside them. For instance, a useful index might answer the question: “which files contain quotes for ‘British Pounds Sterling,’ for ‘August 1997?’”
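
To make that concrete, here's a minimal Perl sketch of a one-time index build. The quotes/*.txt layout, the CURRENCY,YYYY-MM-DD,price record format, and the quotes.idx file name are all invented for illustration; the real parsing would follow whatever the actual files look like:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Storable qw(nstore);

    # Walk every quote file once, remembering which files mention
    # each (currency, month) pair. Format is an assumption:
    #   CURRENCY,YYYY-MM-DD,price
    my %index;    # "GBP|1997-08" => [ files containing such quotes ]

    for my $file ( glob 'quotes/*.txt' ) {
        open my $fh, '<', $file or die "can't open $file: $!";
        while ( my $line = <$fh> ) {
            next unless $line =~ /^(\w+),(\d{4}-\d{2})-\d{2},/;
            my $key = "$1|$2";
            push @{ $index{$key} }, $file
                unless $index{$key} && $index{$key}[-1] eq $file;
        }
        close $fh;
    }

    nstore \%index, 'quotes.idx';    # small, and built only once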

If the files are of manageable size, it might be perfectly reasonable to open a handful of them and scan through them to get our answer... once we've efficiently located those files.
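
Once that index exists, the lookup side is nothing more than “ask the index, then scan only the files it names.” Continuing with the same invented key and record format:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Storable qw(retrieve);

    # "Which files contain quotes for British Pounds Sterling,
    # for August 1997?" -- ask the index, then scan just those.
    my $index = retrieve 'quotes.idx';

    for my $file ( @{ $index->{'GBP|1997-08'} || [] } ) {
        open my $fh, '<', $file or die "can't open $file: $!";
        while (<$fh>) {
            print if /^GBP,1997-08/;    # only the matching records
        }
        close $fh;
    }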

It might also be quite reasonable to take the data files, re-sort them (using a disk-based sort), and exploit this “sorted” property in a good ol' binary search. We could even turn the flat files into Berkeley B-tree files.
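
For the Berkeley B-tree route, Perl's stock DB_File module will tie a hash directly to a B-tree file on disk. The “CURRENCY|YYYY-MM-DD” key layout and the price value below are assumptions for illustration, not anything from the original post:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Fcntl;      # O_CREAT, O_RDWR
    use DB_File;    # ships with Perl; wraps Berkeley DB

    # Tie a hash to a B-tree file on disk. Assumed key layout:
    # "CURRENCY|YYYY-MM-DD" sorts chronologically per currency.
    tie my %quotes, 'DB_File', 'quotes.db',
        O_CREAT | O_RDWR, 0644, $DB_BTREE
        or die "can't tie quotes.db: $!";

    $quotes{'GBP|1997-08-01'} = '1.6321';    # load step (made-up value)

    # Retrieval is now a direct B-tree lookup -- no scanning at all.
    print $quotes{'GBP|1997-08-01'}, "\n";

    untie %quotes;

And since a B-tree keeps its keys in sorted order, the same file also supports range scans (all of August 1997, say) through DB_File's cursor (seq/R_CURSOR) interface.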

A large collection of known-static files calls for different measures ... more like those used by a physical library than by an SQL database. A database might still prove to be a useful part of the picture, but it might be best applied in a different way.