in reply to Re^2: Efficient way to handle huge number of records?
in thread Efficient way to handle huge number of records?
So if you need a DB engine that is fast, reliable, and can deal with lots of data, you will want SQLite.
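If DBD::SQLite is installed, a minimal sketch might look like the following. The database name, table layout, and sample record are made up for illustration; adjust them to your data:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect (creates seqs.db if it doesn't exist). Turning AutoCommit off
# and committing once is what keeps bulk inserts fast in SQLite.
my $dbh = DBI->connect('dbi:SQLite:dbname=seqs.db', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

$dbh->do('CREATE TABLE IF NOT EXISTS seqs
          (id INTEGER PRIMARY KEY, header TEXT, seq TEXT)');
$dbh->do('CREATE INDEX IF NOT EXISTS idx_header ON seqs(header)');

my $ins = $dbh->prepare('INSERT INTO seqs (header, seq) VALUES (?, ?)');
$ins->execute('seq1', 'ACGTACGT');   # sample record; loop over your file here
$dbh->commit;

# Indexed lookup instead of scanning the whole data set.
my ($seq) = $dbh->selectrow_array(
    'SELECT seq FROM seqs WHERE header = ?', undef, 'seq1');
print "seq1 => $seq\n" if defined $seq;
$dbh->disconnect;
```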
Now, as far as the initial question goes, you can do something similar to what MySQL does. You could split the file into chunks and index the chunks by line number, so that you know on which line the header of your sequence appears. Once you have done that, you only need to hash those indexes. This will reduce the search time by a factor roughly proportional to the number of fragments you get after splitting your initial file.
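A rough sketch of that index idea in Perl, assuming a FASTA-style file where headers start with ">" (the file name and the query header are placeholders). Only the small header-to-position index is kept in memory, and the lookup seeks straight to the record instead of scanning the whole file:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $file = 'seqs.fa';   # hypothetical input file

# Pass 1: record where each header lives. Byte offsets from tell() are
# stored alongside line numbers so we can seek() directly to a record.
my %index;   # header => [ line number, byte offset ]
open my $fh, '<', $file or die "Can't open $file: $!";
my $line_no = 0;
while (1) {
    my $offset = tell $fh;
    my $line   = <$fh>;
    last unless defined $line;
    $line_no++;
    $index{$1} = [ $line_no, $offset ] if $line =~ /^>(\S+)/;
}

# Lookup: hash only the index, then jump to the right spot in the file.
my $want = 'some_header';   # hypothetical query
if (my $entry = $index{$want}) {
    my ($ln, $off) = @$entry;
    seek $fh, $off, 0;
    my $record = <$fh>;     # the header line itself
    print "Found $want at line $ln: $record";
}
close $fh;
```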