You've probably already thought through this and know the answer, but just in case...
Is there no alternative design to the one that creates a large file which keeps getting larger, passing the several-GB mark and beyond? It might be more efficient, from a searching standpoint, to divide the dataset into records and store them in a relational database, which gives you indexed lookups instead of full-file scans.
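Just as a rough sketch (the table layout and column names here are made up, since I don't know your record structure), something like SQLite through DBI would let each search hit an index rather than the whole file:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical schema -- adjust the columns to match your actual records.
    my $dbh = DBI->connect( "dbi:SQLite:dbname=dataset.db", "", "",
        { RaiseError => 1, AutoCommit => 1 } );

    $dbh->do(q{
        CREATE TABLE IF NOT EXISTS records (
            id   INTEGER PRIMARY KEY,
            key  TEXT,
            data TEXT
        )
    });
    $dbh->do(q{CREATE INDEX IF NOT EXISTS idx_key ON records(key)});

    # Insert records as they're generated, instead of appending to a flat file.
    my $ins = $dbh->prepare(q{INSERT INTO records (key, data) VALUES (?, ?)});
    $ins->execute( 'some_key', 'some payload' );

    # Later, a search becomes an indexed lookup, not a multi-GB scan.
    my $sel = $dbh->prepare(q{SELECT id, data FROM records WHERE key = ?});
    $sel->execute('some_key');
    while ( my ( $id, $data ) = $sel->fetchrow_array ) {
        print "$id: $data\n";
    }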
If that's not a possibility, how about at least maintaining fixed-size records or entries in the data file, so that you can seek to specific records within the file quickly, without re-reading it constantly? You could even maintain a separate index file of where "matches" are known to exist.
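Again only a sketch, assuming an invented 128-byte record (32-byte key plus 96-byte payload) and an index file of matching record numbers, one per line:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical fixed-size layout: every record is exactly 128 bytes.
    my $REC_LEN = 128;

    open my $fh, '<', 'data.bin' or die "Can't open data.bin: $!";
    binmode $fh;

    # Jump straight to record number $n without reading anything before it.
    sub read_record {
        my ($n) = @_;
        seek $fh, $n * $REC_LEN, 0 or die "seek failed: $!";
        read( $fh, my $buf, $REC_LEN ) == $REC_LEN or die "short read";
        return unpack 'A32 A96', $buf;    # (key, payload)
    }

    # The index file holds just the record numbers of known matches,
    # so a later pass only revisits those offsets.
    open my $idx, '<', 'matches.idx' or die "Can't open matches.idx: $!";
    while ( my $n = <$idx> ) {
        chomp $n;
        my ( $key, $payload ) = read_record($n);
        print "$key => $payload\n";
    }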
Of course this is all just speculation, but it seems that if you're re-scanning this file at various intervals, and the file is growing to multi-GB sizes, eventually you'll need to either split it up or cache the search results to keep things scalable.
Dave