Dear Monks,
I have a 5GB file that has identifier lines, each followed by a very long data line (both are single lines). In a loop I get coordinates that tell me which identifier I need and what part of the corresponding data I need to extract and modify. The problem is that this loop runs for >1000 iterations, and rereading the file each time is a dumb idea. I was thinking about putting it into a hash, but I'm not sure about memory limitations. Any idea on how to tackle this? Speed is really an important factor. Maybe do a system call with qx and run a Linux grep command? I have to get away from the computer for a couple of hours, so thanks in advance!
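To make the hash idea concrete, here is a rough sketch of the variant I'm considering: on one pass, hash the byte offset and length of each data line (not the data itself), then seek straight to the slice I need on each iteration. File name, identifier format, and coordinates below are made up:

#!/usr/bin/perl
use strict;
use warnings;

my $file = 'big_data.txt';    # placeholder name

# Pass 1: record where each identifier's data line starts and how long it is.
my %index;    # identifier => [ byte offset of data line, length of data line ]
open my $fh, '<', $file or die "Can't open $file: $!";
while ( my $id_line = <$fh> ) {
    chomp( my $id = $id_line );
    my $offset    = tell($fh);        # data line starts right after the id line
    my $data_line = <$fh>;
    last unless defined $data_line;
    chomp $data_line;
    $index{$id} = [ $offset, length $data_line ];
}

# For each (identifier, start, length) coordinate, seek to just that slice,
# so the file is never reread in full and the hash stays small.
sub get_slice {
    my ( $id, $start, $len ) = @_;
    my ( $offset, $line_len ) = @{ $index{$id} or die "Unknown id: $id" };
    die "Slice runs past end of data line" if $start + $len > $line_len;
    seek $fh, $offset + $start, 0 or die "seek failed: $!";
    read $fh, my $buf, $len;
    return $buf;
}

print get_slice( '>seq42', 1000, 50 ), "\n";    # hypothetical id and coordinates

That way the hash only holds two small numbers per identifier rather than the 5GB of data, and each lookup is one seek plus one short read.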