Use a database, which does the I/O intelligently for you.
If I understand your question correctly, you are scanning a file that is continually updated/growing, like a log file. One approach to reduce CPU and I/O load is to open the file once and then scan only the lines that were appended since the last read. Maybe File::Tail suits your intention?
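Something along these lines might work (untested; the file name and poll interval are made up for illustration):

    use strict;
    use warnings;
    use File::Tail;

    # Open the file once; File::Tail remembers the position and hands us
    # only the lines appended since the previous read.
    my $tail = File::Tail->new(
        name        => '/var/log/app.log',   # hypothetical path
        maxinterval => 5,                     # poll at most every 5 seconds
    );

    while ( defined( my $line = $tail->read ) ) {
        chomp $line;
        # process only the newly appended line here
        print "new: $line\n";
    }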
Actually, the file I'm scanning is static (one that I created with Perl code), but it contains 14K lines!
14k lines at a normal line size of maybe 100 chars isn't that bad. Consider reading the file once into something like $hash{userid}, which Keszler already said while I, as usual, forgot to check for new replies before posting. A minimal sketch follows below.
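As promised, a minimal sketch, assuming a made-up file name and a hypothetical "userid:value" line format:

    use strict;
    use warnings;

    my %hash;
    open my $fh, '<', 'users.dat' or die "users.dat: $!";   # hypothetical file
    while ( my $line = <$fh> ) {
        chomp $line;
        my ( $userid, $value ) = split /:/, $line, 2;       # assumed format
        $hash{$userid} = $value;
    }
    close $fh;

    # Every later lookup is a hash access instead of a 14k-line rescan:
    print $hash{someuser} // 'not found', "\n";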
Then, if you find that the hash build time is significant relative to total runtime while your 14k file stays fairly static, consider a lightweight database like SQLite accessed via DBI.
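An untested sketch of that route; the table and column names are invented for illustration:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'dbi:SQLite:dbname=users.db', '', '',
        { RaiseError => 1, AutoCommit => 0 } );

    $dbh->do('CREATE TABLE IF NOT EXISTS users (userid TEXT PRIMARY KEY, value TEXT)');

    # One-time load; afterwards the file never needs reparsing.
    my $sth = $dbh->prepare('INSERT OR REPLACE INTO users (userid, value) VALUES (?, ?)');
    open my $fh, '<', 'users.dat' or die "users.dat: $!";   # same hypothetical file
    while ( my $line = <$fh> ) {
        chomp $line;
        my ( $userid, $value ) = split /:/, $line, 2;
        $sth->execute( $userid, $value );
    }
    close $fh;
    $dbh->commit;

    # Later lookups hit the primary-key index instead of the flat file:
    my ($value) = $dbh->selectrow_array(
        'SELECT value FROM users WHERE userid = ?', undef, 'someuser' );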
If you have many independent scripts or parallel script runs, keeping the hash in memory in a separate process might also be worthwhile (old-style client/server or some shared-memory setup, but that's somewhat like a coding challenge looking for a problem).
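For completeness, an equally hedged sketch of the client/server variant: a tiny TCP server that loads the hash once and answers one lookup per connection. The port and file format are again made up:

    use strict;
    use warnings;
    use IO::Socket::INET;

    # Load the hash once, at server startup.
    my %hash;
    open my $fh, '<', 'users.dat' or die "users.dat: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        my ( $userid, $value ) = split /:/, $line, 2;
        $hash{$userid} = $value;
    }
    close $fh;

    my $server = IO::Socket::INET->new(
        LocalPort => 7000,    # hypothetical port
        Listen    => 5,
        ReuseAddr => 1,
    ) or die "listen: $!";

    # Each client sends one userid per line and gets the value back.
    while ( my $client = $server->accept ) {
        chomp( my $userid = <$client> );
        my $answer = $hash{$userid} // 'not found';
        print $client "$answer\n";
        close $client;
    }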
cu & HTH, Peter -- hints may be untested unless stated otherwise; use with caution & understanding.