in reply to external file reference

If I understand your question correctly, you are scanning a file that is continually updated/growing - like a log file. One approach to reduce CPU and I/O load is to open the file once and then scan only the lines that have been appended since the last read. Maybe File::Tail suits your intention?
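A minimal sketch of that approach (the file path and the 10-second poll interval are assumptions for illustration):

    use strict;
    use warnings;
    use File::Tail;

    # Open the file once; File::Tail remembers its position and
    # hands back only lines appended since the previous read.
    my $tail = File::Tail->new(
        name        => '/var/log/app.log',   # hypothetical path
        maxinterval => 10,                    # poll at most every 10 seconds
    );

    while ( defined( my $line = $tail->read ) ) {
        print "appended: $line";              # $line still ends in "\n"
    }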

Re^2: external file reference
by Spooky (Beadle) on Oct 28, 2009 at 17:06 UTC
    ..actually, the file I'm scanning is static (one that I created with Perl code) but contains 14K lines!

      14k lines at a normal line length of maybe 100 chars isn't that bad. Consider reading the file into something like $hash{userid}.
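      A minimal sketch of that, assuming whitespace-separated lines whose first field is the userid (the file name and layout are guesses):

          use strict;
          use warnings;

          my %hash;
          open my $fh, '<', 'users.dat' or die "users.dat: $!";
          while (<$fh>) {
              chomp;
              my ( $userid, $rest ) = split ' ', $_, 2;
              $hash{$userid} = $rest;    # keyed lookups are O(1) afterwards
          }
          close $fh;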

      which Keszler already said, while I as usual forgot to check for new replies before posting

      Then, if you find the hash build time to be significant relative to the total runtime while your 14k file stays fairly static: consider a lightweight database like SQLite accessed via DBI.
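      A sketch of that route, assuming DBD::SQLite is installed and with a table layout invented for illustration:

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect( 'dbi:SQLite:dbname=users.db', '', '',
              { RaiseError => 1 } );

          # One-time build; rerun only when the flat file changes.
          $dbh->do('CREATE TABLE IF NOT EXISTS users (userid TEXT PRIMARY KEY, data TEXT)');

          # Each later run then skips the 14k-line scan entirely:
          my ($data) = $dbh->selectrow_array(
              'SELECT data FROM users WHERE userid = ?', undef, 'ewh1234' );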

      If you've many independent scripts or parallel script runs, keeping the hash in memory in a separate process might also be worthwhile (old-style client/server or some shared-memory setup - but that's somewhat like a coding challenge looking for a problem).

      cu & HTH, Peter -- hints may be untested unless stated otherwise; use with caution & understanding.
        OK, ..let's say I do something like this: $hash{$userid}, where $userid is a character string. How would I populate this with data elements that are both numeric and alpha, so that when I key in on, say, "ewh1234", I could pull all or a specific element within the array(?) - e.g., "ewh1234" may contain 'engineer, 23.4, 1.2, local' (4 data items within "ewh1234")? ..thanks
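        One way, sketched under the assumption that each line looks like "ewh1234 engineer 23.4 1.2 local": store an array reference per userid.

            use strict;
            use warnings;

            my %hash;
            open my $fh, '<', 'users.dat' or die "users.dat: $!";   # hypothetical file name
            while (<$fh>) {
                chomp;
                my ( $userid, @fields ) = split;   # first field is the key
                $hash{$userid} = \@fields;         # the rest become an array ref
            }
            close $fh;

            my @all  = @{ $hash{ewh1234} };   # ('engineer', 23.4, 1.2, 'local')
            my $rate = $hash{ewh1234}[1];     # just the second element: 23.4

        Mixed numeric and alpha data is no problem: a Perl scalar holds either.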
      So, taking a worst-case scenario of a 15,000 line file averaging 200 bytes/line, that's 3MB. Store it all in a hash structure and there'll be some overhead, but even if the program took 4MB RAM total that's not very significant on today's boxes.

      OTOH, if you're running this on a PII-266 w/ 64MB RAM it'd be a bit much.