PerlMonks
Re: Improving the efficiency of code when processed against large amount of data

by zer (Deacon)
on Nov 09, 2006 at 05:52 UTC ( [id://583046] )


in reply to Improving the efficiency of code when processed against large amount of data

Another thing that I found sped my code up for larger files was to avoid slurping and go line by line:

    foreach (<FH>) { print $_; }

This should work.

Replies are listed 'Best First'.
Re^2: Improving the efficiency of code when processed against large amount of data
by chromatic (Archbishop) on Nov 09, 2006 at 08:16 UTC

    Only while loops avoid slurping though! for loops slurp.
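
    The difference comes down to context: in while (<$fh>) the readline happens in scalar context and returns one line per iteration, whereas for (<$fh>) imposes list context, so the whole file is read into a list before the loop body ever runs. A minimal sketch of the contrast, assuming a placeholder file big.log:

        use strict;
        use warnings;

        my $file = 'big.log';    # placeholder filename

        # List context: <$fh> reads every line into a temporary list
        # before the first iteration, so the whole file sits in memory.
        open my $fh, '<', $file or die "Cannot open $file: $!";
        for my $line (<$fh>) {
            print $line;
        }
        close $fh;

        # Scalar context: <$fh> returns a single line per iteration,
        # so memory use stays flat regardless of file size.
        open $fh, '<', $file or die "Cannot open $file: $!";
        while ( my $line = <$fh> ) {
            print $line;
        }
        close $fh;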

Re^2: Improving the efficiency of code when processed against large amount of data
by aufflick (Deacon) on Nov 09, 2006 at 06:00 UTC
    Slurping all the files before processing means they are all in RAM at once. If you are light on RAM, that means you're swapping the data in, then out, then back into memory again.

    Line by line (or at least file by file) is usually the way to go for large datasets (see the sketch below).

      Good explanation, thanks!
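
    A minimal sketch of the file-by-file, line-by-line pattern described above; the glob pattern and the match are placeholders:

        use strict;
        use warnings;

        # Work through each file in turn, one line at a time, so only
        # the current line is held in memory at any point.
        my @files = glob '*.log';    # placeholder file list
        for my $file (@files) {
            open my $fh, '<', $file or die "Cannot open $file: $!";
            while ( my $line = <$fh> ) {
                print $line if $line =~ /ERROR/;    # placeholder processing
            }
            close $fh;
        }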
