PerlMonks  

Re: Processing large files many times over

by maverick (Curate)
on Jun 24, 2002 at 18:43 UTC [id://176886]


in reply to Processing large files many times over

Given this little snippet, it appears that the goal of the first loop is to pull out all the lines that end with 0 and put them into an array, and the next loop then reads that array. So you're iterating over your data twice, but you don't need to. You've also got one complete copy of the file in memory, and another mostly complete copy; it takes time to create and destroy those copies, and you don't need to do that either.
open(IN, $my_file) || die "Can't open: $!";
while (my $line = <IN>) {
    next unless $line =~ /0\s*$/;
    $line =~ /^(\d+\.?\d*)/;
    if ($1 > .5) {
        # etc, etc
    }
}
Here's the breakdown, line by line:
  • Open the file
  • Process each line in turn, so we're not storing the whole thing in memory (it takes time to allocate that memory)
  • The if ($1 == 0) test in your first loop only kept lines that ended in 0 for @zeroat. Just test directly for the line ending in 0 and throw away the rest.
  • No need for the regexp to capture the last character; we already know it's a 0
  • Test only the > .5 part; again, no need for the $2 == 0 check, since we already threw away every line that didn't end in 0.
  • Rest of processing
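Putting those steps together, here's a minimal, runnable sketch of the single-pass approach. It reads sample records from a __DATA__ section instead of opening $my_file, and the record layout (whitespace-separated numbers with a trailing 0-or-1 field) is an assumption for illustration:

```perl
use strict;
use warnings;

my @kept;
while (my $line = <DATA>) {
    # Keep only lines ending in 0 -- this replaces building
    # and then re-reading the @zeroat array.
    next unless $line =~ /0\s*$/;

    # Capture the leading number; no need to re-check the trailing 0.
    next unless $line =~ /^(\d+\.?\d*)/;
    push @kept, $1 if $1 > .5;    # rest of processing goes here
}
print "@kept\n";

__DATA__
0.7 12 0
0.3 99 0
0.9 5 1
1.25 7 0
```

Run against the sample data, only the first and last records survive both tests, so this prints "0.7 1.25". One pass, no intermediate array, no second copy of the file.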
HTH

/\/\averick
OmG! They killed tilly! You *bleep*!!
