in reply to Grep Effeciency

You are looping over @ECL_STAT 2*1200 times! Loop over it once, storing the data in a structure that is efficient to look up by $NETID:

    my %byID;
    for my $line ( @ECL_STAT ) {
        my( $NETID ) = split /\|/, $line;   # NETID is the first |-delimited field
        push @{ $byID{$NETID} }, $line;     # group every line under its NETID
    }

Then you can do your calculations on that.
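For example (a minimal sketch; the sample records are made up to match the file format shown further down, NETID first and the time in the third field), each NETID's records then come back from a single hash lookup instead of a grep over the whole array:

```perl
# Hypothetical sample data in the |-delimited format described below.
my @ECL_STAT = (
    '21352pa|02/27/2001|03:30|25|363|',
    '21352pa|02/27/2001|04:00|30|401|',
    '99887nj|02/27/2001|03:30|12|100|',
);

# One pass over the array: group lines by NETID.
my %byID;
for my $line ( @ECL_STAT ) {
    my( $NETID ) = split /\|/, $line;
    push @{ $byID{$NETID} }, $line;
}

# Afterwards, all records for one NETID are one hash access away:
for my $line ( @{ $byID{'21352pa'} } ) {
    my( undef, $date, $time, @fields ) = split /\|/, $line;
    print "$time: $fields[0]\n";
}
```

The hash of arrays (`%byID`) is built once, so each of the 1200 lookups afterwards is constant-time instead of a scan of the full array.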

Update: Change 4 to 2. Thanks to arturo for the sharp eyes.

        - tye (but my friends call me "Tye")

Replies are listed 'Best First'.
Re: (tye)Re: Grep Effeciency
by ImpalaSS (Monk) on Feb 28, 2001 at 01:23 UTC
    Hey Tye, exactly, that's the problem. The array is searched 1200 times. I like your idea of searching it just once and splitting it up. Here is a sample of the file (CEL_STAT is exactly the same; only the numbers after the netid are different):
    The netid is first: 21352pa|02/27/2001|03:30|25|363|32526.4363|363|3627524|


    I shortened the line; each line contains about 70 fields of data. So, basically, there is the time (03:30 in this case) and the netid. The file is searched for every NETID, and each search finds around 50 lines (48 to be exact), one set of data for every half hour. This data is then split at the | and calculated on. The only way I can think of to make it faster is to delete each batch of 50 or so lines as it is found, making the main array smaller so later searches go a little faster. I like your idea from above, but I really have no idea what it's doing, and it confuses me.
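The split step described above, applied to the sample line, looks like this (a sketch; the meaning of the fields past the netid, date, and time is not specified, so they are just collected into an array):

```perl
my $line = '21352pa|02/27/2001|03:30|25|363|32526.4363|363|3627524|';

# Split at each |; the netid is the first field, the time the third.
my( $netid, $date, $time, @rest ) = split /\|/, $line;

print "$netid at $time\n";    # prints "21352pa at 03:30"
```

Note that `split` drops trailing empty fields by default, so the trailing `|` does not produce an empty last element.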

    Thanks Again :)

    Dipul