in reply to Re: Efficiently parsing a large file
in thread Efficiently parsing a large file

If the file is really huge (several hundred MB), it is important
• to read the file only once
• to keep memory allocation small
I would therefore suggest deleting the hash item once its "complete" line is seen:
delete ($state{$1});
Looping through the hash after the whole file has been read would then show just the non-completed cases.
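
A rough sketch of that approach, assuming each line carries a serial number followed by a state word (the real log format and the filename big.log are made up here, so adjust the regexes to the actual data):

#!/usr/bin/perl
use strict;
use warnings;

my %state;

open my $fh, '<', 'big.log' or die "open big.log: $!";   # hypothetical filename
while (<$fh>) {
    if    (/^(\S+)\s+begin/)      { $state{$1} = 'begin' }
    elsif (/^(\S+)\s+doing-work/) { $state{$1} = 'doing-work' }
    elsif (/^(\S+)\s+complete/)   { delete $state{$1} }   # finished: free the entry
}
close $fh;

# whatever is left never reached "complete"
print "$_ stalled at $state{$_}\n" for sort keys %state;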

pelagic

Re: Re: Re: Efficiently parsing a large file
by jfroebe (Parson) on Apr 08, 2004 at 21:15 UTC

    Hi Pelagic,

    Unless I'm mistaken, I believe he wants to keep the complete items: for each line containing a serial number and begin, he has to find the matching doing-work and complete entries, and report it if those matching entries do not exist. Then again, I might be wrong ;-)
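
    A sketch of that reading, under the same assumptions about the line format (the stage names and the filename big.log are guesses): record every stage seen per serial number, then report the serials that are missing any of the three.

    #!/usr/bin/perl
    use strict;
    use warnings;

    my %seen;

    open my $fh, '<', 'big.log' or die "open big.log: $!";   # hypothetical filename
    while (<$fh>) {
        $seen{$1}{$2} = 1 if /^(\S+)\s+(begin|doing-work|complete)/;
    }
    close $fh;

    for my $serial (sort keys %seen) {
        my @missing = grep { !$seen{$serial}{$_} } qw(begin doing-work complete);
        print "$serial is missing: @missing\n" if @missing;
    }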

    Jason L. Froebe

    No one has seen what you have seen, and until that happens, we're all going to think that you're nuts. - Jack O'Neil, Stargate SG-1

      Neil will be telling us ...

      pelagic