in reply to Re: Parsing large files
in thread Parsing large files

That is sort of what I suspected, but I am wondering whether it may be even more directly related to IO::Handle. I am not sure whether IO::Handle has any size limitations, or whether it just operates on the filehandle without having to worry about space, etc.

I like your suggestion about changing the open() statement, but before I settle on that I want to be absolutely sure that is what is going on. This process takes about 22 hours to complete, so obviously if I am wrong, it will be a costly (time-wise) mistake.

Thanks!!

Replies are listed 'Best First'.
Re^3: Parsing large files
by Joost (Canon) on Apr 10, 2005 at 21:07 UTC
Re^3: Parsing large files
by tilly (Archbishop) on Apr 10, 2005 at 21:16 UTC
    IO::Handle just works on the filehandle.

    If performance is an issue, though, note that there is some overhead in using IO::Handle's OO support (or at least there was at one point; it may have improved since I last checked), so it may be faster to use <> directly.
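    A minimal sketch of the two styles being compared — the temp file and line count are made up just to have something to read; both loops do the same work, but the OO version pays a method-dispatch cost on every line:

    ```perl
    use strict;
    use warnings;
    use IO::Handle;
    use File::Temp qw(tempfile);

    # Hypothetical input file, just for the sketch.
    my ($tmp, $file) = tempfile();
    print {$tmp} "record $_\n" for 1 .. 1000;
    close $tmp;

    # OO style: one IO::Handle method call (and its dispatch overhead) per line.
    open my $oo, '<', $file or die "open: $!";
    my $oo_count = 0;
    $oo_count++ while defined $oo->getline;
    close $oo;

    # Built-in readline via <>: same semantics, no per-line method dispatch.
    open my $fh, '<', $file or die "open: $!";
    my $diamond_count = 0;
    $diamond_count++ while defined <$fh>;
    close $fh;

    print "getline: $oo_count, <>: $diamond_count\n";
    ```

    To actually measure the difference on your own data you would wrap the two loops in the core Benchmark module's cmpthese() rather than trust a one-off run.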

    Additionally, you might want to avoid using a threaded Perl (threaded builds are slower even if you don't use threads). On some platforms it can be faster to call read and then split the lines yourself than to let <> do it; on others the built-in is faster, and I believe that with a current Perl the performance problem behind that should be eliminated everywhere.
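    Here is a rough sketch of the read-and-split-yourself approach — the chunk size, temp file, and line count are arbitrary choices for illustration, not anything from the thread. The buffer carries any partial line across read() boundaries:

    ```perl
    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    # Hypothetical input file, just for the sketch.
    my ($tmp, $file) = tempfile();
    print {$tmp} "row $_\n" for 1 .. 5000;
    close $tmp;

    open my $in, '<', $file or die "open: $!";
    binmode $in;

    my $buf   = '';    # holds a partial line between read() calls
    my $count = 0;
    while (read($in, my $chunk, 65_536)) {    # 64 KB chunks; size is arbitrary
        $buf .= $chunk;
        # Peel off complete lines; the trailing partial line stays in $buf.
        while ((my $nl = index($buf, "\n")) >= 0) {
            my $line = substr($buf, 0, $nl + 1, '');
            $count++;    # ...process $line here...
        }
    }
    $count++ if length $buf;    # final line with no trailing newline
    close $in;

    print "lines: $count\n";
    ```

    Whether this beats a plain while (<$in>) loop depends on the platform and Perl version, so benchmark it against your real 22-hour workload before committing to it.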