in reply to Re: Filehandles and Arrays
in thread Filehandles and Arrays

virtualsue is right about why slurping large files is slow, but the above code will still be slower, and for a good reason.

As a general rule, the more detailed the instructions that Perl gets, the slower it will be. The reason is that Perl is interpreted: it is constantly going back to your instructions, figuring out what to do next, and then doing it. The more you hand Perl instructions that let it "chunk" operations, the more work it can do per trip through the interpreter, and the more efficiently it runs.

Think of yourself as perl and this becomes obvious. In the one case you are told to grab a hunk of data as lines, allocate an array, and shove the data there. In the other case you are told to open a file, read in a line, alias it to $_, append it to an array (do we need to allocate more room for the array now?), go back for the next line, and so on.
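
Here is a minimal sketch of the two sets of instructions side by side, using the core Benchmark module; the file name large_file.txt and the timing harness are illustrative, not part of the code under discussion:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    # Hypothetical input file, for illustration only.
    my $file = 'large_file.txt';

    cmpthese(-3, {
        # One "chunked" instruction: readline in list context
        # hands perl the whole job of filling the array.
        slurp => sub {
            open my $fh, '<', $file or die "Can't open $file: $!";
            my @lines = <$fh>;
            close $fh;
        },
        # Spelled-out instructions: read a line, alias it to $_,
        # push it onto the array, go back and do it again.
        loop => sub {
            open my $fh, '<', $file or die "Can't open $file: $!";
            my @lines;
            while (<$fh>) {
                push @lines, $_;
            }
            close $fh;
        },
    });

cmpthese(-3, ...) runs each sub for at least three CPU seconds and prints a comparison table; on a big enough file the slurp version should come out ahead, for exactly the reason described above.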

Which instructions involve more thinking? For computers, thought is time...

Re: Re (tilly) 2: Filehandles and Arrays
by virtualsue (Vicar) on May 10, 2001 at 02:33 UTC
    Thanks for your explanation. I definitely oversimplified above. In my defense, I did it because it bothered me that what I saw as the biggest opening for performance pain (file slurp) was being ignored. Having seen this sort of thing happen all too often in real life, I am possibly a little oversensitive. I get this image of a guy being rushed into an ER, blood spurting all over from some massive trauma, and telling the docs that he'd like them to look at his hangnail instead. ;)