in reply to Re: Avoid using local $/=undef?
in thread Avoid using local $/=undef?

Thanks for the information moritz. I suppose this doesn't all directly relate to $/. Forgive my ignorance. I recall reading somewhere that using local $/=undef meant telling Perl to read the entire file at once. Perhaps that's true but has nothing to do with my issue, and my hunch was simply way off.
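
Roughly what I had in mind, as an untested sketch (my.log is just a placeholder name):

    # Slurp: with $/ undef, a single readline returns the whole file.
    open my $fh, '<', 'my.log' or die "Can't open my.log: $!";
    my $whole_file;
    {
        local $/ = undef;       # turn off the input record separator
        $whole_file = <$fh>;    # one "line" is now the entire file
    }
    close $fh;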

The second block of code is what I wrote, thinking it better practice not to read and process the entire file at once. Instead I wanted to read the file's records into @records and then process them one by one.
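
Something along these lines, again just an untested sketch with a made-up filename:

    # Read every record into an array first, then work through them.
    open my $fh, '<', 'my.log' or die "Can't open my.log: $!";
    my @records = <$fh>;    # list context: one element per $/-delimited record
    close $fh;

    for my $record (@records) {
        chomp $record;
        # ... process one record at a time ...
    }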

Is there any obvious reason that I wouldn't get the same results from either set of code? I know they differ in how they process the file; what I don't understand is why the output of mine is so far off the mark.

Any help would be greatly appreciated!

Replies are listed 'Best First'.
Re^3: Avoid using local $/=undef?
by chromatic (Archbishop) on Nov 13, 2009 at 01:16 UTC
    I recall reading somewhere that using local $/=undef meant telling perl to read the entire file at once.

    Almost. It controls what readline (or <$fh>) considers a line. Your code uses sysread, so that's irrelevant.
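
    A rough, untested illustration of the difference (data.txt is a made-up filename):

        # $/ only matters to readline; sysread ignores it entirely.
        open my $fh, '<', 'data.txt' or die "Can't open data.txt: $!";
        my $slurped = do { local $/ = undef; <$fh> };   # readline honours $/: whole file in one go
        close $fh;

        open my $sysfh, '<', 'data.txt' or die "Can't open data.txt: $!";
        my $buffer;
        sysread $sysfh, $buffer, 4096;   # reads up to 4096 bytes; $/ is never consulted
        close $sysfh;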

Re^3: Avoid using local $/=undef?
by desemondo (Hermit) on Nov 13, 2009 at 03:19 UTC
    "The second block of code is what I wrote, thinking it better practice to not read/process the entire file at once."

    That is usually correct, although there are occasional (in my experience very rare) situations where reading an entire file into memory allows better processing or data munging, for example to avoid having to seek forwards and backwards through a file when you can simply shove it all into memory.

    (Untested:) I don't think there is any significant performance difference between the two approaches, unless you need to read non-contiguous chunks of data repeatedly, in which case the all-in-memory option will likely be faster.

    But if you're only ever interested in the current "line" or $/-delimited chunk of data, then line-by-line processing will likely never need more than a couple of MB of memory, whereas keeping the entire file in memory requires space proportional to the size of the file and is limited by the amount of RAM you have available.
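
    As a rough, untested sketch of the two approaches (big.log is a made-up filename):

        # Line-by-line: only the current record is in memory at any time.
        open my $fh, '<', 'big.log' or die "Can't open big.log: $!";
        while (my $line = <$fh>) {
            # ... work on $line, then let it go ...
        }
        close $fh;

        # Whole file: memory use grows with the size of big.log.
        my $everything = do {
            open my $slurp_fh, '<', 'big.log' or die "Can't open big.log: $!";
            local $/ = undef;
            <$slurp_fh>;
        };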