in reply to Avoid using local $/=undef?

I don't see how any of this relates to $/. That variable controls what <FILEHANDLE> or readline(FILEHANDLE) considers a line ending, and your code uses neither.

Then you use chomp, which also respects $/; but since $/ is undef, it does nothing.
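
A quick, untested sketch of what $/ actually affects; the file name is just a placeholder:

    #!/usr/bin/perl
    use strict;
    use warnings;

    open my $fh, '<', 'data.txt' or die "open: $!";   # hypothetical file

    # With the default $/ = "\n", readline returns one line at a time
    # and chomp strips the trailing "\n".
    my $line = <$fh>;
    chomp $line;

    # With $/ undef, readline returns everything that's left in one go,
    # and chomp removes nothing.
    {
        local $/ = undef;
        my $rest = <$fh>;
        chomp $rest;          # a no-op while $/ is undef
    }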

Perl 6 - links to (nearly) everything that is Perl 6.

Re^2: Avoid using local $/=undef?
by irDanR (Novice) on Nov 13, 2009 at 00:26 UTC

    Thanks for the information, moritz. I suppose this doesn't all directly relate to $/. Forgive my ignorance. I recall reading somewhere that using local $/=undef meant telling perl to read the entire file at once. Perhaps that's true and also has nothing to do with my issue. My hunch must have been way off.

    The second block of code is what I wrote, thinking it better practice to not read/process the entire file at once. Instead, I wanted to store the file in @records and then process the records one by one.
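
    (Not my actual code, but roughly the shape of what I mean; the file name is just a placeholder.)

        use strict;
        use warnings;

        open my $fh, '<', 'input.dat' or die "open: $!";
        my @records = <$fh>;    # one element per $/-delimited chunk
        close $fh;

        for my $record (@records) {
            chomp $record;
            # ... process one record at a time ...
        }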

    Is there any obvious reason that I wouldn't get the same results from either set of code? I know the two differ in how they process the file; what I don't understand is why the output of mine is so far off the mark.

    Any help would be greatly appreciated!

      I recall reading somewhere that using local $/=undef meant telling perl to read the entire file at once.

      Almost. It controls what readline (or <$fh>) considers a line. Your code uses sysread, so $/ is irrelevant there.
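
      A tiny, untested sketch of the distinction (the file name and the 4096-byte chunk size are just placeholders):

          use strict;
          use warnings;

          open my $fh, '<', 'data.txt' or die "open: $!";

          local $/ = undef;           # affects readline/<$fh>, nothing else
          my $buf;
          sysread($fh, $buf, 4096);   # still reads (up to) 4096 bytes;
                                      # sysread never consults $/

          # by contrast, <$fh> here would return the rest of the file in
          # one go, precisely because $/ is undef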

      "The second block of code is what I wrote, thinking it better practice to not read/process the entire file at once."

      That is usually correct, although there are occasions (in my experience very rare ones) where reading an entire file into memory makes the processing/data-munging easier, e.g. so you can avoid seeking backwards and forwards through the file and simply shove the whole thing into memory instead.

      (Untested:) I don't think there is any really significant performance gain or tradeoff with either solution, unless you need to read non-contiguous chunks of data repeatedly; in that case the all-in-memory option will be better.

      But if you're only ever interested in the current "line" (or $/-delimited chunk) of data, then the line-by-line processing method will likely never need more than a couple of MB of memory, whereas keeping the entire file in memory requires space proportional to the size of the file and is limited by the amount of RAM you have available.
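
      Roughly, the two shapes look like this (untested; the file name is a placeholder). The first keeps only one $/-sized chunk in memory at a time; the second holds the whole file in one scalar, which is what makes cross-line matching or random access easy:

          use strict;
          use warnings;

          # Streaming: memory use stays roughly constant.
          open my $fh, '<', 'big.log' or die "open: $!";
          while (my $line = <$fh>) {
              chomp $line;
              # ... work on $line, then let it go ...
          }
          close $fh;

          # Slurping: the whole file lives in one scalar.
          open $fh, '<', 'big.log' or die "open: $!";
          my $data = do { local $/; <$fh> };   # classic slurp idiom
          close $fh;
          print "matched a multi-line block\n"
              if $data =~ /^BEGIN\n.*?^END\n/ms;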