in reply to getfile( $filename )

cf. File::Slurp

use File::Slurp;
my @lines = read_file( $filename );

Also see the article in that distribution on file slurping and efficiency. It suggests reading the file in one shot and then splitting on newlines instead of reading line-by-line.
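For files that fit in memory, the no-module version of that approach looks something like this (an untested sketch; adjust the error handling to taste):

my $text = do {
    open my $fh, '<', $filename or die "Can't open $filename: $!";
    local $/;                     # undef the input record separator => slurp mode
    <$fh>;                        # one read of the whole file
};
my @lines = split /^/m, $text;    # split at line starts, keeping the newlines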

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.

Replies are listed 'Best First'.
Re^2: getfile( $filename )
by harleypig (Monk) on Jul 29, 2005 at 18:11 UTC

    I do a lot of work for clients who either don't have the ability to install from CPAN or just plain don't trust it (I have no idea why) and refuse to use CPAN modules. I have, on occasion, filed off the serial numbers and inlined a CPAN module to get something done quickly, but I *really* prefer not to do that.

    Also, if the file is too big to fit in memory, then reading it in one shot isn't a good idea.

    Harley J Pig
      Also, if the file is too big to fit in memory, then reading it in one shot isn't a good idea.
      Then when you file off the serial numbers, you can add an optional parameter for chunk size and convert the function to an iterator. Make a module version and put it on CPAN for everyone else (unless there's already one there?).
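      Something along these lines, perhaps (an untested sketch; the name make_reader and the default chunk size are just placeholders):

      sub make_reader {
          my ( $filename, $chunk_size ) = @_;
          $chunk_size ||= 65536;    # arbitrary default
          open my $fh, '<', $filename or die "Can't open $filename: $!";
          return sub {              # each call returns the next chunk, undef at EOF
              my $got = read( $fh, my $chunk, $chunk_size );
              return unless $got;
              return $chunk;
          };
      }

      my $next = make_reader( $filename, 8192 );
      while ( defined( my $chunk = $next->() ) ) {
          # process $chunk
      }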

      -QM
      --
      Quantum Mechanics: The dreams stuff is made of

        Well, File::Slurp's purpose is to read the entire file, so that module is itself the wrong tool for large files. The best I've come up with in that case is something along the lines of
        use IO::File;
        my $FH = IO::File->new( $filename ) or die "Can't open $filename: $!";
        until ( $FH->eof ) {
            $FH->read( my $chunk, $chunk_size );  # $chunk_size: whatever suits your memory budget
            # process $chunk
        }
        which doesn't really lend itself to modularization because the processing is always different.
        Harley J Pig