Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hello Monks!

Why doesn't this script work? I am trying to open, read and process multiple files using a "for" loop. I know there are "dir" commands that I could use, but why doesn't this work?

Not a programmer. Just trying to learn some Perl to do research. Please excuse any transgressions and/or dumb mistakes.


for ($i=2; $i<4; $i += 1) {
    open (DAT$i, "T$iT.txt");
    @A$i = <DAT$i>;
    print @A$i;
}

Replies are listed 'Best First'.
Re: Using a loop to process multiple files
by moritz (Cardinal) on May 08, 2008 at 21:27 UTC
    The first mistake is open (DAT$i, "T$iT.txt");.

    You can't build one variable name out of another like that. And inside the string "T$iT.txt", Perl interpolates a variable named $iT (not $i followed by a literal T), which doesn't exist.
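    A minimal demonstration of the interpolation rule (the braces syntax is standard Perl; the file name is just the one from the question):

```perl
use strict;
use warnings;

my $i = 2;

# "T$iT.txt" would interpolate a variable named $iT, which doesn't exist.
# Braces mark exactly where the variable name ends:
my $braced = "T${i}T.txt";        # interpolates $i, then a literal "T.txt"
my $joined = 'T' . $i . 'T.txt';  # concatenation sidesteps interpolation

print "$braced\n";  # T2T.txt
print "$joined\n";  # T2T.txt
```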

    You can do this instead:

    # always start your scripts with these two lines...
    use strict;
    use warnings;

    # and declare your variables
    foreach my $i (2 .. 3) {
        my $filename = 'T' . $i . 'T.txt';
        open (my $file, '<', $filename) or die "Can't open '$filename': $!";
        my @a = <$file>;
        print @a;
        close $file;
    }
      Thanks.
Re: Using a loop to process multiple files
by pc88mxer (Vicar) on May 08, 2008 at 22:27 UTC
    Hi, I realize you're not a programmer, but I'm curious where you've picked up the use of <...> in array context as in:
    @A = <DAT>;
    It's not wrong, but I've seen it used quite a bit by new perl programmers, and I wonder why they prefer that to using the while (<DAT>) {...} construct. In fact, often I see <...> used in array context immediately followed by a foreach loop iterating over the array, and it makes me cringe.

    Like I said, it's not a wrong practice. It can be more wasteful of memory than a while loop, but on today's hardware that's probably not a big concern. Is it just conceptually easier to think in terms of "read the data into an array and then iterate over the array" as opposed to "read in a line and process it"? In most other programming languages the while loop approach is the only way to process a file a line at a time, so I wonder why I see this practice so much.
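    To make the comparison concrete, here is a sketch of the two styles side by side (the demo file is created in the example itself so it can run anywhere):

```perl
use strict;
use warnings;

# Create a small demo file so the example is self-contained.
open my $out, '>', 'demo.txt' or die "Can't write demo.txt: $!";
print $out "line 1\nline 2\n";
close $out;

# Style 1: while loop -- only the current line is held in memory.
open my $in, '<', 'demo.txt' or die "Can't open demo.txt: $!";
while ( my $line = <$in> ) {
    print $line;
}
close $in;

# Style 2: slurp -- the whole file lands in @lines before any processing.
open $in, '<', 'demo.txt' or die "Can't open demo.txt: $!";
my @lines = <$in>;
close $in;
print @lines;

unlink 'demo.txt';
```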

    Anyway, any insight into why you decided to use @A = <DAT>, where you've seen it or picked it up from would be helpful. Thanks!

      A number of the non-O'Reilly books use it. I did tech-editing for one and tried my darndest to get it out of there, but the author wouldn't budge.

      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      Just to follow up...

      It's also easier for me to think of a (text) file as a single entity. Reading/Processing a file line by line is counterintuitive for me.

        That's an interesting comment, because the operating system doesn't think of a file as a collection of lines either. One of the motivations for making it easy to read files on a line-by-line or record-by-record basis in Perl was to match how people think, but that may rely on what people want to do with Perl more than any gestalt intuition about the nature of files.
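        To illustrate the point: at the system level a file is just a stream of bytes, and Perl's read() exposes that view directly (the file name and chunk size here are arbitrary):

```perl
use strict;
use warnings;

# Write a small demo file so the sketch is self-contained.
open my $out, '>', 'bytes.dat' or die "Can't write bytes.dat: $!";
print $out 'abcdefghij';
close $out;

# The OS hands back raw bytes; there is no notion of "lines" here.
open my $in, '<', 'bytes.dat' or die "Can't open bytes.dat: $!";
my $total = 0;
while ( my $got = read( $in, my $buf, 4 ) ) {
    $total += $got;    # chunks of 4, 4, and 2 bytes
}
close $in;
print "read $total bytes\n";

unlink 'bytes.dat';
```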

      In fact, often I see <...> used in array context immediately followed by a foreach loop iterating over the array, and it makes me cringe.
      Or immediately preceded by map?
      open my $fh, q{<}, q{stop_words.txt}
          or die qq{can't open stop_words: $!\n};
      my %stop = map { chomp; $_ => undef } <$fh>;
      In fact I have a trivial wrapper MyIO script where I can
      my @array = $io->read_array(q{smallish_file});
      and later
      $io->write_array(q{tweaked_file}, \@array);
      There are a couple for strings too (and for returning a hash). If it isn't a 'big' file, say an Apache access log, I never bother with file handles. They are always an intermediate step I could do without.
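      MyIO is the poster's own private wrapper, so the following is only a guess at its shape -- a minimal sketch of what read_array and write_array might look like:

```perl
use strict;
use warnings;

package MyIO;

sub new { return bless {}, shift }

# Slurp a file into a list of lines.
sub read_array {
    my ( $self, $fname ) = @_;
    open my $fh, '<', $fname or die "Can't open '$fname': $!";
    my @lines = <$fh>;
    close $fh;
    return @lines;
}

# Write a list of lines back out.
sub write_array {
    my ( $self, $fname, $aref ) = @_;
    open my $fh, '>', $fname or die "Can't write '$fname': $!";
    print {$fh} @{$aref};
    close $fh;
    return;
}

package main;

my $io = MyIO->new;
$io->write_array( 'tweaked_file.txt', [ "one\n", "two\n" ] );
my @array = $io->read_array('tweaked_file.txt');
print @array;
unlink 'tweaked_file.txt';
```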

      About the only file handle I use regularly is <DATA> because it is handy when posting on PerlMonks. :-)

      I got it from "Perl for Dummies" -- if I remember correctly.

        If you have any desire to look at another book, I can't recommend Learning Perl highly enough. Of course, some people think this book requires a background in programming to really get.

        You can find the full text of Beginning Perl online. This book is aimed at non-programmers.

        Many people recommend Elements of Programming with Perl to beginners as well.

        Happy travels!


        TGI says moo

Re: Using a loop to process multiple files
by graff (Chancellor) on May 09, 2008 at 00:44 UTC
    If you wanted to have the set of files all open at the same time, you could store the file handles in an array or hash:
    my %fh;
    for my $i ( 2, 3 ) {
        my $fname = sprintf( "T%dT.txt", $i );
        open( $fh{$fname}, "<", $fname ) or die "$fname: $!\n";
    }
    # you now have multiple file handles indexed by file name in %fh
    But I gather that you don't really need to do that. If all you need is to read each file in turn, it's better to have just one open at a time. BTW, your comment about the strangeness of reading files line by line leads me to think you might like the following approach better:
    my %filedata;
    for my $i ( 2 .. 3 ) {
        my $fname = sprintf( "T%dT.txt", $i );
        open( I, $fname ) or die "$fname: $!\n";
        local $/;  # temporarily sets input record separator to "undef"
        $filedata{$fname} = <I>;
        close I;
    }
    # you now have the full content of each file as a single string value,
    # indexed by file name in %filedata
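    The same slurp can also be written with a lexical filehandle and the common do-block idiom, which confines local $/ to the single read (a variant sketch, not a change to the approach above; the demo file is created inline so it runs on its own):

```perl
use strict;
use warnings;

# Demo file so the sketch is self-contained.
open my $out, '>', 'T2T.txt' or die "Can't write T2T.txt: $!";
print $out "alpha\nbeta\n";
close $out;

my %filedata;
for my $fname ('T2T.txt') {
    open my $fh, '<', $fname or die "$fname: $!\n";
    # local $/ is restored as soon as the do-block exits, so only
    # this one readline slurps the whole file.
    $filedata{$fname} = do { local $/; <$fh> };
    close $fh;
}
print length( $filedata{'T2T.txt'} ), " bytes slurped\n";

unlink 'T2T.txt';
```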