reedx032 has asked for the wisdom of the Perl Monks concerning the following question:

When using the diamond operator to iterate over all the files in @ARGV, I'd like to be able to send the output from each file to a separate place rather than all together. Is the following code the only way to do this? It seems there should be a way to avoid checking whether the filename in $ARGV has changed on every single line of input. Thanks,

Dana

#!/usr/bin/perl -w
use strict;

my $oldarg;
while (<>) {
    if ( $ARGV ne $oldarg ) {
        my $outputfile = "out" . $ARGV;
        open OUTPUT, ">", $outputfile;
        $oldarg = $ARGV;
    }
    print OUTPUT "$_";
}

Replies are listed 'Best First'.
Re: Changing the output filehandle while iterating input files.
by Perlbotics (Archbishop) on Sep 21, 2008 at 08:21 UTC

    For short scripts, your example is equivalent to

    perl -p -i'out*' file1 file2 file3
    which copies file* to outfile* (see perlrun). Use the -e switch to add further processing.
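    For example (the s/foo/bar/g substitution here is purely hypothetical, just to illustrate adding -e):

    perl -p -i'out*' -e 's/foo/bar/g' file1 file2 file3

    After this run, file1..file3 contain the substituted text and outfile1..outfile3 hold the untouched originals (see Update2 below).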

    In bigger programs, eof (note the missing parentheses) indicates the end of the current file, i.e. a change of $ARGV. You can use that to get rid of handling $oldarg (a sketch follows below the updates).

    Update: Oops, corrected after ysth's comment. Thanks.
    Update2: Clarification: the -i (in-place edit) switch produces a result equivalent to the OP's example only as long as no further processing is added. If you add processing via -e, the new output ends up in the files given on the command line (file1, file2, ...), not in outfile1, outfile2, etc., since those are the backups of the original files. So using 'out*' as the backup filename is generally a bad idea (misleading semantics). I should have made this clearer in the original answer.
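    A minimal sketch of the eof approach mentioned above (using a lexical filehandle; the "out$ARGV" naming just mirrors the OP's code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $out;    # current output handle, undef between files
    while (<>) {
        if ( !defined $out ) {
            # first line of a new input file: open a matching output file
            open $out, '>', "out$ARGV" or die "Cannot open out$ARGV: $!";
        }
        print {$out} $_;
        if (eof) {    # eof without parentheses: end of the current file
            close $out or die "Cannot close out$ARGV: $!";
            undef $out;
        }
    }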

      In your example, "out*" will be the original file, while in the OP's code, "out*" will be the new file. Otherwise they are equivalent.

      Update: And please note that errors are not checked in either case. Imho a big problem with the "-p" switch is that it does not check whether writing and closing succeed.

Re: Changing the output filehandle while iterating input files.
by moritz (Cardinal) on Sep 21, 2008 at 08:27 UTC
    You can always explicitly iterate over @ARGV:
    my @files = @ARGV;
    for my $fn (@files) {
        local @ARGV = ($fn);
        open OUTPUT, '>', "out$fn" or die $!;
        while (<>) {
            print OUTPUT $_;
        }
    }

    That won't work for empty @ARGV, which you'd have to special-case.
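    One way to handle that special case might be to fall back to STDIN explicitly (the 'out.stdin' name is an arbitrary choice for this sketch):

    my @files = @ARGV ? @ARGV : ('-');    # '-' makes <> read from STDIN
    for my $fn (@files) {
        local @ARGV = ($fn);
        my $outname = $fn eq '-' ? 'out.stdin' : "out$fn";
        open my $out, '>', $outname or die "Cannot open $outname: $!";
        print {$out} $_ while <>;
        close $out or die "Cannot close $outname: $!";
    }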

Re: Changing the output filehandle while iterating input files.
by ambrus (Abbot) on Sep 21, 2008 at 08:57 UTC

    See the docs for eof in perlfunc. It gives a recipe showing how to use eof (without parentheses) to detect the end of each input file.
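    The recipe there looks roughly like this (paraphrased from the eof entry in perlfunc, so check the actual docs for the exact wording):

    # reset line numbering ($.) on each input file
    while (<>) {
        print "$ARGV:$.:$_";
    } continue {
        close ARGV if eof;    # Not eof()!
    }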

Re: Changing the output filehandle while iterating input files.
by pjotrik (Friar) on Sep 21, 2008 at 13:55 UTC
Re: Changing the output filehandle while iterating input files.
by jwkrahn (Abbot) on Sep 21, 2008 at 11:57 UTC

    It looks like you want something like this:

    #!/usr/bin/perl
    use warnings;
    use strict;

    while ( <> ) {
        if ( $. == 1 ) {
            open OUTPUT, '>', "out$ARGV" or die "Cannot open out$ARGV: $!";
        }
        print OUTPUT;
        close ARGV if eof;    # resets $. so the next file starts at line 1
    }
Re: Changing the output filehandle while iterating input files.
by Anonymous Monk on Sep 21, 2008 at 08:31 UTC
    It seems there should be a way to avoid checking whether the filename in $ARGV has changed on every single line of input. Thanks,

    There is. Don't use <>, use open/close and you won't have to check.
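    A sketch of that approach (the "out$file" naming simply mirrors the OP's code):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # One explicit open/close pair per file: no need to watch $ARGV at all
    for my $file (@ARGV) {
        open my $in,  '<', $file      or die "Cannot open $file: $!";
        open my $out, '>', "out$file" or die "Cannot open out$file: $!";
        print {$out} $_ while <$in>;
        close $in;
        close $out or die "Cannot close out$file: $!";
    }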