Re: reading multiple files one at a time
by voyager (Friar) on Jul 18, 2001 at 02:36 UTC
Your code will read only the first line from each file, since you do a "last;" after reading one line.
Get rid of the last and you'll read all the lines in each file. Or is it that you want to read the first line from each file, then the second, etc.?
Hi,
I know this is an old thread, but this is what I am trying to do.
I have three files and I want to merge them into one file, a single line at a time.
At the moment I have this, from the top post:
@filenames = qw/English Kana Kanji/;
foreach my $file (@filenames) {
    open FILE, "<$file";
    while ( chomp( $line = <FILE> ) ) {
        print "$line ";
        last;
    }
    close FILE;
}
Output:
Mr Tanaka たなかさん 田中さん
However, when I take out the 'last' I get everything from my first file, then the second file, then the third file.
How do I merge the lines of each file and print them combined, as in the output above?
Many thanks,
Peter.
Updated: Corrected. Replaced the untested attempt to mitigate the warning produced when one or more of the input files is shorter than the others.
#! perl -slw
use strict;
my @files = @ARGV;
my @fhs;
my $i = 0;
open $fhs[ $i++ ], '<', $_ or die "$_ : $!" for @files;
#while( my @lines = map{ scalar <$_> || () } @fhs ) { !!Bad code!!
while( ## Build an array of one line (or the null string) from each file.
    my @lines = map {
        defined( my $line = <$_> )
            ? $line
            : ''
    } @fhs
) {
    last unless grep length, @lines;   ## Stop once every file is exhausted.
    chomp @lines;                      ## Remove the newlines
    print join ' ', @lines;            ## and concatenate them.
}
close $_ for @fhs;
__END__
P:\test>465737 test1.dat test2.dat test3.dat
file1 line 1 file2 line 1 file3 line 1
file1 line 2 file2 line 2 file3 line 2
file1 line 3 file2 line 3 file3 line 3
file1 line 4 file2 line 4 file3 line 4
...
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
Unix systems already have a standard command line utility to do this. It's called "paste":
paste file1 file2 file3 > merged_file
And like all good Unix tools, of course, it has been ported to Windows (by Cygwin, AT&T Research Labs, GNU, and others).
But using perl for this (as demonstrated by BrowserUK) is still fun and rewarding in its own right -- e.g. in case you need to do character encoding conversions while merging files, or manage column widths or modify column contents in intelligent ways. (You could use "paste" in combination with other standard unix tools to do these things, but at that point, it's not so different from just writing a little perl script.)
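For instance, if the input files needed converting from a legacy encoding on the way through, a little script along these lines would do it (an untested sketch only; the shiftjis/UTF-8 encodings are assumptions, not anything stated in this thread):
use strict;
use warnings;

my @files = qw/English Kana Kanji/;    # the files from the question above

# Open each input with a decoding layer; the encoding is only a guess here.
my @fhs = map {
    open my $fh, '<:encoding(shiftjis)', $_ or die "$_: $!";
    $fh;
} @files;
binmode STDOUT, ':encoding(UTF-8)';

# Read one line from each file per pass until every file is exhausted.
while ( grep { !eof($_) } @fhs ) {
    my @lines = map { eof($_) ? '' : scalar <$_> } @fhs;
    chomp @lines;
    print join( ' ', @lines ), "\n";
}

close $_ for @fhs;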
Re: reading multiple files one at a time
by Cubes (Pilgrim) on Jul 18, 2001 at 05:44 UTC
I don't see anything wrong with your original version (and, in fact, it runs as expected on my box).
I see a couple of opportunities for things to get fouled up:
1. Where does @filenames come from? Are you sure it's not empty?
2. Your original doesn't check whether or not the open() succeeds, so it might just have been silently failing to open anything in @filenames.
If your original version is still failing, try printing out what's in @filenames and/or $file each time through, and do add the 'or warn $!' after your open(). The infamous 'use strict' and -w might also help you out if the problem turns out to be something simple, like variable name typos.
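Something like this (just a sketch; the file names come from the question above, the rest is guesswork) would show whether each open actually succeeds:
use strict;
use warnings;

my @filenames = qw/English Kana Kanji/;
for my $file (@filenames) {
    print "Trying '$file'\n";    # confirm @filenames holds what you expect
    open my $fh, '<', $file
        or do { warn "Can't open '$file': $!"; next };
    while ( my $line = <$fh> ) {
        chomp $line;
        print "$line ";
    }
    close $fh;
    print "\n";
}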
Re: reading multiple files one at a time
by jumpstop (Initiate) on Jul 18, 2001 at 02:41 UTC
I only included the last so that just the first line prints, making it easier to sort through the output. OeufMayo sent me this answer, which works well, although I'm not sure of the difference. Maybe something to do with $_? If someone can explain, I will listen!
foreach (@filenames) {
    open F, $_ or warn $!;
    while (<F>) {
        print "$_\n";
        last;
    }
}
Thanks again OeufMayo!
Re: reading multiple files one at a time
by petral (Curate) on Jul 18, 2001 at 09:41 UTC
Strange but true: You don't have to close the file, as the next open F will close it. BUT: if you want $. to reflect the line numbers of _each_ file (that is, start over at 1 on the first line of each file), then you do have to close it.
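A tiny demonstration of that behaviour (the file names here are made up):
# With the close in place, $. restarts at 1 for each file; comment the
# close out and $. keeps counting across both files.
for my $file (qw/first.txt second.txt/) {
    open F, '<', $file or die "$file: $!";
    while (<F>) {
        print "$file line $.: $_";
    }
    close F;
}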
Quite some time ago, writing into several different files from a function which opened each one every time, I had a problem where data would not flush correctly. My only guess at the time was that the implicit close on my version of Perl was not flushing to disk. Not sure this is the correct cause, but adding the single close line fixed it, so I always explicitly close filehandles ... just a warning.
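For what it's worth, if flushing is the worry, you can also force it explicitly instead of relying on close. A minimal sketch (file name made up) using IO::Handle's autoflush:
use IO::Handle;

open my $out, '>', 'output.log' or die "output.log: $!";
$out->autoflush(1);    # flush the buffer after every print
print {$out} "some data\n";
close $out or die "close failed: $!";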
OH, a sarcasm detector, that’s really useful
The 'extra' close allows for error checking as well; that's why it's a good idea to keep it - of course, only if you really check for errors ;)
close FILE or die "Couldn't close file: $!";
If you nevertheless want the continuous counting of lines a la $., it shouldn't be a problem to do that yourself with something like $linecount++
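For example, something along these lines (an untested sketch; @filenames is assumed to hold the input files) keeps a single running count across all the files:
my $linecount = 0;
for my $file (@filenames) {
    open my $fh, '<', $file or die "Couldn't open $file: $!";
    while (<$fh>) {
        $linecount++;
        print "$linecount: $_";
    }
    close $fh or die "Couldn't close $file: $!";
}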
-- Hofmator