Doing a file open and file close for every line gets really expensive if there happen to be thousands of lines of input.
Perl allows you to store file handles in a hash, so you can open a new file each time you see a new "hostname" string, and just re-use that handle whenever you see the same name again:
    # set $listfile to some constant, or to $ARGV[0] (and supply the file name
    # as a command-line arg when you run the script)
    my %outfh;    # hash to hold output file handles
    open( my $origfh, "<", $listfile ) or die "$listfile: $!";
    while ( <$origfh> ) {
        my ( $host, $data ) = split " ", $_, 2;
        if ( ! exists( $outfh{$host} )) {
            open( $outfh{$host}, ">", $host ) or die "$host: $!";
        }
        print { $outfh{$host} } $data;    # braces are required around a hash-element filehandle
    }
    # perl will flush and close output files when done

Of course, if there are lots of different host names in the input file (or if there is something really wrong and unexpected in the list file contents), the script would die when it tries to open too many file handles.
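If that descriptor limit is a real concern, one workaround (a sketch, not part of the original post; the cap `$MAX_OPEN` and the helper name `get_handle` are made up here) is to close the cached handles once the cap is reached and reopen files in append mode, so output written earlier is preserved:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical cap on simultaneously open output handles; the real limit
# comes from the OS (see "ulimit -n"), so in practice you'd set this
# somewhat below that.
my $MAX_OPEN = 1;

my %outfh;   # currently open output handles, keyed by host name
my %seen;    # hosts we have already written to at least once

# Return a write handle for $host. First time a host is seen, open with
# ">" (truncate); after a handle has been evicted, reopen with ">>"
# (append) so earlier output survives.
sub get_handle {
    my ($host) = @_;
    return $outfh{$host} if exists $outfh{$host};
    if ( keys(%outfh) >= $MAX_OPEN ) {
        close $_ for values %outfh;   # evict everything when the cap is hit
        %outfh = ();
    }
    my $mode = $seen{$host}++ ? ">>" : ">";
    open( my $fh, $mode, $host ) or die "$host: $!";
    return $outfh{$host} = $fh;
}
```

In the main loop you would then write `print { get_handle($host) } $data;` instead of touching `%outfh` directly. Evicting all handles at once is crude; an LRU scheme would close fewer files, at the cost of more bookkeeping.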
In reply to Re^3: Seperating individual lines of a file
by graff
in thread Seperating individual lines of a file
by tgrossner