GertMT has asked for the wisdom of the Perl Monks concerning the following question:

I have a table and my goal is to do a Unix-style:
grep 'pattern' file.txt >tofile.txt
I would like the pattern to depend on the unique values in the first column. The final goal is to print each group of lines to a separate file named after its unique value. I haven't been able to figure out how; it's my first attempt in Perl.
Thanks for any feedback,
Gert
use strict;
use diagnostics;
use warnings;
use vars qw! $file $col @F $val $count @order %count $i !;

my $file = $ARGV[0];
open( INPUT, $file ) || die "does not work: $!";
$col = 0;
while (<INPUT>) {
    s/\r?\n//;
    @F   = split /,/, $_;
    $val = $F[$col];
    if ( !exists $count{$val} ) { push @order, $val }
    $count{$val}++;
}
foreach $val (@order) { print "$val\n" }
close(INPUT);

open( INPUT, $file ) || die "does not work: $!";
open( MYFILE, ">printhere.txt" ) or die "does not work $!";
while (<INPUT>) {
    for ( $i = 0 ; $i < scalar @order ; $i++ ) {
        if (m/^$order[$i]/) { print MYFILE }
    }
}
close(INPUT);
close(MYFILE);

Replies are listed 'Best First'.
Re: Split up file depending unique values 1st column
by jwkrahn (Abbot) on Dec 16, 2006 at 23:36 UTC
    It sounds like you want something as simple as:
    use warnings;
    use strict;

    my $file = shift;
    open my $IN, '<', $file or die "Cannot open '$file' $!";

    while ( <$IN> ) {
        my ( $field ) = /^([^,]+)/ or die "Error: field not found.\n";
        open my $OUT, '>>', $field or die "Cannot open '$field' $!";
        print $OUT $_;
    }
    close $IN;
      This works really well! I did expect the code could be shorter, but even so I'll definitely need to take some time to figure out what's going on.
      Thanks again, Gert
        If you really want shorter code then:
        use warnings;
        use strict;

        /^([^,]+)/ && open F, ">>$1" and print F while <>;

        __END__
        :-) The way it works: for every line that matches the regular expression, it opens a file in append mode, using the first field as the file name, and prints the current line to the end of that file.
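        De-golfed, the one-liner above is roughly equivalent to the following sketch (the sub wrapper and `append_split` name are mine, added so the logic can be called on a given file; the per-line behaviour is the same as the one-liner's):

```perl
use strict;
use warnings;

# De-golfed equivalent of the one-liner: for each input line whose first
# comma-separated field matches, append the line to a file named after
# that field, re-opening the file on every line just as the original does.
sub append_split {
    local @ARGV = @_;    # let <> read the given files
    while ( my $line = <> ) {
        my ($field) = $line =~ /^([^,]+)/ or next;
        open my $out, '>>', $field or die "Cannot open '$field': $!";
        print $out $line;
    }
}

append_split(@ARGV) if @ARGV;
```

        Re-opening the output file on every line is wasteful for large inputs, which is why the longer replies in this thread keep handles open instead, but for small files it is perfectly serviceable.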

Re: Split up file depending unique values 1st column
by graff (Chancellor) on Dec 16, 2006 at 23:20 UTC
    The best/easiest solution would depend on how many distinct values are possible in the first column of the file, which determines how many output files you're going to create.

    If there are fewer than a couple dozen, then you can just open that many output files and print each line to one of those files as you read through the input.
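    A minimal sketch of that first approach, caching one open handle per distinct first-column value in a hash (the sub name `split_by_first_column` and the `.txt` suffix on the output names are my own choices, not from the thread):

```perl
use strict;
use warnings;

# Split $infile into one output file per distinct first-column value,
# keeping an open handle for each -- fine when there are only a few keys.
sub split_by_first_column {
    my ($infile) = @_;
    open my $in, '<', $infile or die "Cannot open '$infile': $!";

    my %fh_for;    # first-column value => output filehandle
    while ( my $line = <$in> ) {
        my ($key) = $line =~ /^([^,]+)/ or next;    # skip malformed lines
        unless ( $fh_for{$key} ) {
            open $fh_for{$key}, '>', "$key.txt"
                or die "Cannot open '$key.txt': $!";
        }
        print { $fh_for{$key} } $line;
    }

    close $in;
    close $_ for values %fh_for;
}

split_by_first_column( $ARGV[0] ) if @ARGV;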

    If there are lots of different values/output files, then it would be better to sort the input file first, so that distinct values in the first column are clumped together, and you can open and close output files as you go through the input, and you only need one output file open at any given time.

    Since the latter approach works equally well for all cases, that's the one I'd rather go with. It assumes that you have a decent utility to sort the input before feeding it to your perl script (e.g. unix/GNU "sort"):

    use strict;
    use warnings;

    open( INPUT, "sort $ARGV[0] |" ) or die "can't sort $ARGV[0]: $!";

    my $outname = "";
    while (<INPUT>) {
        if ( /^(.+?),/ ) {
            my $newname = $1;
            if ( $newname ne $outname ) {
                close OUT if ( $outname );
                open( OUT, ">$newname" ) or die "can't output to $newname: $!";
                $outname = $newname;
            }
            print OUT;
        }
        else {
            warn "Sorted input from $ARGV[0] had unusable data at line $.: $_\n";
        }
    }
    close OUT;
    (not tested; updated to remove the unnecessary $col variable, and to use ".+" instead of ".*" in the regex match to capture the first-column/file-name string -- don't want an empty string there.)
      This does work. The number of output files is always fewer than 10. I'll have to work on the sort, as I get a warning:
      Sorted input from file.txt had unusable data at line 1:
      Still, it works, and I think I'll be able to understand what's going on. Many thanks,
      Gert