Step 1: pull out the header.
$_ = <IN>; chomp; my @column = split /\t/;
Step 2: read each line and convert it to a hash with the column names as keys:
my @data; while(<IN>) { chomp; my %row; @row{@column} = split /\t/; push @data, \%row; }
That's it: the whole file is read into @data as an array of hashes. I think you'd probably need more code than that when using Text::CSV_XS.
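To make the two steps above concrete, here is a minimal, self-contained sketch. The column names and the in-memory TSV data are made up for illustration; in real use you'd open an actual file instead.

```perl
use strict;
use warnings;

# Stand-in for a real file: a small tab-separated table opened as an
# in-memory filehandle (the contents are invented for this example).
my $tsv = "name\tsex\tbody mass index\nAlice\tF\t42\nBob\tM\t28\n";
open my $in, '<', \$tsv or die "open: $!";

# Step 1: pull out the header.
my $header = <$in>;
chomp $header;
my @column = split /\t/, $header;

# Step 2: read each line into a hash keyed by the column names.
my @data;
while (<$in>) {
    chomp;
    my %row;
    @row{@column} = split /\t/;
    push @data, \%row;
}

print $data[0]{'name'}, "\n";
print $data[1]{'body mass index'}, "\n";
```

Each element of @data is a hash reference, so individual fields are reached as $data[$i]{'column name'}.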
As for your final request, the filtering: depending on whether you want to reuse the same source for something else as well, and on whether the data is huge (a fairly weak constraint nowadays, as several MB of data is considered "small"), you can either filter @data afterwards using grep, or test each row before pushing it onto @data.
Assuming the condition can be written as:

$row{'sex'} eq 'F' and $row{'body mass index'} > 40 and $row{'blood pressure'} > 135

you can do:

@filtered = grep { $_->{'sex'} eq 'F' and $_->{'body mass index'} > 40 and $_->{'blood pressure'} > 135 } @data;

or:

push @data, \%row if $row{'sex'} eq 'F' and $row{'body mass index'} > 40 and $row{'blood pressure'} > 135;

Note that in the former a row is a hash ref in $_, while in the latter it's a plain hash in %row.
Perl is one of the very few languages whose syntax distinguishes between the two (a plain hash and a reference to one), and although that has its advantages (flattening lists is very easy in Perl), having different syntax for the two cases is rather annoying, IMHO.
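A small demo of that distinction, with invented field names, in case it helps: the same data is accessed with $row{key} as a plain hash and $ref->{key} through a reference, and only the plain hash flattens into a list.

```perl
use strict;
use warnings;

my %row = (sex => 'F', bmi => 42);
my $ref = \%row;                 # a reference to the same hash

# Plain hash uses $row{key}; a hash ref needs the arrow: $ref->{key}.
print $row{sex}, "\n";
print $ref->{bmi}, "\n";

# Flattening: a plain hash interpolates into a list as key/value pairs,
# so this list has 2 * 2 + 1 = 5 elements.
my @flat = (%row, 'extra');
print scalar(@flat), "\n";
```

A hash reference in the same list would stay a single scalar element, which is exactly the distinction that makes grep blocks use $_->{...} while the push-time test uses $row{...}.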
In reply to Re: how to extract data from an array using a condition
by bart
in thread how to extract data from an array using a condition
by kayj