
In addition to my comments to jlawrenc, I would like to point out that in Perl explicit indexing is usually unnecessary, and avoiding it seriously reduces the potential for error. In this case you can use push instead. The section that just reads the file then reduces to:
    while (<DATA>) {
        chomp;
        push @record, [split(/,/, $_, -1)];
    }
which is considerably shorter, runs faster, and leaves less room for error.
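For a self-contained demonstration (the sample rows here are made up), the whole thing runs as:

    use strict;

    my @record;
    while (<DATA>) {
        chomp;
        push @record, [split(/,/, $_, -1)];
    }

    # Rows come back in file order; fields are accessed by position:
    print "row 1, field 2: $record[0][1]\n";   # prints "red"

    __DATA__
    apple,red,3
    banana,yellow,5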

Also, reading the file and sorting it are two different jobs. You are likely to want to read the file into a data structure for lots of reasons, and you are likely to discover later that you need to sort that data in lots of ways. Why not have two functions?
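For instance (a sketch with function names of my own choosing, building on the array-of-arrays snippet above), the reader and the sorter can stay completely independent:

    # Read every line of an already-open filehandle into
    # an array of arrays, one per row.
    sub read_records {
        my $fh = shift;
        my @records;
        while (<$fh>) {
            chomp;
            push @records, [split(/,/, $_, -1)];
        }
        return @records;
    }

    # Sort by any column, chosen at the call site.
    # $col is a 0-based column index.
    sub sort_by_column {
        my ($col, @records) = @_;
        return sort { $a->[$col] cmp $b->[$col] } @records;
    }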

Of course, whenever I see a CSV format without the field names in the first row, I tend to get upset. And I really prefer hashes. So the above snippet would set off a bunch of danger signs for me. Any data format that I have a say in will include the column names in the format itself, and code that handles it will be expected to cope with columns moving around. In this simple case a function to read the format could look like this:

    use strict;
    use Carp;

    # Time passes...

    sub read_csv {
        my $file = shift;
        local *CSV;
        open (CSV, $file) or confess("Cannot read '$file': $!");
        my $header = <CSV>;
        chomp($header);
        my @fields = split /,/, $header;
        # You could do an error check for repeated field names...
        my @data;
        while (<CSV>) {
            chomp;
            my $row;
            @$row{@fields} = split(/,/, $_, -1);
            push @data, $row;
        }
        return @data;
    }
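Usage then stays one line per concern; for example (the file name and field names here are hypothetical):

    my @data   = read_csv("people.csv");
    my @by_age = sort { $a->{age} <=> $b->{age} } @data;
    print "$_->{name} is $_->{age}\n" for @by_age;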
I keep meaning to clean up and post a more robust version of this that handles quoting and fields with embedded commas and returns, and that can be used either for slurping (like this) or in a stream-oriented fashion...
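In the meantime, the CPAN module Text::CSV (or the faster Text::CSV_XS) already handles quoting and embedded commas. Here is a minimal sketch of a header-aware slurping reader built on it, using only calls I know from its documentation; the function name read_csv_robust is mine:

    use strict;
    use Text::CSV;   # from CPAN; Text::CSV_XS is the XS variant

    # Hypothetical name; assumes the file's first row is a header.
    sub read_csv_robust {
        my $file = shift;
        my $csv  = Text::CSV->new({ binary => 1 })
            or die "Cannot construct Text::CSV: " . Text::CSV->error_diag;
        open(my $fh, '<', $file) or die "Cannot read '$file': $!";
        # Take the column names from the header row...
        $csv->column_names($csv->getline($fh));
        # ...then read each remaining row as a hash keyed by those names.
        my @data;
        while (my $row = $csv->getline_hr($fh)) {
            push @data, $row;
        }
        close($fh);
        return @data;
    }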