in reply to File::Sort issues
How to generically compare two CSV files is difficult to answer. It depends on whether you can read the entire files into memory, and whether their fields match. The simplest method is to normalize both CSV files, sort them, and then diff them.
The simplest way to normalize them is to parse each file and then write it back out. If you do this with the same module for both files (using the same options), any rows with the same values should, in theory, be output identically.
Normalizing with Text::CSV_XS is straightforward:
#!/usr/bin/perl

use warnings;
use strict;

use Text::CSV_XS;

{
    die("usage: $0 [<file>]\n") if @ARGV > 1;

    my ($file, $fh);
    if (@ARGV) {
        $file = $ARGV[0];
        open($fh, '<', $file)
            || die("Unable to open file '$file': $!.\n");
    }
    else {
        $file = '-';
        $fh   = \*STDIN;
    }

    # binary => 1 allows embedded newlines and other non-ASCII data;
    # eol forces a consistent CRLF line ending on output.
    my $csv = Text::CSV_XS->new({ binary => 1, eol => "\015\012" });

    while (my $row = $csv->getline($fh)) {
        $csv->print(\*STDOUT, $row);
    }

    die("Error parsing CSV file '$file': ", $csv->error_diag, "\n")
        if $csv->error_diag and not $csv->eof;
}
(My first pass used *ARGV, but this results in some odd diagnostics and weird edge cases.)
At this point, you simply sort the output. The field values and the header are irrelevant; you're just making all of your CSV files consistent so diff can make sense of them.
diff -u <(csv-normalize csv1.csv | sort) <(csv-normalize csv2.csv | sort)
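Note that process substitution (`<(...)`) is a bash/zsh feature. A sketch of the same comparison in any POSIX shell uses temporary files; here the inputs x.csv and y.csv are hypothetical, already-normalized files containing the same rows in a different order:

```shell
# Hypothetical already-normalized inputs: same rows, different order.
printf 'b,2\na,1\n' > x.csv
printf 'a,1\nb,2\n' > y.csv

# Sort each into a temporary file, then diff the sorted copies.
sort x.csv > x.sorted
sort y.csv > y.sorted

diff -u x.sorted y.sorted && echo 'no differences'
```

Because the rows are identical once sorted, diff produces no output and the trailing echo runs, printing "no differences".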
This is the simplest and quickest way to compare two CSV files. It has the advantage of working quickly on relatively large files, but it won't work if the field layout differs between them.
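If you want a list of which rows are unique to each file rather than a unified diff, comm(1) on the sorted output works too. A sketch, with a.csv and b.csv as hypothetical already-normalized inputs:

```shell
# Hypothetical already-normalized inputs sharing one data row and a header.
printf 'id,name\n1,alice\n2,bob\n' > a.csv
printf 'id,name\n2,bob\n3,carol\n' > b.csv

# comm requires sorted input.
sort a.csv > a.sorted
sort b.csv > b.sorted

comm -23 a.sorted b.sorted   # rows only in a.csv: 1,alice
comm -13 a.sorted b.sorted   # rows only in b.csv: 3,carol
```

Rows common to both files (here the header and `2,bob`) are suppressed by the `-3` flag, so only the genuine differences are printed.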
Replies are listed 'Best First'.
Re^2: File::Sort issues
by aartist (Pilgrim) on Jul 11, 2011 at 23:53 UTC
by Somni (Friar) on Jul 12, 2011 at 00:22 UTC