I'm doing a data conversion project, and I've written a lot of this kind of script. The first script reads a comma-separated-value file whose lines look like:
productid,long description goes here,acctnumber

It reads the file and creates two associative arrays for the second and third fields, keyed by the first field:
#!/usr/bin/perl
use warnings;
use strict;
use vars qw/ %descs %salesAccts /;   # package globals, visible to any script that requires this file

open(UPRODFILE, '<', 'uprod.csv') or die "Can't open uprod.csv: $!";
while (<UPRODFILE>) {
    chomp;
    my ($prodID, $desc, $salesAcct) = split(/,/);
    # print "$prodID -- $desc -- $salesAcct \n";
    $descs{$prodID}      = $desc;
    $salesAccts{$prodID} = $salesAcct;
}
close(UPRODFILE);

#foreach my $i (sort keys %descs) {
#    print "$i == $descs{$i} == $salesAccts{$i} \n";
#}

1;   # true value so this file can be require'd by the second script
Then there's a second script that reads a second file of product IDs (one per line), looks each one up in the two associative arrays, and creates a third file in the same format as the first:
#!/usr/bin/perl
require 'uprod.pl';   # loads %descs and %salesAccts

while (<>) {
    chomp;
    print "$_, $descs{$_}, $salesAccts{$_}\n";
}
This one runs in sloppy mode (no strict or warnings) because I didn't want to generate warnings for cases where the lookup data was missing. Usually I have warnings and strict turned on in every script; it just promotes better coding.
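If you'd rather keep strict and warnings on even here, one option is to handle missing keys explicitly. Here's a minimal sketch of that variant (an illustration, assuming uprod.pl can be found via @INC):

#!/usr/bin/perl
# Strict-safe variant of the lookup script (a sketch): missing
# product IDs fall back to empty strings instead of triggering
# "uninitialized value" warnings.
use strict;
use warnings;

our (%descs, %salesAccts);   # the package globals populated by uprod.pl
require 'uprod.pl';

while (<>) {
    chomp;
    my $desc      = defined $descs{$_}      ? $descs{$_}      : '';
    my $salesAcct = defined $salesAccts{$_} ? $salesAccts{$_} : '';
    print "$_, $desc, $salesAcct\n";
}

(Note that on Perl 5.26 and later the current directory is no longer in @INC by default, so require 'uprod.pl' may need a use lib '.' or a full path.)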
N.B. This requires that none of the fields in the first file contain embedded commas. If they do, this would have to be re-written using the Text::CSV module, which I have used on occasion, but avoid when unnecessary.
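For the record, here's a minimal sketch of the first script's read loop rewritten with Text::CSV (assuming the module is installed from CPAN); it handles quoted fields containing embedded commas:

#!/usr/bin/perl
# Sketch of the reader using Text::CSV, which parses quoted
# fields with embedded commas correctly.
use strict;
use warnings;
use Text::CSV;
use vars qw/ %descs %salesAccts /;

my $csv = Text::CSV->new({ binary => 1 })
    or die "Cannot use Text::CSV: " . Text::CSV->error_diag();

open(my $fh, '<', 'uprod.csv') or die "Can't open uprod.csv: $!";
while (my $row = $csv->getline($fh)) {
    my ($prodID, $desc, $salesAcct) = @$row;
    $descs{$prodID}      = $desc;
    $salesAccts{$prodID} = $salesAcct;
}
close($fh);

1;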
For me this is much faster and more predictable than using MS Excel with VLOOKUP or something like that.