Not bad. I would change the way the keys are generated slightly, though. This change allows the data vendor to do weird things, like change the order of the headers or add/remove columns, without affecting your underlying process (unless they completely balls it up).
my %item_map = (
    'CO.NAME'   => 'name',
    'MARKETCAP' => 'cap',
    'INDUSTRY'  => 'ind',
);

chomp(my $headers = <DATA>);
my @headers = map { s/\s//g; ($item_map{$_} || lc $_) } split /\s*\|\s*/, $headers;
Yes, purists may point out that this is serious over-engineering. I would retort that if they had to maintain loading filesets from a couple of hundred different FTSE feeds, they might rethink their position (though to be fair, FTSE are a lot better these days).
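To make the idea concrete, here is a minimal self-contained sketch of how the mapped headers could then drive building the hash of records. The feed contents, field names, and the choice of keying by company name are my own illustrative assumptions, not from the original thread:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map vendor header names to the short keys our process expects.
# Unknown headers fall through to a lowercased version of themselves.
my %item_map = (
    'CO.NAME'   => 'name',
    'MARKETCAP' => 'cap',
    'INDUSTRY'  => 'ind',
);

# Hypothetical feed: a header row followed by pipe-delimited records.
my @lines = (
    'CO.NAME | MARKETCAP | INDUSTRY',
    'Acme    | 100       | Widgets',
    'Foo     | 250       | Gadgets',
);

chomp(my $header_line = shift @lines);
my @headers = map { s/\s//g; ($item_map{$_} || lc $_) } split /\s*\|\s*/, $header_line;

# Build a hash of records keyed by company name, using the mapped
# headers, so column order in the feed no longer matters.
my %companies;
for my $line (@lines) {
    my @fields = split /\s*\|\s*/, $line;
    my %record;
    @record{@headers} = @fields;
    $companies{ $record{name} } = \%record;
}

print "$_: $companies{$_}{cap} ($companies{$_}{ind})\n" for sort keys %companies;
```

Because the records are keyed by header name rather than column position, the vendor can reorder or append columns and the downstream lookups (`$record{cap}`, `$record{ind}`) keep working.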
In reply to Re^2: Reading Text File into Hash to Parse and Sort
by SimonPratt
in thread Reading Text File into Hash to Parse and Sort
by Perl_Derek