G'day The_Dj,
Although not stated, I'm assuming all id, sn, etc. values are unique. If that's not the case, neither your current solution nor my alternative suggestion will work properly.
Instead of recreating the entire hash multiple times, consider just having a single hash with all the data and then simple mappings of sn to id (extending for future requirements).
Here's a quick example:
#!/usr/bin/env perl

use strict;
use warnings;

my %dat_by_id = (
    1 => { id => 1, sn => 'a', more => 'foo' },
    2 => { id => 2, sn => 'b', more => 'bar' },
);

my %map_sn_to_id = map +($_->{sn} => $_->{id}), values %dat_by_id;

print "SN for ID[1]: $dat_by_id{1}{sn}\n";
print "ID for SN[b]: $dat_by_id{$map_sn_to_id{b}}{id}\n";
print "MORE for SN[a]: $dat_by_id{$map_sn_to_id{a}}{more}\n";

# Subsequent requirements, e.g.
my %map_more_to_id = map +($_->{more} => $_->{id}), values %dat_by_id;

print "ID for MORE[foo]: $dat_by_id{$map_more_to_id{foo}}{id}\n";
print "SN for MORE[bar]: $dat_by_id{$map_more_to_id{bar}}{sn}\n";
Output:
SN for ID[1]: a
ID for SN[b]: 2
MORE for SN[a]: foo
ID for MORE[foo]: 1
SN for MORE[bar]: b
Having a single data source will reduce the chances of errors and should make maintenance and debugging (if necessary) easier.
I see you've used "map BLOCK LIST", and I'm aware that's considered a Best Practice; however, "map EXPR, LIST" is faster and may make a difference, especially when you're dealing with millions of data elements. Use Benchmark to test. See map for more on these two forms, as well as an explanation of the unary plus in "map +(...", which I used (if you're unfamiliar with that syntax).
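If you want to measure the two forms yourself, a rough sketch of such a comparison with the core Benchmark module might look like this (the 10,000-element dataset is just an arbitrary size for illustration; timings will vary by machine and data shape):

```perl
#!/usr/bin/env perl

use strict;
use warnings;
use Benchmark 'cmpthese';

# Build a sample dataset of hypothetical size.
my %dat_by_id = map +($_ => { id => $_, sn => "sn$_" }), 1 .. 10_000;

# Compare "map BLOCK LIST" against "map EXPR, LIST";
# negative count means "run each for at least 1 CPU second".
cmpthese(-1, {
    block => sub { my %m = map { $_->{sn} => $_->{id} } values %dat_by_id },
    expr  => sub { my %m = map +($_->{sn} => $_->{id}), values %dat_by_id },
});
```

Both forms produce identical mappings; cmpthese() prints a table of iteration rates and relative percentage differences so you can judge whether the gap matters at your data volumes.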
I've only shown a barebones technique. For production usage, I'd suggest setting up a series of functions, e.g. get_id_for_sn($sn), instead of having to continually hard-code an equivalent $dat_by_id{$map_sn_to_id{$sn}}{id}.
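To make that concrete, here's a minimal sketch of what such accessors might look like. Only get_id_for_sn() is named above; get_more_for_sn() is a hypothetical companion following the same pattern:

```perl
#!/usr/bin/env perl

use strict;
use warnings;

my %dat_by_id = (
    1 => { id => 1, sn => 'a', more => 'foo' },
    2 => { id => 2, sn => 'b', more => 'bar' },
);

my %map_sn_to_id = map +($_->{sn} => $_->{id}), values %dat_by_id;

# Wrap the double-lookup chain in named functions so callers
# never need to know the internal layout.
sub get_id_for_sn {
    my ($sn) = @_;
    return $dat_by_id{ $map_sn_to_id{$sn} }{id};
}

# Hypothetical further accessor, same pattern.
sub get_more_for_sn {
    my ($sn) = @_;
    return $dat_by_id{ $map_sn_to_id{$sn} }{more};
}

print 'ID for SN[b]: ',   get_id_for_sn('b'),   "\n";
print 'MORE for SN[a]: ', get_more_for_sn('a'), "\n";
```

If the internal structure later changes (say, the maps are rebuilt or replaced by a database), only the function bodies need updating, not every call site.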
— Ken
In reply to Re: Too Many IDs
by kcott
in thread Too Many IDs
by The_Dj