Hello, kind monks. This has got to be a prime example of the TMTOWTDI-ishness of Perl, but I will ask for comments on what I came up with anyway (it was inspired by something half-remembered that I read somewhere once...):
I want to take a list of URLs and sort them by filename (a string sort, not a numeric one), but I also need to get rid of expected duplicates. So I am going for both uniqueness and sortedness. My solution, which works well enough, is this:
    my %UHash;
    my $lk;
    for ( my $itr = 0; $itr < @Linked_pics; $itr++ ) {
        # key on the filename: everything after the last '/'
        $lk = substr( $Linked_pics[$itr], rindex( $Linked_pics[$itr], '/' ) + 1 );
        $UHash{$lk} = $Linked_pics[$itr];
    }
    # rebuild the array from the hash values, in filename order
    @Linked_pics = map { $UHash{$_} } sort keys %UHash;
The array @Linked_pics is already loaded with entries going into this routine, of course. Can any of our esteemed Wise Ones see any bloat that ought to be corrected here?
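For comparison, here is a more compact sketch of the same idea, using a hash slice to pull the values back out in sorted-key order. This is only an illustration: the sample entries in @Linked_pics below are made up, since in my real code the array is already populated before this point.

    use strict;
    use warnings;

    # hypothetical sample data, just so the sketch runs on its own
    my @Linked_pics = (
        'http://example.com/pics/beta.png',
        'http://example.com/mirror/alpha.png',
        'http://example.com/pics/alpha.png',   # duplicate filename, last one wins
    );

    # key each URL on its filename so duplicates collapse into one entry
    my %UHash = map { substr( $_, rindex( $_, '/' ) + 1 ) => $_ } @Linked_pics;

    # a hash slice returns the values in the order of the sorted keys
    @Linked_pics = @UHash{ sort keys %UHash };

    print "$_\n" for @Linked_pics;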
THANKS,
soren andersen Intrepid
In reply to Unique & sorted array optimization? by Intrepid
| For: | Use: | ||
| & | & | ||
| < | < | ||
| > | > | ||
| [ | [ | ||
| ] | ] |