in reply to getting unique AOH from nonunique AOH ... or hash if it is better approach

If you need to retain the original order in your unique array, a different approach is required, as hashes are inherently unordered. The map/grep/map pipeline in the one-liner below reads from the bottom up. The first (lower) map takes each line read from the input file and removes any trailing whitespace, including the line terminator (I do this instead of chomping in case there are differing numbers of trailing spaces in the data). The grep uses the %seen hash to pass through only lines it hasn't already seen, thus removing duplicates from the stream while preserving first-occurrence order. The second (upper) map splits each surviving line on commas into an anonymous array.
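To see the idiom in isolation before the full one-liner, here is a minimal sketch with made-up sample data (my own illustration, not the code below), annotated in the order the pipeline actually runs:

    use strict;
    use warnings;

    my @lines = ( "a,1\n", "b,2\n", "a,1\n", "c,3\n" );

    my %seen;
    my @unique =
        map  { [ split m{,} ] }     # 3. split each surviving line into an anonymous array
        grep { ! $seen{ $_ } ++ }   # 2. pass a line through only the first time it is seen
        map  { s{\s*$}{}; $_ }      # 1. strip trailing whitespace (note: s/// edits $_ in place)
        @lines;

    # @unique is now ( [ 'a', '1' ], [ 'b', '2' ], [ 'c', '3' ] )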

$ perl -Mstrict -Mwarnings -MData::Dumper -E '
open my $inFH, q{<}, \ <<EOF or die $!;
server1,user1,%
server1,user1,db1
server1,user2,%
server1,user3,%
server1,user1,%
server1,user2,%
server1,user2,db2
server1,user3,db3
server1,user3,%
server2,user1,%
server2,user1,db1
server2,user2,%
server2,user3,%
server2,user1,%
server2,user2,%
server2,user2,db2
server2,user3,db3
server2,user3,%
server3,user1,%
server3,user1,db1
server3,user2,%
server3,user3,%
server3,user1,%
server3,user2,%
server3,user2,db2
server3,user3,db3
server3,user3,%
EOF
my @logins = do {
    my %seen;
    map  { [ split m{,} ] }
    grep { ! $seen{ $_ } ++ }
    map  { s{\s*$}{}; $_ }
    <$inFH>;
};
print Data::Dumper->Dumpxs( [ \ @logins ], [ qw{ *logins } ] );'
@logins = (
    [ 'server1', 'user1', '%' ],
    [ 'server1', 'user1', 'db1' ],
    [ 'server1', 'user2', '%' ],
    [ 'server1', 'user3', '%' ],
    [ 'server1', 'user2', 'db2' ],
    [ 'server1', 'user3', 'db3' ],
    [ 'server2', 'user1', '%' ],
    [ 'server2', 'user1', 'db1' ],
    [ 'server2', 'user2', '%' ],
    [ 'server2', 'user3', '%' ],
    [ 'server2', 'user2', 'db2' ],
    [ 'server2', 'user3', 'db3' ],
    [ 'server3', 'user1', '%' ],
    [ 'server3', 'user1', 'db1' ],
    [ 'server3', 'user2', '%' ],
    [ 'server3', 'user3', '%' ],
    [ 'server3', 'user2', 'db2' ],
    [ 'server3', 'user3', 'db3' ]
);
$
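As an aside, if your List::Util is recent enough (uniq() was added in version 1.45), the %seen/grep idiom can be replaced by uniq, which also keeps the first occurrence of each line in order. A minimal sketch, assuming the same $inFH filehandle as above:

    use List::Util qw{ uniq };

    my @logins = map  { [ split m{,} ] }   # split each unique line on commas
                 uniq                      # keep only the first occurrence of each line
                 map  { s{\s*$}{}; $_ }    # strip trailing whitespace first
                 <$inFH>;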

I hope this is of interest.

Cheers,

JohnGG