Do you know of any module that can achieve this with quite long hashes?
update 2: the solution below the 1st update doesn't weed out the key 'technology', which is contained as a partial key in 'automation technology' and 'automation technology process'. B_TREE partial key matching is of no use for that, since such a match is anchored at the beginning of the key. DB_File is good for storing huge simple hashes and is quite fast, so that statement remains.
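For illustration (this snippet is not part of the original solutions), the difference between an anchored prefix match, which is all a B_TREE partial key lookup can do, and the substring match that would actually be needed:

use strict;
use warnings;

my $long  = 'automation technology';
my $short = 'technology';

# anchored at the start of the key, like a B_TREE partial key match
print "prefix match\n"    if $long =~ /^\Q$short\E/;      # prints nothing
# substring match, which is what weeding out 'technology' would require
print "substring match\n" if index($long, $short) >= 0;   # prints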
Here's a solution:
use strict;
use warnings;
use Data::Dump;

my %h = (
    'rendition'                     => '3',
    'automation'                    => '2',
    'saturation'                    => '3',
    'mass creation'                 => 2,
    'technology'                    => 5,
    'technology process'            => 5,
    'process'                       => 6,
    'creation'                      => 7,
    'saturation process'            => 9,
    'automation technology'         => 2,
    'automation technology process' => 3,
);

dd 'before', \%h;

for (keys %h) {
    if (/\s/) {                     # only multi-word keys have sub-phrases
        my @l = split ' ';
        # build every contiguous sub-phrase of the key ...
        for my $k (map { my $i = $_; map { join " ", @l[$i..$_] } $i..$#l } 0..$#l) {
            # ... and delete it, unless it is the full key itself
            delete $h{$k} unless $k eq $_;
        }
    }
}

dd 'after', \%h;

__END__
(
  "before",
  {
    "automation" => 2,
    "automation technology" => 2,
    "automation technology process" => 3,
    "creation" => 7,
    "mass creation" => 2,
    "process" => 6,
    "rendition" => 3,
    "saturation" => 3,
    "saturation process" => 9,
    "technology" => 5,
    "technology process" => 5,
  },
)
(
  "after",
  {
    "automation technology process" => 3,
    "mass creation" => 2,
    "rendition" => 3,
    "saturation process" => 9,
  },
)
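Since the original question asked about quite long hashes: the same sub-phrase deletion loop can also be run over a hash tied to a DB_BTREE file, so the data live on disk instead of in memory. This is only a sketch with made-up sample keys and a temporary file, not tested against the real data:

use strict;
use warnings;
use DB_File;
use Fcntl;
use File::Temp qw(:POSIX);

my $filename = tmpnam();                 # temporary file for the on-disk hash
my %h;
tie %h, "DB_File", $filename, O_RDWR|O_CREAT, 0666, $DB_BTREE
    or die "Cannot open $filename: $!\n";

%h = (
    'technology'                    => 5,
    'automation technology'         => 2,
    'automation technology process' => 3,
    'mass creation'                 => 2,
);

for my $phrase (keys %h) {
    next unless $phrase =~ /\s/;         # only multi-word keys have sub-phrases
    my @w = split ' ', $phrase;
    for my $i (0 .. $#w) {               # delete every contiguous sub-phrase ...
        for my $j ($i .. $#w) {
            my $sub = join ' ', @w[$i .. $j];
            delete $h{$sub} unless $sub eq $phrase;   # ... except the full key
        }
    }
}

print "$_ => $h{$_}\n" for keys %h;      # only the longest phrases remain

untie %h;
unlink $filename;

The trade-off is that every delete goes through the BTREE on disk, so this is slower than the in-memory version, but memory use stays flat however many keys there are.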
Skip the rest of this reply.
update: wrong, scratch that, we need to use B_TREE with partial match; wait for next update...
DB_File, using the DB_BTREE format, is handy for that. Since the keys are stored in lexicographic order, it suffices to iterate over them and delete the previous key whenever the current key starts with it:
use strict;
use warnings;
use DB_File;
use Fcntl;
use File::Temp qw(:POSIX);
use Data::Dumper;
$Data::Dumper::Indent = 1;

my $filename = tmpnam();    # temporary file
my %h;                      # tied DB_File hash
tie %h, "DB_File", $filename, O_RDWR|O_CREAT, 0666, $DB_BTREE
    or die "Cannot open $filename: $!\n";

%h = (
    'rendition'                     => '3',
    'automation'                    => '2',
    'saturation'                    => '3',
    'mass creation'                 => 2,
    'automation technology'         => 2,
    'automation technology process' => 3,
);

my $prev;
for (keys %h) {
    # keys arrive in sorted order, so a key that extends the previous one
    # immediately follows it; drop the shorter (previous) key
    delete $h{$prev} if $prev && /^$prev/;
    $prev = $_;
}

print Dumper(\%h);

# cleanup
untie %h;
unlink $filename;

__END__
$VAR1 = {
  'automation technology process' => '3',
  'mass creation' => '2',
  'rendition' => '3',
  'saturation' => '3'
};
DB_File uses a file on disk, so quite long hashes are possible. And since a for loop over keys %h pulls the keys straight from the tied BTREE, there is no need to fetch all the keys and sort them separately: the BTREE keeps them sorted already.
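A quick check of that claim, as an illustrative aside (DB_File accepts undef as the filename, which gives an in-memory database, handy for a throwaway test):

use strict;
use warnings;
use DB_File;
use Fcntl;

my %t;
tie %t, "DB_File", undef, O_RDWR|O_CREAT, 0666, $DB_BTREE
    or die "Cannot tie: $!\n";
%t = (banana => 1, apple => 2, cherry => 3);
print join(", ", keys %t), "\n";    # apple, banana, cherry
untie %t;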
In reply to Re: retain longest multi words units from hash by shmem
in thread retain longest multi words units from hash by Anonymous Monk