
And here's the subroutine I've actually been using for testing, driven by the test data code you posted. On my laptop it finds the 3-at-once combinations in under 6 seconds and the 4-at-once ones in under 11.5 seconds with the supplied test data.
sub countcomb {
    my $nwordsatonce = shift;
    my ($k, $v);
    my %totals = ();
    local $" = ' ';    # just in case

    # tally every $nwordsatonce-word combination found in each item's keyword list
    while (($k, $v) = each %Items) {
        next unless $nwordsatonce <= @$v;
        do { $totals{"@$_"} += 1; } for combine($nwordsatonce, sort @$v);
    }

    # rank the combinations by count, breaking ties alphabetically
    my @comb = sort { $totals{$b} <=> $totals{$a} or $a cmp $b } keys %totals;

    # report the top five, extending the list to include anything tied with fifth place
    my $topn   = 5;
    my $toptot = $totals{ $comb[ $topn - 1 ] };
    while ($toptot <= $totals{ $comb[$topn] }) { $topn++; }

    print "Top $nwordsatonce - word combinations: (cutoff $toptot)\n";
    do { print "$_ ($totals{$_})\n"; } for @comb[ 0 .. $topn - 1 ];
}
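To make the above self-contained for anyone skimming the thread: the %Items hash and the combine() routine come from the test data code posted earlier, so the stand-ins below are only a sketch of what they're assumed to look like, with combine($n, @list) returning every $n-element subset as an array reference.

# Stand-in test data; the real %Items comes from the earlier post in the thread.
our %Items = (
    item1 => [qw(apple banana cherry date)],
    item2 => [qw(apple banana date)],
    item3 => [qw(apple cherry date elderberry)],
);

# Stand-in combination generator: returns each $n-element subset of @list
# as an array reference, which is what countcomb() expects.
sub combine {
    my ($n, @list) = @_;
    return map { [$_] } @list if $n == 1;
    my @result;
    for my $i (0 .. $#list - $n + 1) {
        push @result,
            map { [ $list[$i], @$_ ] } combine($n - 1, @list[ $i + 1 .. $#list ]);
    }
    return @result;
}

countcomb(2);    # on this toy data, "apple date" tops the list with a count of 3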
Unfortunately, it does have a tendency to die of out-of-memory errors if you up either the number of keywords or the average length of a set, and tying %totals to a db file doesn't seem to prevent it, so there must be some other sort of memory leak going on. (I suppose it could also be the sort exploding, but working around that should be relatively straightforward.)
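For what it's worth, the tie attempt looked roughly like the following; DB_File is just one possible backend here, any DBM module would be wired in the same way.

# Rough sketch: swap this in for the "my %totals = ();" line inside countcomb()
# to keep the tally hash on disk instead of in memory.
use DB_File;
use Fcntl qw(O_RDWR O_CREAT);

tie my %totals, 'DB_File', 'totals.db', O_RDWR | O_CREAT, 0666, $DB_HASH
    or die "Cannot tie totals.db: $!";
# ... tally and sort exactly as before ...
untie %totals;
unlink 'totals.db';    # throwaway scratch file

Note that keys %totals on a tied hash still builds the full key list in memory before sort ever runs, so moving the hash to disk doesn't help with that step, which would be consistent with the sort being the real culprit.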
-- @/=map{[/./g]}qw/.h_nJ Xapou cets krht ele_ r_ra/; map{y/X_/\n /;print}map{pop@$_}@/for@/