in reply to Finding Keys from data

Finding an optimal solution may not be possible (without searching the complete space of 2^100 possibilities), but you might try something like this:

1) For each column, count the number of distinct values in it (easily done with hashes)

2) Sort that column list by those counts, lowest first

3) Start with a key consisting of all columns

4) For each column in that sorted order, try to remove it from the key and check whether any two rows now share the same key. If yes, re-add the column to the key; otherwise leave it off

This will give you exactly one solution. If you want to look for better solutions, you can rerun the algorithm with the sorted column list scrambled somewhat, to arrive at different solutions. You could even use a totally random column order, but that way you will most likely be presented with too many suboptimal solutions.
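Here is a minimal, untested sketch of that idea in Perl, assuming the rows are array references in @data (the subroutine name and the "\0" key separator are just illustrative):

    use strict;
    use warnings;

    # Sketch only: greedy backward elimination of key columns.
    sub find_key {
        my @data = @_;
        my @cols = 0 .. $#{ $data[0] };

        # 1) count distinct values per column
        my %distinct;
        for my $row (@data) {
            $distinct{$_}{ $row->[$_] }++ for @cols;
        }

        # 2) order columns, fewest distinct values first
        my @order = sort { keys %{ $distinct{$a} } <=> keys %{ $distinct{$b} } } @cols;

        # 3) start with a key consisting of all columns
        my %in_key = map { $_ => 1 } @cols;

        # 4) try to drop each column; keep it out only if no two rows collide
        for my $col (@order) {
            delete $in_key{$col};
            my @key_cols = grep { $in_key{$_} } @cols;
            my %seen;
            my $collision = 0;
            for my $row (@data) {
                if ( $seen{ join "\0", @{$row}[@key_cols] }++ ) {
                    $collision = 1;
                    last;
                }
            }
            $in_key{$col} = 1 if $collision;    # removal broke uniqueness, re-add
        }
        return grep { $in_key{$_} } @cols;
    }

Rerunning with @order shuffled (or partially shuffled) instead of sorted would correspond to the scrambling suggested above.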

Re^2: Finding Keys from data
by roboticus (Chancellor) on Apr 01, 2011 at 10:57 UTC

    jethro:

    I like that approach, but rather than excluding columns one-by-one, I'd include them one-by-one, like this:

    $ cat 896650.pl
    #!/usr/bin/perl
    use strict;
    use warnings;

    # Generate values
    my @cols = ( 0 .. 10 );
    my $rows = 50000;
    my @data = ( [ map { int 100*rand } @cols ] );
    my @gen  = ( map { [ 100*rand, int 100*(1+$_*$_*$_*rand) ] } @cols );
    for my $r (1 .. $rows) {
        for my $c ( @cols ) {
            $data[$r][$c] = int( $data[$r-1][$c] + $gen[$c][0]*rand ) % ($gen[$c][1]);
        }
    }

    # Count distinct column values
    my %H;
    for (@data) {
        my @v = @$_;
        for my $c (@cols) {
            $H{$c}{$v[$c]}++;
        }
    }

    # Sort so most diverse comes first
    my @colrank  = sort { $$b[1] <=> $$a[1] }
                   map  { [ $_, scalar keys %{$H{$_}} ] } @cols;
    my @colOrder = map { $$_[0] } @colrank;
    for my $r (@colrank) {
        print "col $$r[0] has $$r[1] values\n";
    }
    print join(", ", @colOrder), "\n";
    print scalar(@data), " rows to process.\n";

    my @tmpdata = @data;
    my @keycols = ( shift @colOrder );
    while (keys %H < @tmpdata) {
        print scalar(@tmpdata), " rows to process.\n";
        %H = ();
        for (@tmpdata) {
            my $key = join(".", @$_[@keycols]);
            $H{$key}++;
        }
        print "cols(", join(",",@keycols), ") give ", scalar(keys %H), " rows.\n";
        push @keycols, shift @colOrder;
    }

    $ perl 896650.pl
    col 10 has 37754 values
    col 9 has 29806 values
    col 8 has 29772 values
    col 7 has 22835 values
    col 6 has 16081 values
    col 5 has 10136 values
    col 3 has 2726 values
    col 4 has 559 values
    col 2 has 216 values
    col 1 has 176 values
    col 0 has 1 values
    10, 9, 8, 7, 6, 5, 3, 4, 2, 1, 0
    50001 rows to process.
    50001 rows to process.
    cols(10) give 37754 rows.
    50001 rows to process.
    cols(10,9) give 49996 rows.
    50001 rows to process.
    cols(10,9,8) give 50001 rows.

    However, I'd wonder what the utility of it would be...

    I was thinking that one possible improvement of my code would be to remove the "duplicate" rows after determining each key, then redo the column count so you could get the best remaining key from the list. It certainly wouldn't be the optimal solution, but I'd bet it gets pretty close.
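    A rough, untested sketch of that refinement, reusing the variable names from the script above: after each pass, drop the rows whose current key is already unique, then recount the distinct values over only the rows that still collide before picking the next column.

        # Illustrative only: prune rows already identified by @keycols,
        # then recount distinct values per column over the survivors.
        my %count;
        $count{ join ".", @{$_}[@keycols] }++ for @tmpdata;
        @tmpdata = grep { $count{ join ".", @{$_}[@keycols] } > 1 } @tmpdata;

        my %distinct;
        for my $row (@tmpdata) {
            $distinct{$_}{ $row->[$_] }++ for 0 .. $#{$row};
        }
        # ...then re-sort the remaining candidates in @colOrder by
        # scalar keys %{ $distinct{$_} } before the next iteration.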

    ...roboticus

    When your only tool is a hammer, all problems look like your thumb.

      My reason for doing it backwards was that I thought checking whether a column can be removed is easier than checking whether a column should be added: in the first case you only have to check whether any key is now duplicated (easy to do with a hash), whereas in the second case you have to check whether adding the column actually improves anything.

      This is the big problem with your solution as it stands: say two columns hold the same values, and those values are diverse enough that both columns rank high on the @colOrder list. Both will then be added, even though only one of them improves the key quality; the other is completely useless.
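      A made-up example of that failure mode, assuming the same forward selection by distinct-value count: columns 0 and 1 below are exact copies, so both rank equally high, yet adding the second one never reduces the number of collisions.

          use strict;
          use warnings;

          # Hypothetical data: columns 0 and 1 are identical copies.
          my @data = map { my $v = int rand 1e6; [ $v, $v, int rand 10 ] } 1 .. 1000;
          for my $keycols ( [0], [0, 1] ) {
              my %seen;
              $seen{ join ".", @{$_}[@$keycols] }++ for @data;
              printf "cols(%s) give %d distinct keys\n",
                     join(",", @$keycols), scalar keys %seen;
          }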

      You also talk about an improvement by removing "duplicate" rows. Did you mean "singular" rows? I.e., a row that is already singled out/identified completely by the current key is no longer important for finding further key columns and could be removed. This is one possible way to check whether adding a column improves things.

      One factor that should also be taken into account, if you want to determine whether adding or removing is better, is how many columns the key will likely need. If the data is nearly as random as BrowserUK's test data, the key will obviously need only a few columns. But if many columns are linked to each other and have only a few different values, the key needs many columns.

      Though the more columns the key needs, the worse it is as a key ;-). A key that needs 50 of the 100 columns already uses half the space of the database itself.

        jethro:

        Yes, I meant singular rows.

        Now that you mention it, I see your point regarding trying to remove columns rather than add them. I'd try to add the refinement I mentioned earlier, but I suspect that it wouldn't give anything significantly better than what you've described.

        ...roboticus

        When your only tool is a hammer, all problems look like your thumb.

        Update: Rephrased a bit.