in reply to Re: NP-complete sometimes isn't (A benchmark)
in thread NP-complete sometimes isn't

But! It looks like FunkyMonk's code doesn't always find the best partition:

use List::Util qw(sum);
use List::MoreUtils qw(first_index);

#my @numbers = map { rand() * 100 } 1 .. 5;
#my @numbers = (8,14,32,29);
my @numbers = @ARGV;

my @b = split_evenly( \@numbers );

sub say { print @_, "\n" }

say "First container: sum(@{$b[0]}) = ", sum @{$b[0]};
say "Second container: sum(@{$b[1]}) = ", sum @{$b[1]};

# Greedy first-fit-decreasing: repeatedly take the largest number that
# still fits under the remaining target (half the total), until none fits.
sub split_evenly {
    my @numbers = reverse sort { $a <=> $b } @{+shift};
    my $target  = sum(@numbers) / 2;
    say "Target is $target";

    my @b;
    while (1) {
        my $index = first_index { $_ <= $target } @numbers;
        last if $index < 0;
        $target -= $numbers[$index];
        push @b, splice @numbers, $index, 1;
    }
    return \@b, \@numbers;
}
perl funky.pl 400 402 521 735 758 191 307 679 776 877
Target is 2823
First container: sum(877 776 758 402) = 2813
Second container: sum(735 679 521 400 307 191) = 2833

but a better partition is

First container: sum(400 402 521 735 758) = 2816
Second container: sum(191 307 679 776 877) = 2830
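
For ten values an exhaustive search is cheap, so the optimum is easy to confirm. A minimal brute-force sketch (my own check, not FunkyMonk's code):

use List::Util qw(sum);

# Brute force: try every subset of the (small) input and keep the one
# whose sum is closest to half the total.
my @numbers = (400, 402, 521, 735, 758, 191, 307, 679, 776, 877);
my $total   = sum @numbers;

my ( $best_diff, $best_mask ) = ( $total, 0 );
for my $mask ( 0 .. 2**@numbers - 1 ) {
    my $s = sum( 0, @numbers[ grep { $mask & ( 1 << $_ ) } 0 .. $#numbers ] );
    my $diff = abs( $total - 2 * $s );
    ( $best_diff, $best_mask ) = ( $diff, $mask ) if $diff < $best_diff;
}

my @first  = @numbers[ grep {    $best_mask & ( 1 << $_ )   } 0 .. $#numbers ];
my @second = @numbers[ grep { !( $best_mask & ( 1 << $_ ) ) } 0 .. $#numbers ];
print "First container: sum(@first) = ", sum(@first), "\n";
print "Second container: sum(@second) = ", sum(@second), "\n";

This is the O(2**N) approach that only works for toy inputs, of course; that's the whole point of the thread.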

Update: ah, of course. That is what your "Difference" output field reveals....

Re^3: NP-complete sometimes isn't (A benchmark)
by BrowserUk (Patriarch) on Sep 02, 2008 at 11:06 UTC

    Indeed, but I don't think that he ever made that claim? (By contrast, tilly (TTBOMK correctly) did.)

    And neither will mine, always. Much of the time it does, but even with the same inputs, due to the semi-random nature of the algorithm, it will occasionally give up before it finds the optimum solution. I've never seen it end up very far from the optimum, but it doesn't always find it within the specified iteration bound.

    But, given that it will partition a 1-million-value dataset within 8 to 10 seconds most of the time:

    Testing buk with 1000000 random values in the range 0 .. 1e4 Difference:= 0; took 8.156250 seconds
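
    The benchmarked code itself isn't shown in this node, but for illustration, here's a minimal sketch of that kind of semi-random, iteration-bounded approach (a hypothetical stand-in, not the actual algorithm): shuffle, fill one container greedily toward half the total, keep the best split seen, and stop after a fixed number of tries.

    use List::Util qw(sum shuffle);

    # Hypothetical illustration only: random shuffle + greedy fill,
    # retried up to a fixed iteration bound, keeping the best split seen.
    sub split_semirandom {
        my ( $numbers, $max_iters ) = @_;
        my $total = sum(@$numbers);
        my $half  = $total / 2;
        my ( $best_diff, @best ) = ( 9**9**9 );

        for ( 1 .. $max_iters ) {
            my $s = 0;
            my ( @first, @second );
            for my $n ( shuffle @$numbers ) {
                if ( $s + $n <= $half ) { $s += $n; push @first, $n }
                else                    { push @second, $n }
            }
            my $diff = abs( $total - 2 * $s );
            ( $best_diff, @best ) = ( $diff, [@first], [@second] )
                if $diff < $best_diff;
            last if $best_diff == 0;    # perfect split; stop early
        }
        return @best;
    }

    my ( $first, $second ) =
        split_semirandom( [ map { int rand 1e4 } 1 .. 1_000_000 ], 50 );
    print "Difference: ", abs( sum(@$first) - sum(@$second) ), "\n";

    A perfect split may never turn up within the bound, which is exactly why this style of algorithm is occasionally non-optimal.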

    Where a brute-force solution for just 100 values would theoretically take around 17 years*, the occasional imperfect solution is a fair trade, I reckon.

    (*)I tried to use the same math to calculate the theoretical time for a 1-million-value dataset, but nothing I have access to can calculate 2**1e6 in a timeframe that I was prepared to wait for :)

    • 2**100 ~ 10**30;
    • 2**1000 ~ 10**300;
    • 2**10000 ~ 10**3000;
    • 2**100000 ~ 10**30000;
    • So, by projection: 2**1000000 ~ 10**300000;

      Which, if my sleep-deprived brain isn't dissin' me, represents something like 100,000 years of processing.

      Will I trade that for a 10-second response that's occasionally non-optimal? Sure :)


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      (*)I tried to use the same math to calculate the theoretical time for a 1-million-value dataset, but nothing I have access to can calculate 2**1e6 in a timeframe that I was prepared to wait for :)
      Here it is, if you happen to care about the answer. But if your goal is just to rewrite that number as a power of 10, you don't need to calculate it in the first place:
      2 ** $y == 10 ** $x
      log(2 ** $y) == log(10 ** $x)
      $y * log(2) == $x * log(10)
      $x == $y * log(2) / log(10)

      ... which also explains why your estimate is right ;-)
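
      Plugging in $y == 1e6 gives the exponent directly, without ever computing 2**1e6 (a quick check of my own, not from the parent post):

      perl -le 'print 1e6 * log(2) / log(10)'    # 301029.995663981

      ... so 2**1e6 is roughly 10**301030, the same ballpark as the projection above.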

      Which, if my sleep-deprived brain isn't dissin' me, represents something like 100,000 years of processing.

      More on the order of 10**300000 years. The number of seconds per year is about 3*10**7, so it's something like 10**299993 years. In comparison, the universe is about 10**10 years old. Even if you could do 10**10 calculations per second, you'd still be at 10**299983.
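
      That conversion is easy to sanity-check in log space, where dividing by seconds-per-year is just an exponent subtraction:

      perl -le 'printf "about 10**%.0f years\n", 300000 - log(3e7)/log(10)'    # about 10**299993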