in reply to Decomposing sum to unique sets of summands

sub perms {
    my ($n, $min) = @_;
    $min //= 2;
    return [] if $n <= 0;
    return [$n] if $n <= $min;
    my @r;
    foreach my $i ($min .. $n) {
        my $left = $n - $i;
        next if $left < $min;
        my @p = perms $left, $i;
        foreach my $p (@p) {
            push @r, [@$p, $i];
        }
    }
    @r;
}
say "[@$_]" for perms 7;
__END__
[2 3 2]
[3 4]
[2 5]
It's not fast, but that's because of the enormous number of ways to decompose numbers.

Re^2: Decomposing sum to unique sets of summands
by dHarry (Abbot) on Oct 28, 2008 at 14:48 UTC

    The post says:

    And this needs to be very fast since in use it's going to have to handle sums of up to 10^6.

    I don’t think your program can be used for this; it only works for small integers.

    The so-called partition function p(n) gives the number of possible partitions of a natural number n, counted without regard to the order of the summands. Unfortunately p(n) grows rapidly, see Partition. Try to run the program on n=1000.
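    To get a feel for that growth, here is a quick sketch of the standard dynamic-programming count of p(n). It is in Python rather than Perl purely for illustration, and the function name is made up for this example:

```python
# Count p(n), the number of partitions of n (order-independent,
# repetition of summands allowed), with the classic DP: for each
# part size k, add the number of ways to finish the remainder.
def partition_count(n):
    ways = [0] * (n + 1)
    ways[0] = 1  # the empty partition
    for k in range(1, n + 1):          # allowed part sizes
        for total in range(k, n + 1):  # sums that can use part k
            ways[total] += ways[total - k]
    return ways[n]

print(partition_count(5))    # 7
print(partition_count(100))  # 190569292
```

    Already p(100) is about 1.9 * 10^8, and p(1000) is a 32-digit number, so generating every partition of anything near 10^6 is hopeless regardless of language.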

      Yes, I don't claim it's fast. And I don't think you can do it fast in Perl - you'll have to use some C or other fast language. And even then, you'd still have to convert all the numbers back into Perl numbers to process them.

      For the interested, the number of such partitions is A083751 in the OEIS.
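      Assuming "such partitions" means partitions into distinct summands, each at least 2, they can at least be counted without being generated, via a 0/1-knapsack-style DP (Python for illustration; whether the indexing lines up exactly with A083751's offset is worth checking against the OEIS entry):

```python
# Count partitions of n into *distinct* summands, each >= min_part --
# the "unique sets of summands" the thread appears to be after.
# Iterate totals downwards so each part k is used at most once.
def distinct_count(n, min_part=2):
    ways = [0] * (n + 1)
    ways[0] = 1  # the empty partition
    for k in range(min_part, n + 1):
        for total in range(n, k - 1, -1):
            ways[total] += ways[total - k]
    return ways[n]

print(distinct_count(7))   # 3: {7}, {2,5}, {3,4}
print(distinct_count(10))  # 5
```

      Counting is cheap (roughly n^2 additions on big integers); it is only materialising every partition that blows up.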

      I don’t think your program can be used for this; it only works for small integers.
      It seems unfair to call this a problem with JavaFan's solution. As you point out, the bottleneck is mathematical—there are so many partitions—rather than programmatic, so any program that does what the poster seems to want will be slow on large numbers. Maybe the poster only needs a few partitions, or knows something about the data-sets, so that some optimisations can be applied?