in reply to Re^7: Divide array of integers into most similar value halves
in thread Divide array of integers into most similar value halves

Sure! If

@array = (40,17,40,30,40,25,40);

is given, something like

@subarray1 = (40,40,30);
@subarray2 = (40,40,17,25);

should be returned.
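("Most similar" here just means minimising the difference between the two sums, so any candidate split can be checked by comparing them. A quick sketch using List::Util's sum:)

```perl
use strict;
use warnings;
use List::Util qw( sum );

# Score a candidate split by the absolute difference of the two sums.
my @subarray1 = ( 40, 40, 30 );
my @subarray2 = ( 40, 40, 17, 25 );

my $gap = abs( sum( @subarray1 ) - sum( @subarray2 ) );
print "sums differ by $gap\n";    # 110 vs 122, so the gap is 12
```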

I guess you already have your own sources, but for the intersection of arrays I always use this function from perlfaq4:
@union = @intersection = @difference = ();
%count = ();
foreach $element ( @array1, @array2 ) { $count{ $element }++ }
foreach $element ( keys %count ) {
    push @union, $element;
    push @{ $count{ $element } > 1 ? \@intersection : \@difference }, $element;
}
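One caveat worth knowing: the perlfaq4 recipe assumes every element is unique within a given array. With duplicates the counts collapse into single hash keys, so a value repeated in only one array can be misreported. A small demonstration (variable names match the snippet above):

```perl
use strict;
use warnings;

# perlfaq4's recipe assumes each element is unique within a given array.
# With duplicates, a value repeated in ONE array still gets a count > 1,
# so it lands in the "intersection" even though the other array lacks it:
my @array1 = ( 40, 40, 17 );
my @array2 = ( 25 );

my ( @union, @intersection, @difference );
my %count;
$count{ $_ }++ for @array1, @array2;
for my $element ( keys %count ) {
    push @union, $element;
    push @{ $count{ $element } > 1 ? \@intersection : \@difference }, $element;
}
# @intersection is (40) here, although 40 never appears in @array2.
```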

Let me know if you need more examples, or what kind you are looking for.

Re^9: Divide array of integers into most similar value halves
by BrowserUk (Patriarch) on Sep 04, 2008 at 16:11 UTC
      Nice,

      But I found some cases (very few, like 0.001%) where it does not work, e.g. @array = (43,44,43);
      It would return (43,43) and (43,44).
      Don't bother if you feel it's going to be a big problem. It's already good, and I can just use it in combination with the other scripts that have been posted in the conversation and choose the best answer.
      My problem is I can't modify it myself due to my low understanding of the code.
      Grrrr...someday.

      Thanks a lot.
      Cheers.
      Pepe.

      I know about the FAQ code; it's very convenient when you can spend the extra computing time and memory, but not the programming time.

        But I found some cases (very few, like 0.001%) where it does not work, e.g. @array = (43,44,43); It would return (43,43) and (43,44).

        Then it's not right and needs to be fixed!

        I know about the FAQ code; it's very convenient when you can spend the extra computing time and memory, but not the programming time.

        By all means go that route. You did say that you are limited to ~100 elements, so the following will not be a concern to you.

        The problem is that the FAQ's conflation means that, in addition to the original arrays, you must also store their intersection and difference, plus the counting hash. For small datasets that's not a problem, but it starts to hurt once they get bigger.

        Each of these is a separate problem and should be dealt with separately (in the FAQ or modules). Computing both (all three if you include union) in a single pass is an optimisation--but only if you need them all.
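If you only need one of them (say, a duplicate-aware intersection), it can be computed on its own in a single pass, without also building the union and difference. A sketch (the helper name is my own):

```perl
use strict;
use warnings;

# Duplicate-aware intersection only: each value is kept as many times
# as it appears in *both* arrays; no union or difference is built.
sub intersection {
    my ( $aRef, $bRef ) = @_;
    my %count;
    $count{ $_ }++ for @$aRef;
    # Keep an element of the second array only while matching copies
    # from the first array remain; decrement consumes one copy.
    return grep { $count{ $_ } && $count{ $_ }-- } @$bRef;
}

my @common = intersection( [ 40, 40, 17, 30 ], [ 40, 25, 40, 40 ] );
print "@common\n";    # 40 40 (the third 40 in the second list has no match left)
```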

        I also took a look at the local categorised answer for this, but that doesn't handle duplicates either. I'll get back to you when I've solved the problem.

        I'm also on the track of an improvement to the algorithm. I noticed that it converges on a pretty good solution within very few iterations (often <100), but then, no matter how many more iterations you run, it never improves further.

        However, on a different run with the same values, it will achieve a better solution, again arriving at it quickly and then never improving. And yet, sometimes a third run will find a further improvement, which again is arrived at quickly. The key seems to be the state of the PRNG. Some starting points produce better results than others, regardless of the number of iterations you apply.

        This is where most of my effort has been going. I've been trying to analyse the algorithm to work out why once a non-optimal solution is arrived at (very quickly) even huge numbers of further iterations won't improve it. That required me to come up with a fast and repeatable shuffle implementation (and is the subject of another thread).
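For what it's worth, a repeatable shuffle can be as simple as a Fisher-Yates loop driven from an explicit seed (the implementation discussed in that other thread may well differ; this is just a sketch):

```perl
use strict;
use warnings;

# Seeded, in-place Fisher-Yates shuffle: the same seed always produces
# the same permutation, which makes individual runs repeatable.
sub seeded_shuffle {
    my ( $seed, $aRef ) = @_;
    srand( $seed );
    for my $i ( reverse 1 .. $#$aRef ) {
        my $j = int rand( $i + 1 );
        @$aRef[ $i, $j ] = @$aRef[ $j, $i ];
    }
}

my @a = 1 .. 10;
my @b = 1 .. 10;
seeded_shuffle( 42, \@a );
seeded_shuffle( 42, \@b );
print "@a\n";    # identical to "@b": same seed, same permutation
```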

        It's obviously similar to a "local minimum" in a genetic algorithm, but I'm not yet seeing the pattern.

        I'll get back to you when I have made progress on either problem.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        Okay. Try this version.

        1. It (finally) corrects the duplicates problem (with many thanks to betterworld++).
        2. It will find the optimum solution much more often with far fewer iterations.

          In most tests I've run, a limit of 10* the number of elements finds the optimum even for large arrays of widely spaced numbers.

          Most of my tests have been run using a custom shuffle that uses Math::Random::MT, but that was mostly to achieve repeatability. I see no reason that you will not see similar results using your native rand() and List::Util::shuffle().

        sub partition {
            my( $limit, $aRef ) = @_;
            my @in = sort{ $a <=> $b } @$aRef;
            my $target = sum( @in ) >> 1;    # List::Util::sum; half the grand total
            my( $best, @best ) = 9e99;
            my $soFar = 0;
            my @half;
            for ( 1 .. $limit ) {
        #       printf "%6d : [@half] [@in] [@best]\n", abs( $soFar - $target ) if $V;

                ## Grow @half until its sum reaches the target ...
                $soFar += $in[ 0 ], push @half, shift @in while $soFar < $target;
                return( \@half, \@in ) if $soFar == $target;

                my $diff = abs( $soFar - $target );
                ( $best, @best ) = ( $diff, @half ) if $diff < $best;

                ## ... then shrink it back while it overshoots.
                $soFar -= $half[ 0 ], push @in, shift @half while $soFar > $target;
                return( \@half, \@in ) if $soFar == $target;

                $diff = abs( $soFar - $target );
                ( $best, @best ) = ( $diff, @half ) if $diff < $best;

                srand( $_ );     # repeatable reseed for this iteration
                shuffle @in;     # custom in-place shuffle (see note above)
            }

            ## Return the best half found, plus its duplicate-aware complement.
            my %seen;
            $seen{ $_ }++ for @best;
            return \@best, [ grep{ --$seen{ $_ } < 0 } @$aRef ];
        }
