I was asked by sporty to post this, so I am. It is not the fastest implementation of this algorithm, but it is the clearest one I could find.
use strict;
use warnings;

use constant LEFT  => -1;
use constant RIGHT =>  1;

print "Enter N: ";
chomp( my $n = <> );

my (@dir, @p);
for (1 .. $n) {
    $dir[$_ - 1] = LEFT;
    $p[$_ - 1]   = $_;
}

my $count = 0;
while (1) {
    printf "%03d: ", ++$count;
    print "$p[$_ - 1] " for 1 .. $n;
    print $/;

    # 1) Find the largest element that has a smaller element in the direction
    #    of its arrow, aka the largest "mobile" element.
    my ($i, $val) = find_max_mobile(\@p, \@dir);

    # This means we couldn't find an element that's "mobile". That means
    # we've run out of permutations.
    last if $i == -1;

    # 2) Swap it and the element it's pointing to. We also have to swap the
    #    arrows to keep everything in sync.
    my @indices = ($i, $i + $dir[$i]);
    @p[ @indices ]   = @p[ reverse @indices ];
    @dir[ @indices ] = @dir[ reverse @indices ];

    # 3) Reverse the direction of all arrows for elements bigger than the
    #    currently largest "mobile" element.
    for (0 .. $#p) {
        $dir[$_] = -$dir[$_] if $p[$_] > $val;
    }
}

sub find_max_mobile {
    my ($p, $dir) = @_;
    my ($i, $n) = (-1, 0);

    # The first element is the max only if its arrow is pointing RIGHT
    ($i, $n) = (0, $p->[0])
        if $dir->[0] == RIGHT && $p->[0] > $p->[1];

    # A middle element is the max only if its arrow points to a smaller value
    for ( 1 .. $#$p - 1 ) {
        if ( $p->[$_] > $p->[ $_ + $dir->[$_] ] ) {
            ($i, $n) = ($_, $p->[$_]) if $n < $p->[$_];
        }
    }

    # The last element is the max only if its arrow is pointing LEFT
    if ( $dir->[$#$p] == LEFT && $p->[$#$p] > $p->[$#$p - 1] ) {
        ($i, $n) = ($#$p, $p->[-1]) if $n < $p->[$#$p];
    }

    return ($i, $n);
}

Being right does not endow the right to be rude; politeness costs nothing.
Being unknowing is not the same as being stupid.
Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.

Re: Steinhaus - Johnson - Trotter Algorithm for Permutations (explain)
by tye (Sage) on Dec 28, 2004 at 17:44 UTC

    The point of the algorithm is to only swap two elements at each step (in hopes of producing the most efficient permutation generator).

    The basic algorithm is that if you have your original list of elements, @e, then you start by swapping the last element with the element that is currently to its left. You continue this until the formerly last element is now first.

    Once you've done that, you treat the remaining elements as a list of $n-1 items and do the same to it (swap the last element with the one to its left), but you only do one step on this sublist.

    Then you swap the last element of @e (currently the first element of the permuted list) with the element to its right (in the permuted list). Keep doing this until the element is at the end again.

    Then treat the remaining elements as their own list again and perform the next step on them. If the next step would involve changing directions, then you instead treat the remaining $n-2 elements as their own list and do that sublist's next step, etc.

    So you move the last element, via swapping pairs of elements, from the end to the beginning, to the end, etc. When you reach either end, just swapping again would repeat a previous permutation so you need to permute the remaining items. And you do that by following the same pattern.

    So it is like a proof by mathematical induction. You presume that you have a way to permute $n items using only one transposition (swapping of a pair of elements) between each permutation; then with $n+1 items it is easy to produce such a sequence again. You just use swapping to cycle the $n+1st item from end to end in the list and transpose the other $n items using the previous (unspecified) algorithm when required. But the previous algorithm is just the same until you get down to just one item (which can't be swapped) and you are done.
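    The inductive construction described above can be sketched directly: build the permutations of $n-1 items first, then weave the value $n through each one, alternating direction. This is a minimal illustration of the idea, not the node's own code; the subroutine name sjt is mine.

```perl
use strict;
use warnings;

# Recursive Steinhaus-Johnson-Trotter sketch: take each permutation of
# 1 .. $n-1 and insert $n at every position, sweeping right-to-left, then
# left-to-right, so consecutive results differ by one adjacent swap.
sub sjt {
    my ($n) = @_;
    return ([1]) if $n == 1;

    my @out;
    my $dir = -1;    # start by sweeping $n from the right end to the left
    for my $perm (@{ sjt($n - 1) }) {
        my @positions = $dir == -1 ? reverse(0 .. $n - 1) : (0 .. $n - 1);
        for my $pos (@positions) {
            my @copy = @$perm;
            splice @copy, $pos, 0, $n;    # insert $n at position $pos
            push @out, \@copy;
        }
        $dir = -$dir;                     # reverse the sweep for the next one
    }
    return \@out;
}

# For n = 3 this produces all 6 permutations, each one adjacent
# transposition away from the previous.
print "@$_\n" for @{ sjt(3) };
```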

    I coded up this Perl version of the algorithm from scratch.

    Without cheating, try to predict what the last permutation will be for each size of list.

    - tye        

      A nice explanation of the algorithm; thanks. However, this part isn't quite right.
      > # @p is the position of each element in @o,
      > # that is, @o = (0..$#e)[@p]
      > my @p= (0..$#e);
      > # Note that it is also true that @p = (0..$#e)[@o].
      In the language of group theory, (0..$#e) is the identity, while @o, @p are inverses of each other. Thus @p[@o]=@o[@p]=(0..$#e). For example,
      # an arbitrary permutation
      @o = @permutation = (1,2,0);
      # position of 0 is in $o[2], so $p[0]=2.
      # This leads to
      @p = @positions = (2,0,1);
      # And then
      print " o[p] = (@o[@p]) = p[o] = (@p[@o]) \n";
      which outputs
      o[p] = (0 1 2) = p[o] = (0 1 2)
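      The inverse array @p can also be built in a single pass rather than by inspection; a small sketch (variable names chosen to mirror the example above):

```perl
use strict;
use warnings;

my @o = (1, 2, 0);    # an arbitrary permutation of 0..2

# The value $o[$_] lives at index $_, so the inverse (position) array
# records $_ at index $o[$_].
my @p;
$p[ $o[$_] ] = $_ for 0 .. $#o;

print "p = (@p)\n";                            # prints: p = (2 0 1)
print "o[p] = (@o[@p]), p[o] = (@p[@o])\n";    # both slices are the identity
```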
      Regards,

       barrachois