I have a list of items to do something to, and an index that tells me which one to fiddle with next. This index is incremented after each twiddle, and wraps to the start at the end of the list. This is simple to code:
my @items = (1,5,6,3,5,2,4);
my $index = 0;

sub do_next {
    do_something($items[$index]);
    $index = ($index+1) % @items;
}
But now we need the ability to vary our list of items. --Dave
my @items = (1,5,6,3,5,2,4);
my $index = -1;

sub add_item {
    push @items, @_;
}

sub do_next {
    if (@items) {
        $index = ($index+1) % @items;
        do_something($items[$index]);
    }
    else {
        $index = -1;
    }
}
To understand the additional complexity in do_next, consider the case where we hit the last item in the list, then add an item, then call do_next() again.
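To make that concrete, here is a small trace of the revised version, with do_something stubbed out to just record what it is given (toy data, not the thread's actual do_something):

```perl
my @seen;
sub do_something { push @seen, $_[0] }   # stub: record each item visited

my @items = (1, 2, 3);
my $index = -1;

sub add_item { push @items, @_ }

sub do_next {
    if (@items) {
        $index = ($index + 1) % @items;  # advance *before* using the index
        do_something($items[$index]);
    }
    else {
        $index = -1;
    }
}

do_next() for 1 .. 3;   # visits 1, 2, 3; $index ends at 2, the last slot
add_item(4);            # list is now (1, 2, 3, 4)
do_next();              # (2+1) % 4 == 3, so the new item is visited immediately
# @seen is now (1, 2, 3, 4)
```

With the original increment-after version, $index would already have wrapped to 0 before the add, so the freshly added item would not be reached until the end of the following cycle.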

Now the last part: deleting items while maintaining the index. The interesting facts are that the list may contain duplicates, and that if we delete an item that is earlier in the list than the current index, we must decrement $index. Here's the code:

sub remove_item {
    my %del = map { $_ => 1 } @_;
    my @part_1 = grep { !exists $del{$_} } @items[0..$index];
    my @part_2 = grep { !exists $del{$_} } @items[$index+1..$#items];
    $index = $#part_1;
    @items = (@part_1, @part_2);
}
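A quick sanity check of the duplicate and decrement behaviour, as a self-contained sketch with toy data:

```perl
my @items = qw(a b c b d);
my $index = 2;                        # currently pointing at 'c'

sub remove_item {
    my %del = map { $_ => 1 } @_;
    # split at the current index so $index can be recomputed directly
    my @part_1 = grep { !exists $del{$_} } @items[0 .. $index];
    my @part_2 = grep { !exists $del{$_} } @items[$index + 1 .. $#items];
    $index = $#part_1;                # last surviving element at or before $index
    @items  = (@part_1, @part_2);
}

remove_item('b');                     # removes both duplicates
# @items is now (a, c, d) and $index is 1, still pointing at 'c'
```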

Replies are listed 'Best First'.
Re: round-robin on varying sequence
by BrowserUk (Patriarch) on Sep 07, 2002 at 07:03 UTC

    Try this. It does it in one pass and creates only one temporary list, as opposed to two half passes, three temporary lists (five if you count the ranges inside the slices), and two temporary arrays, so I think it should work out to be faster.

    I realise that I am omitting the lists used to create the hashes, but we both use one, so that balances out.

    I've been trying to benchmark the two variations, but for reasons I haven't yet fathomed, I have yet to succeed. I'll update if I do.

    sub remove_item {
        my ($i, %del) = (0);
        @del{@_} = undef;
        @items = grep {
            $i++ and !exists $del{$_} and $index -= ($i < $index)
        } @items;
    }

    Update: I got the benchmarking to go.

    #! perl -sw
    use strict;
    use Benchmark;

    my @i = qw( a b c x d e f a g h i j k l z m n o p q r t s t b u v w x y z );
    my @dups = qw( x a z t b );
    my $index = 13;
    my @items = @i;

    sub remove_item {
        my %del = map { $_ => 1 } @_;
        my @part_1 = grep { !exists $del{$_} } @items[0..$index];
        my @part_2 = grep { !exists $del{$_} } @items[$index+1..$#items];
        $index = $#part_1;
        @items = (@part_1, @part_2);
    }

    sub remove_items {
        my ($i, %del) = (0);
        @del{@_} = undef;
        @items = grep {
            $i++ and !exists $del{$_} and $index -= ($i < $index)
        } @items;
    }

    print "@dups\n";
    @items = @i;
    $index = 13;
    print "$index: $items[$index] : @items\n";
    remove_item @dups;
    print "$index: $items[$index] : @items\n";
    print $/ x 2;

    print "@dups\n";
    @items = @i;
    $index = 13;
    print "$index: $items[$index] : @items\n";
    remove_item @dups;
    print "$index: $items[$index] : @items\n";
    print $/ x 2;

    Benchmark::cmpthese( 1000, {
        mine  => sub { @items = @i; $index = 13; remove_items @dups; },
        yours => sub { @items = @i; $index = 13; remove_item @dups; },
    });

    __DATA__
    # Output
    C:\test>195796
    x a z t b
    13: l : a b c x d e f a g h i j k l z m n o p q r t s t b u v w x y z
    9: l : c d e f g h i j k l m n o p q r s u v w y

    x a z t b
    13: l : a b c x d e f a g h i j k l z m n o p q r t s t b u v w x y z
    9: l : c d e f g h i j k l m n o p q r s u v w y

    Benchmark: timing 1000 iterations of mine, yours...
          mine:  0 wallclock secs ( 0.51 usr +  0.00 sys =  0.51 CPU) @ 1960.78/s (n=1000)
         yours:  1 wallclock secs ( 0.64 usr +  0.00 sys =  0.64 CPU) @ 1562.50/s (n=1000)
            Rate yours  mine
    yours 1562/s    --  -20%
    mine  1961/s   25%    --

    C:\test>

    I agree that this isn't the way I would tackle the overall problem, but I enjoyed playing with it!


    Well It's better than the Abottoire, but Yorkshire!
      Yes, that's faster. But it doesn't work (consider calling it with nothing to remove when $index is 0). When I fixed your code, it became slower, though it may be possible to optimize it again:
      # the fix:
      @items = grep {
          my $keep = !exists $del{$_};
          $i++;
          $index -= ($i < $index) unless $keep;
          $keep
      } @items;

      # the results
      Benchmark: timing 100000 iterations of BrowserUk, dpuu...
      BrowserUk: 13 wallclock secs (13.72 usr +  0.00 sys = 13.72 CPU) @  7288.63/s (n=100000)
           dpuu: 10 wallclock secs ( 9.88 usr +  0.00 sys =  9.88 CPU) @ 10117.36/s (n=100000)
                    Rate BrowserUk  dpuu
      dpuu     10117/s        39%    --
      BrowserUk 7289/s         --  -28%
      To provide a fair comparison, here are the results I got running your original benchmark on my system (note the increased iteration count -- my system is 5X faster):
      Benchmark: timing 100000 iterations of BrowserUk, dpuu...
      BrowserUk: 10 wallclock secs ( 9.26 usr +  0.00 sys =  9.26 CPU) @ 10795.64/s (n=100000)
           dpuu: 10 wallclock secs ( 9.87 usr +  0.01 sys =  9.88 CPU) @ 10118.39/s (n=100000)
                     Rate  dpuu BrowserUk
      dpuu      10118/s     --       -6%
      BrowserUk 10796/s     7%        --
      in contrast to your +25/-20%. --Dave

      Update: The required optimization is this twisted expression:

      @items = grep {
          ++$i && !exists $del{$_}
              || ($index -= $i-1 <= $index) && 0
      } @items;
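      Unpacked into an equivalent if/else form, the expression reads as follows (same variables, same behaviour; toy data added only to make the sketch runnable):

      ```perl
      my @items = qw(a b c b d);
      my $index = 2;                            # currently pointing at 'c'
      my %del   = (b => 1);
      my $i     = 0;

      @items = grep {
          ++$i;                                 # 1-based position in the old list
          if (exists $del{$_}) {
              $index-- if $i - 1 <= $index;     # removed at or before the current slot
              0;                                # drop it
          }
          else {
              1;                                # keep it
          }
      } @items;
      # @items is (a, c, d) and $index is 1, still pointing at 'c'
      ```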

        You're right. My code, both my offered solution and my benchmark, is deficient. I'll keep my excuses to myself and offer a revised solution, which I believe I have fully verified for compatibility with the results of your original.

        The code below runs the remove_item(s) subs against the test data for all possible $index values and compares the results (yours against mine). It also benchmarks each for all input criteria; the upshot is that my new version now averages a tad under a third (32.9%) faster across all situations.

        I hope you find this useful.

        sub remove_items {
            my ($i, %del) = (0);
            @del{@_} = undef;
            @items = grep {
                !exists $del{$_} and ++$i
                    or $index -= ($i <= $index), 0
            } @items;
        }
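        A self-contained spot-check of the revised sub on toy data (here $i counts elements *kept* so far, which is why the comparison is against the post-removal index):

        ```perl
        my @items = qw(a b c b d);
        my $index = 2;                          # currently pointing at 'c'

        sub remove_items {
            my ($i, %del) = (0);
            @del{@_} = undef;
            @items = grep {
                !exists $del{$_} and ++$i       # kept: count it, block is true
                    or $index -= ($i <= $index), 0   # dropped: adjust index, block is 0
            } @items;
        }

        remove_items('b');
        # @items is (a, c, d) and $index is 1 -- same result as the two-grep remove_item
        ```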

        The code for the full verification and the benchmark results (on my poor li'l 233MHz :) are below

Re: round-robin on varying sequence
by Arien (Pilgrim) on Sep 07, 2002 at 05:41 UTC

    How about chucking the $index and rotating the array (shift and push) every time you get an element (which is now always the first)?
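    A sketch of that rotating version, with do_something stubbed out to record its argument (toy data):

    ```perl
    my @seen;
    sub do_something { push @seen, $_[0] }   # stub: record each item visited

    my @items = (1, 5, 6);

    sub do_next {
        return unless @items;
        my $item = shift @items;   # the current item is always at the head
        push @items, $item;        # rotate it to the back
        do_something($item);
    }

    do_next() for 1 .. 4;          # visits 1, 5, 6, then wraps to 1
    # @seen is (1, 5, 6, 1)
    ```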

    remove is now as trivial as it ought to be:

    sub remove {
        my %del = map { $_ => 1 } @_;
        @items = grep { not $del{$_} } @items;
    }

    — Arien

      Unless I'm missing something, this doesn't help: you still need to track an index so that you know where to insert items for add_item() (now via a splice). Once you've rotated the list, you can no longer simply append to the end. --Dave
Re: round-robin on varying sequence
by Zaxo (Archbishop) on Sep 06, 2002 at 23:35 UTC

    Did you consider using splice for removal? I think it would be much easier.

    After Compline,
    Zaxo

      Yes, I did. But splice is not well suited to removing an arbitrary set of elements from a list. It works, but you end up using splice to remove one element at a time. The grep approach appears, IMHO, much more elegant (though I do have duplicated code in the greps). Perhaps you could code up a version using splice to demonstrate why you would prefer it; then we can benchmark it. --Dave
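      Roughly what I mean by one element at a time (a sketch, walking backwards so earlier indices stay valid while we splice):

      ```perl
      my @items = qw(a b c b d);
      my $index = 2;                       # currently pointing at 'c'

      sub remove_item_splice {
          my %del = map { $_ => 1 } @_;
          for my $i (reverse 0 .. $#items) {
              next unless $del{$items[$i]};
              splice @items, $i, 1;        # one splice call per removed element
              $index-- if $i <= $index;    # keep the round-robin position in step
          }
      }

      remove_item_splice('b');
      # @items is (a, c, d) and $index is 1 -- same as the grep version, more bookkeeping
      ```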

        Agreed, splice was the wrong idea. How do you like this one?

        sub remove_items {
            $index -= grep { $_ < $index } @_;
            delete @items[@_];
            @items = grep { defined } @items;
        }
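        Worth noting: this sub takes element *indices* rather than values (a different interface from remove_item above), and it assumes the list holds no legitimate undef entries, since delete over the slice marks slots undef before the grep squeezes them out. A quick demonstration under those assumptions:

        ```perl
        my @items = qw(a b c d e);
        my $index = 3;                          # currently pointing at 'd'

        sub remove_items {
            $index -= grep { $_ < $index } @_;  # count removals before the cursor
            delete @items[@_];                  # undef the doomed slots...
            @items = grep { defined } @items;   # ...then squeeze them out
        }

        remove_items(0, 2);                     # drop the elements at indices 0 and 2
        # @items is (b, d, e) and $index is 1, still pointing at 'd'
        ```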

        I dislike the use of broader scoped $index and @items in this.

        After Compline,
        Zaxo

Re: round-robin on varying sequence
by BrowserUk (Patriarch) on Sep 08, 2002 at 01:42 UTC

    My last contribution to this, just 'cos I am playing with closures and I was having fun.

    The following is an "OO" solution, with an interesting testcase to prove it works.

    #! perl -sw
    use strict;

    sub roundRobin {
        my @q;
        my $next = -1;
        my %self = (
            values  => sub { return @q[@_]; },
            size    => sub { return scalar @q; },
            pos     => sub { return 0+$next; },
            add     => sub { return push @q, @_; },
            del     => sub {
                my ($i, %del) = (0);
                @del{@_} = undef;
                @q = grep {
                    exists $del{$_} ? ($next -= $i <= $next) && 0 : ++$i
                } @q;
                return 1;
            },
            next    => sub {
                return undef if not @q;
                ++$next;
                $next %= @q;
                return $q[$next];
            },
            shuffle => sub {
                my $t;
                $t = $_ + rand @q - $_ and @q[$_, $t] = @q[$t, $_]
                    for (0..$#q);
            },
        );
        return %self;
    }

    my %rr = roundRobin();

    $rr{add}( 'a' .. 'z' );
    print $rr{size}(), $rr{values}(0..$rr{size}()-1), $/;
    $rr{add}( 'k' .. 'q' );
    print $rr{size}(), $rr{values}(0..$rr{size}()-1), $/;
    $rr{shuffle}();

    while ( my $item = $rr{next}() ) {
        my $pos = $rr{pos}();
        print " @{[$rr{values}( 0 .. $pos-1 )]}" .
              "<@{[$rr{values}($pos)]}>" .
              "@{[$rr{values}($pos+1..$rr{size}()-1 )]}";
        if (rand(8) < 1) {
            my $new = chr(95+rand 26);
            $rr{add}( $new );
            print "\t --added ", $new;
        }
        if (rand(8) < 1) {
            my $toDelete = $item;
            $toDelete = do {
                $rr{values}( rand( $rr{size}()) )
            } until $toDelete ne $item;
            print "\t--deleting: '", $toDelete;
            $rr{del}( $toDelete );
        }
        print $/;
        select undef, undef, undef, .5;
    }

    It's worth running just to see the testcase in action; leastwise I think so :)

    I've attached a short snippet of output here.