in reply to Re^2: Random data generation.
in thread Random data generation.

I wonder if you've overlooked my solution, or if there's anything wrong with it.

Not only is it simpler (both fewer lines of code and conceptually easier to understand, IMHO), it's also consistently faster than what you found to be the fastest:

$ ./846784.pl
        Rate salva almut
salva  881/s    --  -15%
almut 1037/s   18%    --

(tested with v5.10.1, x86_64-linux-thread-multi)

#!/usr/bin/perl

use Benchmark "cmpthese";

my $len      = shift @ARGV || 12;
my $max_reps = shift @ARGV || 2;

sub almut {
    my @set = qw( A B C D E F );
    for (1..100) {
        my $result = '';
        while (length($result) < $len) {
            my $char = $set[ rand @set ];
            next if substr($result, -$max_reps) eq $char x $max_reps;
            $result .= $char;
        }
    }
}

sub salva {
    my $set = [ qw( A B C D E F ) ];
    for (1..100) {
        my $out = '';
        my $last_ix = -1;
        my $reps = 0;
        for (1..$len) {
            my $ix;
            if ($reps >= $max_reps) {
                $ix = int rand(@$set - 1);
                $ix++ if $ix >= $last_ix;
            }
            else {
                $ix = int rand(@$set);
            }
            if ($ix == $last_ix) {
                $reps++;
            }
            else {
                $last_ix = $ix;
                $reps = 1;
            }
            $out .= $set->[$ix];
        }
    }
}

cmpthese(-1, {
    almut => \&almut,
    salva => \&salva,
});

Re^4: Random data generation.
by BrowserUk (Patriarch) on Jun 27, 2010 at 18:59 UTC
    I wonder if you've overlooked my solution, or if there's anything wrong with it.

    Indeed I did, as it was posted as a reply to someone else. And no, there is nothing wrong with it.

    It beats out salva's (just), and is a simple, clean, and direct implementation of the requirements. I have my winner. Thank you.

    Now that I've seen it, I'm somewhat chagrined that I didn't see it for myself. It would still be my winner even if it were slightly slower than the flags & resets versions.

    Yours is xA below:

    [19:48:48.65] c:\test>junk -I=3 ABCDEF 12
          Rate   x0   x3   x9   x8   x5   x4   x6   x7   xA
    x0  9794/s   --  -8% -13% -19% -35% -36% -38% -43% -51%
    x3 10606/s   8%   --  -6% -12% -30% -31% -33% -39% -46%
    x9 11286/s  15%   6%   --  -6% -25% -26% -29% -35% -43%
    x8 12063/s  23%  14%   7%   -- -20% -21% -24% -30% -39%
    x5 15122/s  54%  43%  34%  25%   --  -1%  -5% -12% -24%
    x4 15340/s  57%  45%  36%  27%   1%   --  -3% -11% -23%
    x6 15863/s  62%  50%  41%  32%   5%   3%   --  -8% -20%
    x7 17248/s  76%  63%  53%  43%  14%  12%   9%   -- -13%
    xA 19822/s 102%  87%  76%  64%  31%  29%  25%  15%   --

    [19:49:50.63] c:\test>junk -I=3 ABCDEF 120
          Rate    x0   x3   x4   x8   x9   x6   x5   x7   xA
    x0   275/s    -- -20% -21% -80% -80% -86% -86% -91% -92%
    x3   341/s   24%   --  -2% -75% -75% -82% -83% -89% -90%
    x4   349/s   27%   2%   -- -75% -75% -82% -82% -89% -90%
    x8  1383/s  404% 305% 296%   --  -0% -29% -30% -57% -59%
    x9  1386/s  405% 306% 297%   0%   -- -29% -30% -57% -59%
    x6  1945/s  608% 470% 457%  41%  40%   --  -1% -40% -42%
    x5  1971/s  618% 477% 465%  43%  42%   1%   -- -39% -41%
    x7  3216/s 1071% 842% 821% 133% 132%  65%  63%   --  -4%
    xA  3346/s 1119% 880% 859% 142% 141%  72%  70%   4%   --

    [19:50:43.49] c:\test>junk -I=3 ABCDEF 1200
          Rate     x4     x0     x3   x8   x9   x6   x5   x7   xA
    x4  3.33/s     --     -0%    -8% -98% -98% -98% -98% -99% -99%
    x0  3.33/s     0%     --     -8% -98% -98% -98% -98% -99% -99%
    x3  3.61/s     8%     8%     -- -97% -98% -98% -98% -99% -99%
    x8   137/s  4019%  4019%  3700%   --  -8% -25% -27% -61% -63%
    x9   149/s  4366%  4366%  4020%   8%   -- -19% -21% -58% -60%
    x6   183/s  5392%  5392%  4966%  33%  23%   --  -3% -48% -50%
    x5   189/s  5578%  5578%  5138%  38%  27%   3%   -- -46% -49%
    x7   351/s 10434% 10434%  9618% 156% 136%  92%  86%   --  -5%
    xA   368/s 10960% 10960% 10104% 169% 148% 101%  95%   5%   --
      The length of the generated data is a parameter that does not change the ranking, because the cost of every one of the given solutions is linear in it.

      If you change the set size or the maximum number of consecutive repetitions instead, the ranking is not so stable. The benchmarks below show it.

      I have also added a new solution, salva2 (much faster on average), and some variations of almut's one.

      The code:

      And the benchmarking results:

      Note that the salva2 algorithm has some bias: it is able to generate all the valid strings, but not all of them with the same probability.

        Actually, the number of reps allowed is the one parameter that doesn't vary for my scenario. Also, the length of the string will never be less than the size of the set.

        The approach you use in salva2 is very interesting, and it does make salva2 come out first in every benchmark I've run. One of my own alternatives used a similar approach: generate a big string completely at random, then pick out a compliant N-char lump with a regex and return it. Your two innovations, (a) making the whole string compliant using a regex and (b) retaining the buffer across calls, combine to make the approach viable.
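        A minimal sketch of that discarded pick-a-compliant-lump alternative might look like this (hypothetical code, not from the thread; $len, $max_reps and @set mirror the benchmark's defaults):

        ```perl
        use strict;
        use warnings;

        my $len      = 12;
        my $max_reps = 2;
        my @set      = qw( A B C D E F );

        # Generate one big random string, then let a regex find a compliant
        # $len-char lump: each char must not start a run of $max_reps+1
        # identical chars. Retry in the (rare) case no lump exists.
        sub gen_lump {
            while (1) {
                my $big = join '', map { $set[ rand @set ] } 1 .. $len * 50;
                return $1 if $big =~ /((?:(.)(?!\2{$max_reps})){$len})/;
            }
        }

        my $lump = gen_lump();
        ```

        The lookahead on the last couple of characters peeks past the lump's end, so this is slightly stricter than necessary, which hints at why regexing the whole string into compliance (as salva2 does) is the cleaner variant.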

        The funny thing is that I'm already effectively using the latter part in my code (though not in the benchmarks), inasmuch as I need a bunch of short strings, but I'm calling the generator for one long string and then chopping it into bits using unpack "(A$N)*", gen( $N * 100, @set ). That negates a part of salva2's advantage, and also trades the cost of unpack against the cost of multiple calls to the generator.
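        The chopping step can be seen in isolation (a sketch with a stand-in string instead of the real generator; $N matches the 12-char record length above):

        ```perl
        use strict;
        use warnings;

        # unpack with an "(A$N)*" template splits one long string into
        # consecutive fixed-width $N-char pieces in a single call.
        my $N    = 12;
        my $long = 'AB' x 18;                  # 36 chars, stands in for gen()
        my @bits = unpack "(A$N)*", $long;     # three 12-char strings
        ```

        Note that the A template strips trailing whitespace from each piece, which is harmless here since the generated alphabet contains none.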

        Now I have to consider whether your generate-a-big-string-and-regex-it-into-compliance approach would still be quicker in situ. It's not an easy thing to benchmark. Neither is assessing the effect it has on the randomness of the result.

        It's quite amazing how many variations can result from an ostensibly simple spec. Many thanks for your input.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.

        I think there might be a little bug in salva2. I copied the code from Re^5: Random data generation to try my own solution (middle of the pack, not worth posting), but while studying the code, I noticed a potential problem. my $buf is declared just before gen_salva2.

        my $buf;
        sub gen_salva2 {
            while (length $buf < $len) {
                $buf .= $set[rand @set] for (1..$len*100);
                $buf =~ s/(.)\1{$max_reps,}/$1 x $max_reps/ge;
            }
            substr($buf, -$len, $len, '');
        }

        So I put it inside sub gen_salva2 {} and ran it again. I benchmarked only for $elements = 6, $max_reps = 2, to match BrowserUk's original parameters. That and the change to gen_salva2 are the only changes. Notice how the rankings change.

        elements: 6 max_reps: 2
                      Rate salva2 salva almut_mod3 almut_mod4 almut_mod2 almut
        salva2      1077/s     --  -98%       -98%       -98%       -98%  -98%
        salva      54086/s  4922%    --       -10%       -13%       -13%  -17%
        almut_mod3 59766/s  5450%   11%         --        -4%        -4%   -8%
        almut_mod4 61953/s  5653%   15%         4%         --        -1%   -5%
        almut_mod2 62270/s  5682%   15%         4%         1%         --   -5%
        almut      65222/s  5956%   21%         9%         5%         5%    --

        I thought gen_salva2 got its amazing speed boost by using an already populated $buf. After running the benchmark, I reduced the count to 100 and added a print line before creating $buf to test this theory.

        my $buf;
        sub gen_salva2 {
            print "$buf\n";
            while (length $buf < $len) {
                $buf .= $set[rand @set] for (1..$len*100);
                $buf =~ s/(.)\1{$max_reps,}/$1 x $max_reps/ge;
            }
            substr($buf, -$len, $len, '');
        }

        This printed a string 1164 chars long: 4235621656212414423622661645561656223662351124233115.... Each call prints the buffer and then shortens it by 12. Only when the string gets too small does it generate a new one, so the while() loop is simply not executed very often -- about 1 time in 96 to 98 on average.

        --marmot