http://qs1969.pair.com?node_id=213019


in reply to Efficient run determination.

It's all about using the right language for the job.

use Data::Dumper;

@res = p_process(' aaaa bbbbccccccccbbb aaaaabbbbbcdddddddddddddddddddd');
print Dumper(@res);

use Inline C => <<'END_OF_C_CODE';
void p_process(char *s) {
    char prev = 'Q';
    long count = 0;
    long pos = 0;
    long i = 0;
    AV *array;
    Inline_Stack_Vars;
    Inline_Stack_Reset;

    while (*s != 0) {
        if (count == 0) {               /* start of a new run */
            pos = i;
            prev = *s;
            count = 1;
        }
        else if (prev == *s) {          /* run continues */
            count++;
        }
        else {                          /* run ended: emit [char, pos, len] */
            array = newAV();
            av_push(array, newSVpvn(&prev, 1));
            av_push(array, newSViv(pos));
            av_push(array, newSViv(count));
            Inline_Stack_Push(newRV_inc((SV *)array));
            pos = i;
            prev = *s;
            count = 1;
        }
        i++;
        s++;
    }

    if (count) {                        /* emit the final run too */
        array = newAV();
        av_push(array, newSVpvn(&prev, 1));
        av_push(array, newSViv(pos));
        av_push(array, newSViv(count));
        Inline_Stack_Push(newRV_inc((SV *)array));
    }

    Inline_Stack_Done;
}
END_OF_C_CODE

(Note: if you have a very long string, you should look at shifting things into a top-level array instead of pushing onto the stack directly, since the stack might run out of room.)
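
Something along these lines would do it (a sketch only, untested; the p_process_ref name and the simplistic refcount handling are illustrative, not part of the original):

use Inline C => <<'END_OF_C_CODE';
/* Sketch: same run-finding loop, but the triplets go into one top-level AV
   and a single reference is returned, so nothing accumulates on the perl
   stack however long the string is. */
SV* p_process_ref(char *s) {
    AV *out = newAV();
    AV *trip;
    char prev = 'Q';
    long count = 0, pos = 0, i = 0;

    for (; *s != 0; i++, s++) {
        if (count == 0)      { pos = i; prev = *s; count = 1; }
        else if (prev == *s) { count++; }
        else {
            trip = newAV();
            av_push(trip, newSVpvn(&prev, 1));
            av_push(trip, newSViv(pos));
            av_push(trip, newSViv(count));
            av_push(out, newRV_noinc((SV *)trip));
            pos = i; prev = *s; count = 1;
        }
    }
    if (count) {             /* flush the final run */
        trip = newAV();
        av_push(trip, newSVpvn(&prev, 1));
        av_push(trip, newSViv(pos));
        av_push(trip, newSViv(count));
        av_push(out, newRV_noinc((SV *)trip));
    }
    return newRV_noinc((SV *)out);
}
END_OF_C_CODE

my $runs = p_process_ref(' aaaa bbbbccccccccbbb');
print "@$_\n" for @$runs;    # each element is [char, start, length]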

This one doesn't handle char 0, since that is the C string termination character. A fairly trivial modification, working from the incoming SV and its length rather than from a NUL-terminated C string, would fix that, however.
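
Roughly what that modification might look like (a sketch only, untested; the p_process_sv name, the mortalised refs, and the final-run flush are my additions):

use Inline C => <<'END_OF_C_CODE';
/* Sketch: take the argument as an SV*, get the buffer and its true length
   with SvPV, and loop on the length rather than on the NUL terminator, so
   embedded \0 bytes are treated like any other character. */
void p_process_sv(SV *sv) {
    STRLEN len;
    char *s = SvPV(sv, len);
    char prev = 'Q';
    long count = 0, pos = 0, i;
    AV *array;
    Inline_Stack_Vars;
    Inline_Stack_Reset;

    for (i = 0; i < (long)len; i++, s++) {
        if (count == 0)      { pos = i; prev = *s; count = 1; }
        else if (prev == *s) { count++; }
        else {
            array = newAV();
            av_push(array, newSVpvn(&prev, 1));
            av_push(array, newSViv(pos));
            av_push(array, newSViv(count));
            Inline_Stack_Push(sv_2mortal(newRV_noinc((SV *)array)));
            pos = i; prev = *s; count = 1;
        }
    }
    if (count) {             /* flush the final run */
        array = newAV();
        av_push(array, newSVpvn(&prev, 1));
        av_push(array, newSViv(pos));
        av_push(array, newSViv(count));
        Inline_Stack_Push(sv_2mortal(newRV_noinc((SV *)array)));
    }
    Inline_Stack_Done;
}
END_OF_C_CODE

my @runs = p_process_sv("aa\0\0b");    # embedded NULs are now just another run
print "@$_\n" for @runs;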

Re: Re: Efficient run determination.
by PhiRatE (Monk) on Nov 15, 2002 at 09:20 UTC
    Well, I ran some benchmarks myself, basically because I wanted validation :) I like to think I'm not biased but that's probably untrue :)

    For 1000 iterations over the given test string, the time to complete is given in seconds. Some of the solutions provided were modified slightly so that they were contained in a function, and returned an AoA containing the results.

    PhiRatE 1:  0.042767s
    Dingus 2:   0.083538s
    Dingus 1:   0.106047s
    TommyW 1:   0.117478s
    Enlil 2:    0.138005s
    Robartes 1: 0.217086s
    Rasta 1:    0.273936s
    

    The Inline C function I proposed is nearly twice as fast as the nearest all-perl competitor (Dingus' 2nd attempt). I must admit it's not as much of an advantage as I had expected, but it is still clear. A run of 10000 each revealed almost exactly the same ratios.

    Please note that no attempt to validate the correctness of any of the entries was made.
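
    For the curious, each entry was timed with a simple wrapper along these lines (a sketch; Time::HiRes, the entry() name, and the stand-in body are illustrative, not the exact harness used):

    use strict;
    use warnings;
    use Time::HiRes qw(gettimeofday tv_interval);

    my $string = ' aaaa bbbbccccccccbbb aaaaabbbbbcdddddddddddddddddddd';

    # Stand-in for one of the solutions: each was wrapped as a sub returning an AoA.
    sub entry {
        my $s = shift;
        my @runs;
        while ($s =~ /\G((.)\2*)/gs) {
            push @runs, [ $2, $-[1], length $1 ];
        }
        return @runs;
    }

    my @res;
    my $t0 = [gettimeofday];
    for (1 .. 1000) {
        @res = entry($string);
    }
    print "entry: ", tv_interval($t0), "s\n";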

    For further interest I did a test with a considerably longer string (1920 chars):

    PhiRatE:     2.238917s
    Dingus 1:    5.350828s
    Dingus 2:    5.474964s
    Enlil 2:     9.461141s
    TommyW 1:    8.870724s
    Robartes 1: 10.239816s
    Rasta 1:    19.996283s
    

    The inline method is even further ahead in this test. Interestingly, Dingus 2, which had previously performed second-best, dropped to 3rd, possibly due to the different nature of the data or to some non-O(1) operations within the calls it makes. Others held up variously against the increased length. The operation was fairly memory-intensive, but I ran all the competitors in various orders to ensure that this wasn't affecting anything.

      Nice one. Now send me a version that I don't have to compile :)

      Seriously, thanks for doing the benchmark. I am doing the same right now, but am having trouble ensuring that I'm comparing apples with apples. I have one (perl) solution that hasn't been posted yet which might do better, but it's very difficult to benchmark as it relies on @- to get the positions and lengths and this system var goes out of scope very quickly.
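
      (For anyone following along: @- and @+ are reset by the next successful match, so the usual way to keep the values around is to copy them into lexicals straight after the match, at some cost, as noted in the code below. A quick sketch of the mechanism, not taken from the solution itself:)

      my $string = 'aaabbc';
      if ( $string =~ /(a+)(b+)/ ) {
          # Snapshot the offset arrays right away; any later successful match resets them.
          my @starts = @-;
          my @ends   = @+;
          $string =~ /c/;                      # clobbers @- and @+ ...
          print "$starts[1]..$ends[1]\n";      # ... but the copies still say 0..3
      }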

      Here is my simplistic test program, run on a 1920-char string. One run only, and a different string from yours, so direct comparison is dodgy, but it finds 351 triplets in the 1920-char string in a little under 0.4 of a second, which may mean it isn't doing so badly by comparison with your C version. Maybe you could include it in your benchmark (HARD!) or at least time one run of it using the same data as you did for your other tests, for comparison.

      #! perl -slw
      use strict;

      use re 'eval';    #! Took me ages to find this.

      use Benchmark::Timer;
      my $t = new Benchmark::Timer;

      #! Set up big regex. 1-time hit.
      my $re = '(?:(.)(??{"$+*"}))?' x 500;
      $re = qr/$re/o;

      # Test data (+2) x 32 to make it more comparable
      $_ = ' aaaa bbbbccccccccbbbb aaaabbbbbcdddddddddddddddddddddd' x 32;

      #! All the data is gathered here in one shot.
      #! Chars in @c - start positions in @-, lengths derived: length N = $-[N+2] - $-[N+1]
      #! Caveat. lengths must be used immediately (or stored elsewhere, which costs) else they go stale:)
      $t->start('bigre');

      my @c = m/$re/;    #! THIS LINE DOES ALL THE WORK.

      #! This truncates the list to exclude null matches returned from the regex.
      $#c = $#- - 1;

      $t->stop('bigre');

      #! data accessed here.
      printf "('%1s', %3d, %3d)\n", $c[$_], $-[$_+1], ( $-[$_+2] || $+[0] ) - $-[$_+1]
          for 0..$#c;

      print $#c, ' triplets found in ';
      $t->report;

      __END__
      C:\test>212796-2
      (' ', 0, 3)
      ('a', 3, 4)
      (' ', 7, 3)
      .... 345 values omitted ....
      ('b', 1892, 5)
      ('c', 1897, 1)
      ('d', 1898, 22)
      351 triplets found in 1 trial of bigre (371ms total)

      C:\test>

      The basic idea was to push the loop into the regex engine instead of keeping it external to it, and to use the regex engine's well-tuned optimisations to do the data gathering for me. I originally thought that using the /g modifier would do the trick, but it only returns the positions for the last match made. Then I thought of using a repeated regex ({1,n}), but again this is treated such that only the positions for the last capture group are available.

      The answer was to construct a single large regex with enough capture groups to gather the data I needed. This was made harder by the need to capture only the first char of each group while also knowing its end position. That's when I discovered the (??{}) construct and combined it with $+ to achieve my aims.
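
      To illustrate one unit of that regex (a toy demo of the idea only, not the benchmark code; it assumes the run character isn't a regex metacharacter, which holds for the test data):

      use strict;
      use warnings;

      # (.) captures the first char of a run; the postponed (??{ }) block then
      # interpolates "$+*", i.e. zero or more of the char just captured,
      # which consumes the rest of that run.
      'aaaabbb' =~ /^(?:(.)(??{ "$+*" }))?/;
      printf "first char %s, run ends at offset %d\n", $1, $+[0];    # a, 4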

      I'm not sure that this is good or maintainable Perl, but it seems to work and appears on preliminary tests to be very fast, which was and is the only criterion by which I am judging it.


      Okay you lot, get your wings on the left, halos on the right. It's one size fits all, and "No!", you can't have a different color.
      Pick up your cloud down the end and "Yes", if you get allocated a grey one, they are a bit damp underfoot, but someone has to get them.
      Get used to the wings fast cos it's an 8-hour day...unless the Governor calls for a cyclone or hurricane, in which case 16-hour shifts are mandatory.
      Just be grateful that you arrived just as the tornado season finished. Them buggers are real work.

        As I noted, C is the language for the job. It doesn't really matter how efficient the regex solution is; C can work on the data in place without incurring much in the way of dynamic allocation costs. You have mitigated this disadvantage to an extent in your version by placing certain limitations on the algorithm, notably the 500-unit capture max, which changes the situation considerably. However, the design you chose isn't competitive, unfortunately :/ primarily, I suspect, because this isn't an optimised path within the re engine when used like this, although that is conjecture since I'm not familiar with the internals.

        Length: 1920
        Enlil 2:    0.93203s
        Dingus 1:   0.530455s
        Dingus 2:   0.537765s
        Rasta 1:    1.973259s
        TommyW 1:   0.87257s
        Robartes 1: 0.996671s
        PhiRatE:    0.232084s
        BrowserUk:  2.764623s
        

        edit: note that this is for 100 iterations

        With the code done like this:

        # BrowserUk
        $t0 = [gettimeofday];

        #! Set up big regex. 1-time hit.
        my $re = '(?:(.)(??{"$+*"}))?' x 500;
        $re = qr/$re/o;

        for (1..100) {
            @res = browseruk($stn);
        }
        print "BrowserUk: ".tv_interval( $t0 )."\n";

        sub browseruk {
            $_ = shift;

            my @c = m/$re/;    #! THIS LINE DOES ALL THE WORK.

            #! This truncates the list to exclude null matches returned from the regex.
            $#c = $#- - 1;

            return \@c;
        }

        I anticipated that the startup cost of the regex generation might perhaps be causing the performance problem, so I ran a 1000-iteration test as well, against only my entry.

        Length: 1920
        PhiRatE:    2.241276s
        BrowserUk: 28.384692s
        

        Note that you got similar results; I'm running 100-1000 runs of the same line, where you only did one :)

        My recommendation is to go with either the Inline C version, if you really need the speed, or Dingus' 2nd variant, which is the cleanest of the closest pure-Perl variants.