in reply to Re^7: If I am tied to a db and I join a thread, program chrashes
in thread If I am tied to a db and I join a thread, program chrashes

The term $count / ($N * $N * (time - $t)) is how many multiplications (or matrix cells) per second the benchmark can process.

I thought that at first, but then I looked closer.

while( my $r = $qr->dequeue ) { ++$count; if ($count == 63) { $t = time; }
  1. The count starts immediately, but the timer doesn't start until the count hits 63.

    At the very least that inflates the benchmark values a little.

  2. But then, it only checks the time each time the count hits a multiple of 64:
    elsif (($count & 63) == 0) { if (time > $t + 5) { printf "%f\n", $count / ($N * $N * (time - $t)); last; } } }

    Why would it do that? And the answer is, because it improves the performance of Coro!

As Coro threads are cooperative, if the timing thread called the relatively expensive built-in time on each iteration, the CPU time used to process that opcode would directly detract from the time the other threads spent doing multiplications.

    That operation effectively reduces the number of expensive calls to time by a factor of 64. As iThreads are preemptive, they don't need or benefit from the reduction in the number of calls to time, making it a performance multiplier for the Coro threads only. Talk about weighting the die.
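The amortisation trick itself is generic. As an illustration (my own sketch in Python, not code from the benchmark), here is a loop that polls the clock only on every 64th iteration using the same bitmask test:

```python
import time

def timed_loop(work, min_seconds=0.5, mask=63):
    # Run work() repeatedly, consulting the (relatively expensive)
    # clock only once every mask + 1 iterations.
    count = 0
    start = time.time()
    while True:
        work()
        count += 1
        # (count & mask) == 0 holds on every 64th iteration, so only
        # 1 in 64 iterations pays for a clock call.
        if (count & mask) == 0 and time.time() - start >= min_seconds:
            return count / (time.time() - start)  # iterations per second

rate = timed_loop(lambda: 3 * 7, min_seconds=0.1)
```

The point of contention above is not the trick itself but that only the cooperative (Coro) side benefits from it: a preemptive thread loses nothing by calling time every iteration, while a cooperative one pays for every opcode out of the time the other threads could have spent multiplying.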

I don't know what the copying is about. It makes sense to unshare data if you do a lot of operations on it because shared data is slower, but one multiplication isn't a lot. Maybe this was included to simulate some optimizations.

Hm. Sorry, but that doesn't make any sense at all.

To multiply the 50 pairs of values means accessing each shared value once: 100 shared accesses. Copying those values to non-shared memory also means accessing each shared value once: the same 100 shared accesses. But additionally, you have to: a) allocate the non-shared arrays; b) write the values into that non-shared memory; c) access each non-shared value once to do the multiply.

Ditto with the results. Instead of just writing them directly to shared memory, he 1) allocates non-shared; 2) writes to non-shared; 3) allocates shared; 4) copies non-shared to shared.

So, to avoid 100 'slow operations', he does: 3*100 allocations; 100 reads from shared (the very 'slow operations' he was trying to avoid!); 100 writes to non-shared; 200 reads from non-shared; 100 writes to shared. Not so much of an optimisation.
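For the record, that tally can be made mechanical. This little Python sketch (my own, using exactly the counts from the paragraph above and treating every access as one unit) just totals the two approaches:

```python
VALUES = 100  # the 100 shared input values (50 pairs) from the post

# Direct approach: multiply straight out of shared memory.
direct = {"shared reads": VALUES}

# Copy-first approach: unshare the inputs, multiply, re-share the results.
copy = {
    "allocations":       3 * VALUES,  # non-shared inputs, non-shared results, shared results
    "shared reads":      VALUES,      # still read every shared value once, just to copy it
    "non-shared writes": VALUES,
    "non-shared reads":  2 * VALUES,  # once to multiply, once to copy the results back
    "shared writes":     VALUES,
}

total_direct = sum(direct.values())  # 100 operations
total_copy = sum(copy.values())      # 800 operations
```

Even if a shared access cost several times a non-shared one, the copy-first route still performs the same 100 shared reads (plus 100 shared writes for the results), so it can never come out ahead here.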

And note: all of these shenanigans happen only on the iThreads side of the benchmark.

Cleaning up of the queues is not necessary because (again) this isn't a real solution to a problem but a synthetic benchmark.

But that (again) is a totally Coro-biased view of things.

With 4 (preemptive) threads continually generating matrices--breaking them up into chunks (that will never be processed), sharing them, and firing them into a shared queue (injecting sync points into every thread that has visibility of that queue; i.e. all of them)--they are directly competing for the CPUs with the 4 threads that are meant to be doing the work.

Continually adding more and more data to a queue that's never going to be processed means constantly reallocating and copying the underlying shared array as the queue size doubles and redoubles. It's like timing how long it takes to stick labels on boxes whilst the operator is continuously having to unload and reload the boxes he's already done onto bigger and bigger lorries. The time taken to affix the labels is entirely lost in the noise of everything else he is having to do.
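The reallocation cost is easy to picture. Assuming the queue is backed by an array that doubles its capacity whenever it fills (a common growth strategy; I'm not claiming this is exactly how the shared array grows internally), a quick Python sketch counts the element copies incurred just by appending:

```python
def doubling_copies(n_items, initial_capacity=8):
    # Count element copies incurred while appending n_items to an
    # array that doubles its capacity each time it fills up.
    capacity, size, copies = initial_capacity, 0, 0
    for _ in range(n_items):
        if size == capacity:
            copies += size   # every existing element is moved to the new array
            capacity *= 2
        size += 1
    return copies

# Appending 10,000 items costs 16,376 element copies along the way --
# and for a *shared* array, every one of those is a slow cross-thread copy.
print(doubling_copies(10_000))
```

In the benchmark the producers never stop, so these copies, and the locking around them, happen continuously while the workers are being timed.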

And again, the bias is all in favour of Coro, because with limited-size queues the Coro generating threads will have blocked long before he ever starts timing. I.e. 63 * 50 = 3,150 > 512.

The only result anyone is interested in is the time it takes.

In that case, I offer:

perl -E"$s=time(); $i=0; ++$c, rand()*rand() while time() < $s+5; say +$c/5"
3210038.8

Let's see Coro compete with that! It's just as (un)fair a comparison as this benchmark.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Re^9: If I am tied to a db and I join a thread, program chrashes
by marioroy (Prior) on Feb 18, 2013 at 23:31 UTC

    Many-core Engine for Perl (MCE) comes with several examples demonstrating matrix multiplication in parallel across many cores. The readme contains benchmark results as well.

Re^9: If I am tied to a db and I join a thread, program chrashes
by jethro (Monsignor) on Jun 11, 2009 at 12:33 UTC

    Ah, I didn't see that with the count not subtracting 64. So another bug. Checking the time only every 64th round is legitimate though, as benchmarks should exclude any overhead of the measurement itself. As the ithreads version has no disadvantage from that and the Coro measurements are more exact, this is not weighting the die.

    I don't know what the copying is about. It generally makes sense to unshare data...

    Sorry, my English has bugs as well. I think you misunderstood what I was saying here. I hope the sentence is more understandable now with the added word "generally". That his use of it in this benchmark is massively weighting the die is without question.

    As I was saying in the last post (maybe not clearly enough), this script can only work as a Coro benchmark. I'm not arguing that the ithreads side of that code has any merit (I didn't even look at it when I was inspecting the code). But apart from the bug with the time measurement, the Coro side of the benchmark seems to be a valid benchmark, and on that side the design decisions of the writer make sense (to me at least). I suspect that Marc Lehmann first had the (sensible) Coro version and then added an ithreads version without taking into account that a direct translation to ithreads makes no sense. Whether he did that on purpose, who knows? It was at least incredibly sloppy or stupid if it wasn't on purpose; that he put the benchmark on the net might indicate the former.

    ...means constantly reallocating and copying the underlying shared array...

    I thought that with "cleaning up the queues" you meant processing the rest of the queue after the last time measurement was done. Now your point makes more sense.

      But apart from the bug with the time measurement the Coro side of the benchmark seems to be a valid benchmark.

      I don't wish to press the point, though I suppose I am by even mentioning it, but I'm not sure it makes much sense even as a standalone benchmark of Coro. I'll explain why, but don't feel the need to respond.

      What exactly is it benchmarking?

      • Given the output number: "multiplications ... per second", you might say multiplication...

        but of course it's Perl that's doing the multiplication, and if you take Coro out of the picture, Perl alone wins hands down.

      • And if the test is how quickly Coro can switch between its 'threads', with the multiplications as just the metric indicating how much (or little) time penalty Coro thread-switching imposes...

        Why not just set four threads running doing multiplies in a loop, ceding every N iterations?

      • And if the purpose is to test the efficiency of Coro queues...

        Why bother with all the multiplying?

      I really cannot see the merit of the benchmark, either as a comparative study of Coro and iThreads or as a standalone test of Coro itself.

      One thing is for sure, if this is the basis of the POD claim: "A parallel matrix multiplication benchmark runs over 300 times faster on a single core than perl's pseudo-threads on a quad core using all four cores.", then quite frankly, he should be prosecuted by the Statistics Police :)

      And the sentence: Unlike the so-called "Perl threads" (which are not actually real threads but only the windows process emulation ported to unix, and as such act as processes), is a candidate for the I-know-what-you-were-trying-to-say-but-that-isn't-it of The Year award. :)

      I'll continue to endeavour to get Coro to build on my system, and if I succeed, I'll attempt to produce a fair comparison of matrix multiplication using both. Within the limitations of Coro, I believe that it would still show Coro in a good light. threads::shared memory is horribly and unnecessarily slow. I wish I could see how to address that. But the claim above is frankly ludicrous.



        I tried to resist but eventually I lost my resolve and responded ;-)

        What exactly is it benchmarking?

        My answer would be matrix multiplication. OK, his test matrices are too small (which makes it a worst-case benchmark). But in the proceedings of the Perl workshop there is a diagram where matrix multiplications/s (not simply multiplications/s) are compared to the matrix size. The diagram shows that he tested variable matrix sizes, up to 1000x1000 matrices, and also used a different benchmark metric. PS: I found the diagram on the same server as the test script: http://data.plan9.de/mat.png

        Naturally the Coro version is slower than pure Perl. But the interesting thing is how much slower. Threads allow different programming styles or paradigms, for example producer/consumer relationships. How big is the penalty for doing it this way instead of the simple iterative way?

        ...I'll attempt to produce a fair comparison...

        I'm anxious to hear those results. I might even show Marc Lehmann the results at the next Perl workshop, if he is there.