Recently I came to know about this wonderful module called Quantum::Superpositions, written by Damian Conway.
The concept it is built around is really enlightening (in spite of a few problems in the doc, tackled some time ago in Quantum::Superpositions does not work as advertised), and it reminded me of such things as Prolog: "damn, this is high-level programming".

Then I read the various examples given as possible uses. The first one said:

Thus, these semantics provide a mechanism to conduct parallel searches for minima and maxima:
sub min { eigenstates( any(@_) <= all(@_) ) }
The so-called 'parallel search' here is a reference to a hypothetical quantum computer. So this search is not parallel. And not only is it sequential (like everything else on classic single-processor PCs to this day), it is damn slow!

So let's see what happens inside Damian's code (which is quite beautiful, but that's another problem) (his code, not Damian) (well, I never met him so I couldn't tell) when we look for the minimum of, say, (3,1,2) using any(3,1,2) <= all(3,1,2):
The overloaded operator calls the function qblop (nice name for a function :):

q{<=}   =>  sub { qblop(swap(@_), sub { $_[0] <= $_[1] }) }

which consists in:
multimethod qblop => ( Quantum::Superpositions::Disj,
                       Quantum::Superpositions::Conj,
                       CODE )
    => sub {
        &debug;
        return any() unless @{$_[0]} && @{$_[1]};
        my @dstates = @{$_[0]};
        my @cstates = @{$_[1]};
        my @dokay   = (0) x @dstates;
        foreach my $cstate ( @cstates )            # $cstate in (3,1,2)
        {
            my $matched;
            foreach my $d ( 0..$#dstates )         # $d in (0..2)
            {
                $matched = ++$dokay[$d]
                    if istrue(qblop($dstates[$d], $cstate, $_[2]));
            }
            return any() unless $matched;
        }
        return any @dstates[grep { $dokay[$_] == @cstates } (0..$#dstates)];
    };
So this function yields the right answer (any(1)) after 9 calls to istrue().
More generally, it will always take n*n comparisons (each istrue() finally resolves to a comparison between numbers) to determine the minimum of a list of n elements using this approach.
The simple method, in one pass over the list, would have cost you only n-1 comparisons. For instance:

$min = $list[0];
for my $i (1..$#list) {
    $min = $list[$i] if $min > $list[$i];
}
(I do know it's possible to make it less verbose, but my point is precisely that being concise should not prevent you from knowing what's really happening under the hood.)
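To make the cost difference concrete, here is a small sketch of my own (not using Q::S itself, and with function names I invented) that instruments both approaches with a comparison counter: an all-pairs loop mimicking what the any() <= all() evaluation ends up doing, against the simple one-pass scan:

```perl
use strict;
use warnings;

# All-pairs minimum: every candidate (the any() side) is compared against
# every element (the all() side), just as qblop() does -- n*n comparisons.
sub min_all_pairs {
    my @list  = @_;
    my $count = 0;
    my @ok;
    for my $i (0 .. $#list) {
        my $is_min = 1;
        for my $j (0 .. $#list) {
            $count++;
            $is_min = 0 unless $list[$i] <= $list[$j];
        }
        push @ok, $list[$i] if $is_min;
    }
    return ($ok[0], $count);
}

# One-pass minimum: n-1 comparisons for n elements.
sub min_one_pass {
    my @list  = @_;
    my $count = 0;
    my $min   = $list[0];
    for my $i (1 .. $#list) {
        $count++;
        $min = $list[$i] if $min > $list[$i];
    }
    return ($min, $count);
}

my ($m1, $c1) = min_all_pairs(3, 1, 2);
my ($m2, $c2) = min_one_pass(3, 1, 2);
print "all-pairs: min=$m1 in $c1 comparisons\n";   # 9 comparisons
print "one-pass:  min=$m2 in $c2 comparisons\n";   # 2 comparisons
```

For (3,1,2) the all-pairs version performs the same 9 number-to-number comparisons counted above, against 2 for the scan.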

So I thought, "still this module is cool, who cares about bad examples?". But the following part made me fall off my chair:

The power of programming with scalar superpositions is perhaps best seen by returning to quantum computing's favourite adversary: prime numbers. Here, for example, is an O(1) prime-number tester, based on naive trial division:
sub is_prime { my ($n) = @_; return $n % all(2..sqrt($n)+1) != 0 }
What makes me eat my mouse here is the *O(1)*. This just means that when the number to test becomes large, the testing time tends to a constant, independent of $n.
I know that if I had seen this a few years ago, I would have thought that Damian was a genius (which I'm not saying he isn't) and that I would only use his method.
Now this is dangerous. No doubt the writing and debugging time is O(1); but the running time is still as big as if you had explicitly written the underlying for loop. The number of integer divisions is O(sqrt(n)) in the worst case. And that's absolutely not clear in Quantum::Superpositions's doc.
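For the record, here is a sketch (mine, not from the Q::S doc) of the explicit loop hiding behind that one-liner: the same naive trial division, with a counter that makes the O(sqrt(n)) worst-case division count visible. The guards for small n are my addition, to cover the cases the 2..sqrt($n)+1 range misses:

```perl
use strict;
use warnings;

# Naive trial division with an explicit division counter.
sub is_prime_explicit {
    my ($n) = @_;
    my $divisions = 0;
    return (0, $divisions) if $n < 2;
    return (1, $divisions) if $n < 4;            # 2 and 3 are prime
    for my $d (2 .. int(sqrt($n)) + 1) {
        $divisions++;
        return (0, $divisions) if $n % $d == 0;  # found a divisor: composite
    }
    return (1, $divisions);
}

my ($prime, $work) = is_prime_explicit(10007);
print "10007 is ", ($prime ? "prime" : "composite"),
      " after $work divisions\n";                # ~sqrt(10007): 100 divisions
```

A prime like 10007 costs about sqrt(10007), i.e. 100 divisions; the cost visibly grows with $n, it does not tend to a constant.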

So my conclusion is: TMTOWTDI, fine. But I still like to choose the best way for me. People always advertise their method as being the coolest, but rarely as being the heaviest.

Of course, I am not insinuating that Damian does not know how fast his fantastic module is for such operations as computing a minimum or finding prime numbers; and I know that he knows that the *parallel processing* he's writing about is purely hypothetical. I'm saying that for a monk without the appropriate theoretical programming background, these points are not obvious and could be misleading. After all, since Larry Wall himself tells us about *magic* in Perl, why should we question the impossible?
I am suggesting that maybe his examples could be better chosen, or that it could be explicitly stated that they are not meant to be efficient in terms of execution time.

I also wish to extend this remark to many other modules (Quantum::Entanglement being excluded) for not stating in their doc how fast they really are. Being cool is nice when I post on perlmonks, 'cause, well, that makes me look cool. But for things I don't show to everyone, I really prefer listening to my Impatience, finding out what my code really does, and being fast.

So what about you?

Blop

Replies are listed 'Best First'.
Re: about Coolness, Impatience and Complexity
by Masem (Monsignor) on Jul 12, 2001 at 17:41 UTC
    I think you need to remember that if the Quantum::Superpositions module were run on a true quantum computer, in which a 'bit' can hold multiple state information truly simultaneously, then the any() and all() functions operate in O(1) time (instead of O(n) time for a typical '1 bit == 1 state' PC), and all other variations based on any() or all() reflect this. The prime number calculator, for example, is an O(n) operation on any PC today; but if a quantum computer which can return all() in O(1) time existed, this would be an O(1) method.

    Sure, we don't have a true quantum computer yet, but I think the interesting thing with Q::S and the other such classes is that it makes some problems much more interesting to think about, possibly offering insight into how to approach a problem. The running time of the solution may not be as efficient as it might seem, and actually may be worse than the normal, '1 value per bit' PC, but if/when we have a quantum computer, that new solution could become invaluable.


    Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain
Re: about Coolness, Impatience and Complexity
by Abigail (Deacon) on Jul 12, 2001 at 18:34 UTC
    I doubt I've ever seen someone so completely miss the point of a module. Q::S isn't about speed, it's about showing the power of quantum operations, the any and all functions. It's a simulation of a machine. And yes, the primality tester is O(1). It's just using different units than you are used to. Note that all big-Oh run time complexities are in numbers of elementary operations. For instance, in computational geometry, deciding whether a point lies right or left of a line is an O(1) operation, regardless of how many bits we need to express the coordinates of the points when we would actually simulate such operations on a current-day device. In the world of Q::S we have different elementary operations.

    Finally it should be pointed out that Q::S was mainly intended as a joke. Most people got it - but some didn't.

    -- Abigail

      I recognize this module as an interesting insight into a possible evolution of computers. I do understand that it describes methods that, if implemented in hardware with the appropriate techniques, would:
      1. be a great step forward (precisely for the parallel computing Damian underlines it would allow);
      2. imply that we consider the writing of algorithms differently from what we do now with our 'sequential' machines.

      I live in a real world, work with a real PC, and have seen real people using this module. Its documentation does not say "this is a joke", and it is really available on CPAN.

      If it was intended only as a joke, I had no way to know, as its documentation doesn't say so. Moreover, for the reasons stated above, and because it provides a concise, expressive, elegant way of doing certain things (including computing minimums if you are not in a hurry), I do not think it should be considered *mainly* as a joke.

      But I probably put too much focus on Q::S. I mainly intended it as an example to illustrate one of the possible shortcomings of *"concise, expressive, elegant"* ways of doing things: people misunderstanding how much computation there is behind a simple, innocent, pretty expression.

      Concerning the particular complexity of this prime number tester, I have to agree with you, Abigail. *It's just using different units than I am used to.* The problem is that the units are not specified. I did understand that Damian was expressing his complexity in terms of elementary quantum operations. But please admit that in everyday programming, what interests most of us is complexity in terms of time (or space, sometimes). This O(1) is purely prospective and does not mean much for the pragmatic programmer of today.

      For your specific example of the point and the line, I agree too: I need, basically, 2 multiplications, one subtraction, and one comparison. So it runs in constant time on everyday PCs (no matter how long the line is :).
      Still, I can say it is an operation in O(n) (time) if I consider my little brain as the target platform (the larger the numbers, the longer it takes me to compute multiplications). But if I say so, I will definitely not forget to specify the unit (time) and the platform (your servant), for they are not what people are accustomed to.
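      The left-or-right test we are discussing can be sketched as the sign of a 2x2 determinant (the function name and argument layout below are my own invention, not anything from Q::S or a geometry library):

```perl
use strict;
use warnings;

# Decide on which side of the directed line A->B the point P lies:
# the sign of the cross product (B-A) x (P-A). A constant handful of
# multiplications, subtractions and one comparison, whatever the
# coordinate values -- O(1) in elementary arithmetic operations.
sub side_of_line {
    my ($ax, $ay, $bx, $by, $px, $py) = @_;
    my $cross = ($bx - $ax) * ($py - $ay) - ($by - $ay) * ($px - $ax);
    return $cross <=> 0;    # 1 = left of A->B, -1 = right, 0 = on the line
}

print side_of_line(0, 0, 1, 0, 0, 1), "\n";   # above the x-axis: left (1)
```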

      (Maybe I should try and be more concise myself.) BTW, you seem to like polemics, Abigail, and I like that too :).

      Blop!
        Sometimes, and this is on a case by case basis, simplification, abstraction, or any other way to reduce code may be simpler to maintain, document, and explain to others than code that might run faster but is overall a bit harder to read.

        For example, given hypothetical:

        my @d = 2 * all( @values );     # Method 1, assume 'use Q::S'
        my @d = map { 2*$_ } @values;   # Method 2
        (and include method 3, which would be the functional/Haskell approach to this problem, which I know you can write, but I have no idea how to write it myself :-).

        Method 2 is the fastest of these, but to those not versed in Perl, it may be harder to describe than method 1, which 'reads' like what it's supposed to do. Depending on the situation I was in when coding this, I might purposely use the Q::S method if only to convey meaning more so than method 2; that situation would most likely be limited to scientific groups with an understanding of computing but no strong computer-language skills. On the other hand, if I was with a web site design shop, I'd automatically advocate the use of method 2, which is straightforward from (what we hope to be) the viewpoint of experienced computer programmers.


        Dr. Michael K. Neylon - mneylon-pm@masemware.com || "You've left the lens cap of your mind on again, Pinky" - The Brain
      Note that all big-Oh run time complexities are in numbers of elementary operations.
      If you really want to get down to the brass tacks, a computer scientist is going to define "elementary operations" in terms of a Turing machine (or something equivalent). Since a Turing machine has a finite number of states and symbols available, it can only push a finite number of bits around at once. Multiplying two n-bit numbers together is an O(n**2) operation using the algorithm they taught you in grade school. It can be done in O(n*log(n)) with FFT techniques.
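      To make the grade-school bound concrete, here is a sketch (mine, with an invented helper name) of long multiplication on digit strings that counts the single-digit multiplications it performs -- exactly n*n for two n-digit numbers:

```perl
use strict;
use warnings;

# Grade-school (long) multiplication of two non-negative integers given
# as digit strings, counting single-digit multiplications along the way.
sub long_multiply {
    my ($a, $b) = @_;
    my @x = reverse split //, $a;      # least significant digit first
    my @y = reverse split //, $b;
    my @prod = (0) x (@x + @y);
    my $digit_mults = 0;
    for my $i (0 .. $#x) {
        for my $j (0 .. $#y) {
            $digit_mults++;            # one elementary digit multiplication
            $prod[$i + $j] += $x[$i] * $y[$j];
        }
    }
    for my $k (0 .. $#prod - 1) {      # propagate carries
        $prod[$k + 1] += int($prod[$k] / 10);
        $prod[$k] %= 10;
    }
    my $result = join '', reverse @prod;
    $result =~ s/^0+(?=\d)//;          # strip leading zeros
    return ($result, $digit_mults);
}

my ($r, $ops) = long_multiply("1234", "5678");
print "$r via $ops digit multiplications\n";   # 7006652 via 16
```

Two 4-digit numbers cost 16 digit multiplications; doubling the length quadruples the work, which is the O(n**2) behaviour in question.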
        Having been a computer scientist, I strongly disagree with the statement that a Turing machine should be involved. Not at all! We use elementary operations in a certain computational model. That could be a Turing machine, but it could also be a pointer machine, a Von Neumann architecture, or something more abstract such as arithmetic or geometry. Complexity analysis for calculations on a modern computer certainly doesn't assume a Turing machine as the underlying model. You cannot achieve O(log N) binary search on a Turing machine, because indexing into an array cannot be done in constant time there - it's going to be at least linear in the amount of tape used.

        -- Abigail

      Cool, I never actually realized that about Q::S, having only read bits of code using it and not actually reading much about it.

      So this means sorting could be an O(N) operation (not necessarily using Q::S, but maybe Quantum::Sort??).

Re: about Coolness, Impatience and Complexity
by John M. Dlugosz (Monsignor) on Jul 13, 2001 at 00:50 UTC
    Higher-level programming, such as any and all, could be implemented in a language without being so slow. For example, there are Linear Constraint Programming frameworks that work well. Trilogy blew away Prolog for such things.

    Also, such a simple use might be akin to using a while loop to sort. Would a bubble sort be a good example of how to use these high-level "structured" flow mechanisms 35 years ago when they were new?

    Such a general implementation is n-squared because it knows nothing about the CODE block. If it knew that the function was transitive, it could be O(n).

    Hey, that's a good use for "properties". Tag a function with such things as transitivity and purity (the same input always generates the same output with no side effects), and the caller can optimize.

    —John