Recently I came to know about this wonderful module called Quantum::Superpositions, written
by Damian Conway.
The concept it is built around is really enlightening (in spite of a few problems in the doc, tackled
some time ago in "Quantum::Superpositions does not work as advertised"), and it reminded me of such things as Prolog: "damn, this is high-level
programming".
Then I read the various examples given as possible uses. The first one said:
Thus, these semantics provide a mechanism to conduct parallel searches for minima and maxima:

```perl
sub min { eigenstates( any(@_) <= all(@_) ) }
```

The so-called "parallel search" here is a reference to a hypothetical quantum computer. So this search is not parallel. And not only is it sequential (as everything else on classic single-processor PCs to this day), but it is damn slow!
So this function yields the right answer (any(1)) only after 9 calls to istrue(), as the multimethod doing the work shows:

```perl
multimethod qblop => ( Quantum::Superpositions::Disj,
                       Quantum::Superpositions::Conj,
                       CODE )
=> sub {
    &debug;
    return any() unless @{$_[0]} && @{$_[1]};
    my @dstates = @{$_[0]};
    my @cstates = @{$_[1]};
    my @dokay   = (0) x @dstates;
    foreach my $cstate ( @cstates )       # $cstate in (1,2,3)
    {
        my $matched;
        foreach my $d ( 0..$#dstates )    # $d in (0..2)
        {
            $matched = ++$dokay[$d]
                if istrue(qblop($dstates[$d], $cstate, $_[2]));
        }
        return any() unless $matched;
    }
    return any @dstates[ grep { $dokay[$_] == @cstates } (0..$#dstates) ];
};
```
(I do know it's possible to make it less verbose, but my point is precisely that being concise should not prevent you from knowing what's really happening under the hood.) For comparison, the plain-Perl minimum:

```perl
$min = $list[0];
for my $i (1..$#list) {
    $min = $list[$i] if $min > $list[$i];
}
```
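To make the cost concrete, here is a sketch of my own (instrumented code, not the module's internals) that counts comparisons: emulating any(@l) <= all(@l) on a classical machine tests every element against every element, so an N-element list costs N*N comparisons, while the plain loop needs only N-1.

```perl
use strict;
use warnings;

my $comparisons = 0;

# Classical emulation of any(@l) <= all(@l): an element "survives"
# only if it is <= every element of the list. Every pair is tested,
# so an N-element list costs N*N comparisons.
sub min_superposition_style {
    my @l = @_;
    my @survivors;
    for my $cand (@l) {
        my $ok = 1;
        for my $elem (@l) {
            $comparisons++;
            $ok = 0 unless $cand <= $elem;
        }
        push @survivors, $cand if $ok;
    }
    return $survivors[0];
}

# The straightforward linear scan: N-1 comparisons.
sub min_linear {
    my @l = @_;
    my $min = $l[0];
    for my $i (1 .. $#l) {
        $comparisons++;
        $min = $l[$i] if $min > $l[$i];
    }
    return $min;
}

my @list = (3, 1, 2);

$comparisons = 0;
my $min1   = min_superposition_style(@list);
my $count1 = $comparisons;                 # 9 comparisons for 3 elements

$comparisons = 0;
my $min2   = min_linear(@list);
my $count2 = $comparisons;                 # 2 comparisons for 3 elements

print "superposition-style: $min1 ($count1 cmps); linear: $min2 ($count2 cmps)\n";
```

The 9 here matches the 9 istrue() calls only by construction; the real module adds multimethod dispatch and object overhead on top of the raw comparisons.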
So I thought, "still this module is cool, who cares about bad examples?". But the following part made me fall off my chair:
The power of programming with scalar superpositions is perhaps best seen by returning to quantum computing's favourite adversary: prime numbers. Here, for example, is an O(1) prime-number tester, based on naive trial division:

```perl
sub is_prime {
    my ($n) = @_;
    return $n % all(2..sqrt($n)+1) != 0;
}
```

What makes me eat my mouse here is the <i>O(1)</i>. That claim would mean that as the number to test grows, the testing time tends to a constant, independent of $n, which is certainly not what happens on a classical machine.
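For contrast, here is a plain trial-division tester (my own sketch, not the module's code), instrumented to count modulo operations. The count, and hence the running time, grows roughly as sqrt($n): honest O(sqrt n), not O(1).

```perl
use strict;
use warnings;

my $divisions = 0;

# Naive trial division, spelling out the modulo tests that the
# superposition version performs implicitly, one per candidate divisor.
sub is_prime_classical {
    my ($n) = @_;
    return 0 if $n < 2;
    for my $d (2 .. int(sqrt $n) + 1) {
        $divisions++;
        return 0 if $n % $d == 0;
    }
    return 1;
}

$divisions = 0;
my $p_small = is_prime_classical(101);    # 10 divisions (d = 2..11)
my $c_small = $divisions;

$divisions = 0;
my $p_big = is_prime_classical(9973);     # 99 divisions (d = 2..100)
my $c_big = $divisions;

print "101 prime: $p_small ($c_small divisions); ",
      "9973 prime: $p_big ($c_big divisions)\n";
```

Ten divisions for 101, ninety-nine for 9973: the work visibly scales with the size of $n.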
So my conclusion is: TMTOWTDI, fine. But I still like to choose the best way for me. People always advertise their method as being the coolest, but rarely as being the heaviest.
Of course, I am not insinuating that Damian does not know how fast his fantastic module is for such
operations as computing a minimum or finding prime numbers; and I know that he knows that the
<i>parallel processing</i> he's writing about is purely hypothetical. I'm saying that for a monk
without the appropriate theoretical programming background, these points are not obvious, and
could be misleading. After all, since Larry Wall himself tells us about <i>magic</i> in Perl,
why should we question the impossible?
I am suggesting that either his examples could be better chosen, or it could be explicitly
stated that they are not meant to be efficient in terms of execution time.
I would also extend this remark to many other modules (Quantum::Entanglement excluded) that do not state in their docs how fast they really are. Being cool is nice when I post on PerlMonks, 'cause, well, that makes me look cool. But for things I don't show to everyone, I really prefer listening to my Impatience, finding out what my code really does, and being fast.
So what about you?