Oh, I know this; I just think it's an interesting concept. I should have clarified: this would be fast if this is how the processor implemented it. I doubt Perl would implement this algorithm so cleanly. Also, I originally wrote something like this up in C.
I guess the whole point is, I know $a * $b is faster than multiply($a, $b), but who knows; if I have a subroutine:
sub mult2 {
    my ($a, $b) = @_;
    my $product = 0;
    $product += $a while $b--;   # repeated addition; assumes $b is a nonnegative integer
    return $product;
}
mine would be faster for larger numbers... I think. I could always be wrong; it's happened before :) I'll have to use Benchmark to test it ;)
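For what it's worth, a quick sketch of that Benchmark test might look like the following (the choice of operands and the setup are mine, not from the thread); `cmpthese` with a negative count runs each sub for at least that many CPU seconds and prints a comparison table:

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Repeated addition, as in mult2 above (nonnegative integer $n only).
sub mult2 {
    my ($x, $n) = @_;
    my $product = 0;
    $product += $x while $n--;
    return $product;
}

# Compare the builtin * against repeated addition for one sample pair.
cmpthese(-1, {
    builtin => sub { my $p = 1234 * 5678 },
    mult2   => sub { my $p = mult2(1234, 5678) },
});
```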
That would be slower. Much slower.

In fact, I don't need to look at Benchmark to say immediately that your algorithm is O($b), which is slow indeed. Multiplication is slower than addition in a chip because general multiplication is a complex operation. That said, multiplication is implemented efficiently. Repeated addition is much, much worse than the general routine, except for a small number of exceptions (e.g. 0, 1, 2).
You meant mult2 is slower, as in O($b), right? Because that's what I thought. The original algorithm shown, I believe, runs at O(log2 $b). As you said, there are certain occasions when multiplication is not so slow, such as times 0, 1, or 2; that's pretty much the pattern I found for this algorithm. I noticed (this was originally in C, mind you, and was purely conjectural) that instead of multiplying by 2, I can just shift the bits 1 to the left; to multiply by 4, just shift 2 bits to the left, etc. Using bitwise operators, I could exploit the speed of multiplying by powers of 2 (2**n is a shift of n to the left), which are very fast operations, so I would have to do far fewer additions than with the subroutine above. Anyway, I have the feeling I'm beating a dead horse here, so I think I'll leave it at that for now :)
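The shift-and-add idea described above (often called Russian peasant multiplication) can be sketched in Perl roughly like this; this is my reconstruction of the technique, not the original C code:

```perl
use strict;
use warnings;

# Shift-and-add multiplication: O(log2 $n) loop iterations instead of O($n).
# For each set bit of $n, add the correspondingly shifted copy of $x.
sub mult_shift {
    my ($x, $n) = @_;    # assumes nonnegative integers
    my $product = 0;
    while ($n) {
        $product += $x if $n & 1;  # low bit of $n set: add current shifted $x
        $x <<= 1;                  # double $x via a left shift
        $n >>= 1;                  # move to the next bit of $n
    }
    return $product;
}

print mult_shift(1234, 5678), "\n";  # same result as 1234 * 5678
```

The loop runs once per bit of $n, so the number of additions is bounded by the bit length of $n rather than its value, which is the speedup being described.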