You said in another post:

    "I'd like it to go as fast as possible and not restrict myself to a certain size, if possible."
How critical is the need for speed? And how far are you prepared to go to get it? Also, what platform are you on?
Your post made me revisit one of my back-burner projects from a couple of years ago, when I was playing with really big numbers and was disheartened by the lackadaisical performance of Math::Big*. From my assembler days, I remembered that many processors, including the entire x86 range, support (unpacked) BCD math opcodes, and I wondered how doing math with them would compare. The assembler was easy, but XS was a complete mystery to me at that point, so I abandoned it for future attention -- but not before proving that it ran rather faster than anything else I had tried from Perl.
The following only implements increment (add with a constant of 1) and will only work if you are set up for Inline::C with an MS compiler, but it shows rather dramatically how fast a full library could be. The assembler should be readily convertible to other compilers' inline-asm syntax on x86, so getting it to run on many Linux platforms shouldn't be onerous.
The difference in performance between the BCD opcodes and Perl's string increment is so dramatic that I had to weight the work done by the BCD version by a factor of 1000:1. Without this weighting, by the time the iteration count was high enough for the Benchmark module to stop complaining about too few iterations for the BCD version, the runtime for the magic-increment version was so long (days!) that it was longer than I was prepared to wait. So when the benchmark shows MAGIC as being 320% faster than BCD, the reality is that the BCD code runs 26700% faster. And when it says the BCD code is 2273% faster, it is really 2273000% faster!
Yes, I know those numbers seem ridiculous, but I've been sitting here for a couple of hours going over them, and I ran another benchmark with the internal multipliers set to the same value (1..1e4); the results seem to bear out my math:
c:\test>mathbcd
Comparing incrementing 100_000 digit numbers; 10,000 iterations.
(warning: too few iterations for a reliable count)
s/iter magic bcd
magic 1.66 -- -100%
bcd 6.20e-003 26740% --
1000000000 ... 0000099999
1000000000 ... 0000099999
Comparing incrementing 1_000_000 digit numbers; 10,000 iterations
(warning: too few iterations for a reliable count)
s/iter magic bcd
magic 17.4 -- -100%
bcd 9.40e-003 185072% --
1000000000 ... 0000099999
1000000000 ... 0000099999
Comparing incrementing 10_000_000 digit numbers; 10,000 iterations
(warning: too few iterations for a reliable count)
** The BCD finishes in a second or so, but I gave up waiting for the magic to finish after it had thrashed my machine for more than an hour **
If you doubt the results, please run your own tests :)
Now that I've got a handle on the XS memory shuffling, I may have a go at implementing the full set of + - / * operations and see how they compare. Anyone interested (particularly in writing a Linux version)?
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.