Cap'n Steve has asked for the wisdom of the Perl Monks concerning the following question:

I've been experimenting with really big numbers recently and Math::BigInt has been starting to get really slow. Since I'm only incrementing the number, I assume there's a faster way to do it. I've been looking at the bitwise operations, but I've never used them much and don't see a simple way to solve my problem. The rest of the program can safely treat the number as a string. Any ideas?

Replies are listed 'Best First'.
Re: Incrementing a large number without Math::BigInt
by Zaxo (Archbishop) on Apr 26, 2006 at 02:21 UTC

    If you load Math::BigInt like so,

    use Math::BigInt lib => 'GMP';
    the libgmp C routines will be used if available. Much faster.
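    A minimal sketch of what that looks like in practice (Math::BigInt warns and falls back to its default pure-Perl backend if Math::BigInt::GMP isn't installed, so this is safe to run either way):

```perl
use strict;
use warnings;

# Ask for the libgmp-backed library; if Math::BigInt::GMP is not
# available, Math::BigInt warns and falls back to the default backend.
use Math::BigInt lib => 'GMP';

my $n = Math::BigInt->new('9' x 40);
$n->binc;                       # in-place increment
print $n->bstr, "\n";           # 1 followed by 40 zeros
```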

    After Compline,
    Zaxo

Re: Incrementing a large number without Math::BigInt
by GrandFather (Saint) on Apr 26, 2006 at 02:13 UTC

    How big is big, and how slow is slow? Is there a limit to how big the numbers you wish to handle are? How about some sample code, an indication of how long it takes to execute on your system, and how much faster you need it to go?


    DWIM is Perl's answer to Gödel
      I'd like it to go as fast as possible and not restrict myself to a certain size, if possible. Doing some tests on my 400MHz laptop, it takes about 5 minutes to create a number with 10 million digits. I haven't tried incrementing it at all.
      use Math::BigInt;
      my $i = 10000000;
      while (1) {
          my $num = Math::BigInt->new('1' . '0' x $i);
          print $i . "\n";
          $i++;
      }
      Update: As a side note, Math::BigInt has some really weird memory usage. It jumps around like crazy, but my first test used about 50 megabytes on average while the second uses about 160.
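      As an aside on the construction cost above: most of the time goes into parsing the digit string into a BigInt. Once the object exists, repeated increments can use the in-place binc method instead of re-running the conversion. A small sketch (the 100-digit size is just for illustration):

```perl
use strict;
use warnings;
use Math::BigInt;

# Build the big number once, then increment in place with binc
# rather than repeating the expensive string -> BigInt conversion.
my $num = Math::BigInt->new('1' . '0' x 100);
$num->binc for 1 .. 5;          # five in-place increments
print $num->bstr, "\n";         # 1, ninety-nine zeros, then 5
```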

        You may be interested that:

        use strict;
        use warnings;
        my $i = 100000000;
        my $num = '9' x $i;
        $num++;
        print length ($num) . ' ' . substr $num, 0, 10;

        Prints:

        100000001 1000000000

        and executes in about 1 second.
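        The speed comes from Perl's magical string increment: a variable whose value matches /^[a-zA-Z]*[0-9]*$/ and has only ever been used in string context is incremented character by character, carrying from the right. A small sketch of the behaviour and its caveat (the numeric-context output below is approximate, so it isn't asserted):

```perl
use strict;
use warnings;

# Magical string increment: a pure-digit string used only in string
# context is incremented as a string, with exact carry propagation.
my $big = '9' x 30;
$big++;
print length($big), "\n";       # 31
print substr($big, 0, 5), "\n"; # 10000

# Caveat: once the value is used numerically, ++ becomes an ordinary
# floating-point increment and the low digits are silently lost.
my $forced = ('9' x 30) + 0;    # numeric context
$forced++;
print $forced, "\n";            # something like 1e+30, not exact
```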


        DWIM is Perl's answer to Gödel
Re: Incrementing a large number without Math::BigInt
by BrowserUk (Patriarch) on Apr 27, 2006 at 01:44 UTC

    You said in another post

    I'd like it to go as fast as possible and not restrict myself to a certain size, if possible

    How critical is the need for speed? And how far are you prepared to go to get it? Also, what platform are you on?

    Your post made me revisit one of my back-burner projects from a couple of years ago, when I was playing with really big numbers and was disheartened by the lackadaisical performance of Math::Big*. From my assembler days, I remembered that many processors, including the entire x86 range, support BCD (unpacked) math opcodes, and wondered how doing math with them would compare. The assembler was easy, but XS was a complete mystery to me at that point, so I shelved it for future attention -- but not before proving that it ran rather faster than anything else I had tried from Perl.

    The following only implements increment (add with a constant of 1) and will only work if you are set up for Inline::C with an MS compiler, but it shows rather dramatically how fast a full library could be. The assembler should be readily convertible to other compiler formats on x86, so porting it to many Linux platforms shouldn't be onerous.

    The difference in performance between the BCD opcodes and Perl's string increment is so dramatic that I had to scale the work done by the BCD version by a factor of 1000:1. Without this weighting factor, by the time the iteration count was high enough for the Benchmark module to stop complaining about too few iterations for the BCD version, the runtime for the magic-increment version was so long (days!) that it was longer than I was prepared to wait. So when the benchmark shows MAGIC as being 320% faster than BCD, the reality is that the BCD code runs 26700% faster. And when it says the BCD code is 2273% faster, it is really 2273000%!

    Yes, I know those numbers seem ridiculous, but I've been sitting here for a couple of hours going over them, and I ran another benchmark with the internal multipliers set to the same value (1..1e4), and the results produced seem to bear out my math:

    c:\test>mathbcd
    Comparing incrementing 100_000 digit numbers; 10,000 iterations.
    (warning: too few iterations for a reliable count)
              s/iter     magic       bcd
    magic       1.66        --     -100%
    bcd    6.20e-003    26740%        --
    1000000000 ... 0000099999
    1000000000 ... 0000099999

    Comparing incrementing 1_000_000 digit numbers; 10,000 iterations
    (warning: too few iterations for a reliable count)
              s/iter     magic       bcd
    magic       17.4        --     -100%
    bcd    9.40e-003   185072%        --
    1000000000 ... 0000099999
    1000000000 ... 0000099999

    Comparing incrementing 10_000_000 digit numbers; 10,000 iterations
    (warning: too few iterations for a reliable count)
    ** The BCD finishes in a second or so, but I gave up waiting for the
       magic to finish after it had thrashed my machine for more than an hour **

    If you doubt the results, please run your own tests :)
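    For anyone without a compiler handy, the algorithm the BCD opcodes implement -- propagate a decimal carry from the rightmost digit -- can also be sketched in pure Perl. The inc_decimal_string helper below is hypothetical (it is not the XS code from this post, and is far slower than the assembler), but it shows why increment is cheap: almost every call touches only the final digit, so the cost is amortized O(1) per increment.

```perl
use strict;
use warnings;

# Hypothetical helper: increment a decimal digit string by
# propagating the carry from the rightmost digit leftward.
sub inc_decimal_string {
    my $num = shift;
    my $i = length($num) - 1;
    while ($i >= 0 && substr($num, $i, 1) eq '9') {
        substr($num, $i, 1) = '0';   # 9 + 1 = 0, carry one place left
        $i--;
    }
    if ($i < 0) {
        $num = '1' . $num;           # all nines: 999 -> 1000
    } else {
        substr($num, $i, 1) = chr(ord(substr($num, $i, 1)) + 1);
    }
    return $num;
}

print inc_decimal_string('1299'), "\n";   # 1300
print inc_decimal_string('999'), "\n";    # 1000
```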

    Now I've got a handle on the XS memory shuffling, I may have a go at implementing the full + - / * and see how they compare. Anyone interested (particularly in writing a Linux version)?


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      I'm on Windows, and right now there's really nothing concrete involved. Like you, I'm just experimenting.

      Your code is a little beyond me since I haven't even gotten around to learning C, but the results are definitely impressive. I might have to start messing around with that, too.