
The slowdown comes in (at least) three stages -- perhaps more, depending upon which back-end is being used behind the bigint front of house.

  1. Overloading the operators adds one level of subroutine call to each math op (see the overload sketch below).
  2. The call into the back-end library adds a second.
  3. A third level comes from getting access to the memory in which the numbers are actually stored.

    With native opcodes, the value is at a fixed offset and the actual operations are essentially a single assembler instruction (processor opcode).

    With the libraries, you have the XS-to-C wrapping layer, then the library call, then dereferencing and casting to get access to the number hanging off the PV.

    And with arbitrary precision, or larger-than-register fixed precision, you have the condition testing to see whether the number spans multiple elements, and the loops needed to propagate carries, etc. (see the carry-propagation sketch below).
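To make the first point concrete, here is a minimal, hypothetical sketch -- the Tiny::Num package and its _add routine are invented for illustration and are nothing like Math::BigInt's real internals -- showing that once '+' is overloaded, every addition becomes a full Perl subroutine call before any actual arithmetic happens:

    #!/usr/bin/perl
    use strict;
    use warnings;

    package Tiny::Num;

    use overload '+' => \&_add;         # every '+' on a Tiny::Num is now a sub call

    sub new { my( $class, $v ) = @_; return bless \$v, $class }

    my $calls = 0;
    sub _add {
        my( $x, $y ) = @_;
        ++$calls;                       # count the extra call level
        return __PACKAGE__->new( $$x + ( ref( $y ) ? $$y : $y ) );
    }

    package main;

    my $n = Tiny::Num->new( 0 );
    $n = $n + 1 for 1 .. 1000;          # 1000 overloaded additions ...
    print "additions dispatched through _add(): $calls\n";   # ... 1000 sub calls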
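And for the third point: a pure-Perl back-end such as Math::BigInt::Calc keeps the value as an array of large decimal 'limbs' (the chunk size depends on the perl build), so even a single addition turns into a loop with explicit carry handling. The limbs_add routine below is a rough, invented illustration of that idea -- not the library's actual code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $BASE = 1_000_000_000;           # 9 decimal digits per 'limb' (illustrative)

    # Add two numbers stored as array refs of limbs, least significant first.
    sub limbs_add {
        my( $x, $y ) = @_;
        my @sum;
        my $carry = 0;
        my $n = @$x > @$y ? @$x : @$y;
        for my $i ( 0 .. $n - 1 ) {
            my $t = ( $x->[ $i ] // 0 ) + ( $y->[ $i ] // 0 ) + $carry;
            $carry = $t >= $BASE ? 1 : 0;
            push @sum, $carry ? $t - $BASE : $t;
        }
        push @sum, $carry if $carry;    # carry propagated past the last limb
        return \@sum;
    }

    # 999,999,999,999,999,999 + 1: the carry ripples through every limb.
    my $r = limbs_add( [ 999_999_999, 999_999_999 ], [ 1 ] );
    print join( ' ', reverse @$r ), "\n";   # limbs, most significant first: 1 0 0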

I was not leveling any criticism. If you need the precision, then the costs -- whatever they are -- simply have to be paid. But using bigint 'just in case', or as a 'cure' for perceived 'inaccuracies' in floating point, is the wrong way to go.
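For a rough feel of how much all of that adds up to, something along these lines can be used to measure it on any given machine. It is only a sketch: the absolute and relative numbers will vary enormously with the perl build and with which back-end -- pure-Perl Calc, GMP, Pari, ... -- is actually in use.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Benchmark qw( cmpthese );
    use Math::BigInt;       # pure-Perl Calc back-end unless lib => '...' says otherwise

    cmpthese( -3, {
        native => sub {
            my $sum = 0;
            $sum += $_ for 1 .. 1_000;
        },
        bigint => sub {
            my $sum = Math::BigInt->new( 0 );
            $sum += $_ for 1 .. 1_000;
        },
    } );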

