The slowdown comes in (at least) three stages; perhaps more, depending upon which back-end is being used behind the bigint front of house.
With native opcodes, the value is at a fixed offset and the actual operations are essentially a single assembler instruction (processor opcode).
With the libraries, you have the XS to C wrapping layer, then the library call, then dereferencing and casting to get access to the number hanging off the PV.
And with arbitrary precision and larger-than-register fixed precision, you have the condition testing to see if the number has multiple elements and the loops needed to propagate carries, etc.
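If you want to see that last stage for yourself, something like this rough sketch will do it. The exact internal layout depends on which back-end (Calc, GMP, Pari, ...) Math::BigInt is using on your box, but with the pure-Perl Calc back-end the value is not a single IV at a fixed offset; it is a blessed structure whose digits are split across multiple elements:

    use strict;
    use warnings;
    use Data::Dumper;
    use Math::BigInt;

    # A plain scalar holds a native integer directly in its IV slot.
    my $native = 1234567890;

    # A Math::BigInt is an object; with the pure-Perl back-end the digits
    # are spread across multiple elements that every operation must walk,
    # propagating carries between them.
    my $big = Math::BigInt->new( '123456789012345678901234567890' );

    # Contrast the two representations.
    print Dumper( $native, $big );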
I was not levelling any criticism. If you need the precision, then the costs -- whatever they are -- simply have to be paid. But using bigint 'just in case', or as a 'cure' for perceived 'inaccuracies' in floating point, is the wrong way to go.
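If anyone wants to put a number on what 'just in case' costs on their own box, a rough Benchmark sketch along these lines should make the gap obvious. The sub names and the loop are purely illustrative, and the ratios will vary with your perl build and whichever back-end is installed:

    use strict;
    use warnings;
    use Benchmark qw( cmpthese );

    # Same work twice: once with native integer arithmetic, once with the
    # bigint pragma in scope, which turns the literals and operators in
    # that block into Math::BigInt objects and method calls.
    sub native_sum {
        my $t = 0;
        my $i = 1;
        while ( $i <= 1000 ) {
            $t += $i * $i;
            $i++;
        }
        return $t;
    }

    sub bigint_sum {
        use bigint;                 # lexically scoped to this block
        my $t = 0;
        my $i = 1;
        while ( $i <= 1000 ) {
            $t += $i * $i;
            $i++;
        }
        return $t;
    }

    cmpthese( -3, {
        native => \&native_sum,
        bigint => \&bigint_sum,
    } );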