Um, no. I am not sure at all, hence the disclaimer.
I should have known that my 'reverse the magic', AOK optimisation idea was so obvious that it would already have been implemented long ago.
I did attempt to verify that bit, but I won't explain the (stupid) mistake that made me think I had confirmed it.
Suffice to say, even without the need to re-ascii-ify the numbers between math ops, just the process of addressing the values held in Perl arrays is costly. This is the cost of the infinite flexibility with which they can be grown, shrunk, used to hold any type (small t) of data, sliced, spliced, diced; created and thrown away with relative impunity. For most applications, this flexibility far outweighs the costs, but understanding the costs is the key to knowing when they are inappropriate for a given purpose. Math-intensive manipulation of large numerical datasets is one such case.
As C arrays are simply contiguous lumps of memory, looping over numeric arrays in C involves incrementing a pointer by a fixed integer value, loading one or two (64- or 32-bit) registers via the pointer, performing the fp-op, and then writing the result back via the same pointer.
A very tight, register-bound process. Even if two arrays are being processed in parallel, it's still quite likely that all the intermediate terms and pointers can be kept in registers on most processors.
The equivalent process using Perl's arrays is considerably more involved, requiring lots of pointer chasing, flag waving and (potentially) expensive format-conversion. This is neither a surprise, nor a burden in the majority of programs, but knowing that this is what is involved is key to understanding what things perl is really good at doing and what things it is less-than-optimal for.
In reply to Re: Re^4: Confirming what we already knew
by BrowserUk
in thread Confirming what we already knew
by AssFace