That makes sense, although it would be an interesting exercise to compare both algorithms, written in C, with a good optimising compiler (Intel's on x86 for example).
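Purely as a sketch of what the single-pass C version might look like (a hypothetical loop tracking min, max and a running sum; not taken from either implementation in the thread, and the function name stats is made up):

    #include <stdio.h>
    #include <stddef.h>

    /* Single pass over the data: track min, max and a running sum,
       then derive the average. Assumes n >= 1. */
    static void stats(const double *a, size_t n,
                      double *min, double *max, double *avg)
    {
        double lo = a[0], hi = a[0], sum = 0.0;
        for (size_t i = 0; i < n; i++) {
            if (a[i] < lo) lo = a[i];
            if (a[i] > hi) hi = a[i];
            sum += a[i];
        }
        *min = lo;
        *max = hi;
        *avg = sum / (double)n;
    }

    int main(void)
    {
        double data[] = { 3.0, 1.5, 4.0, 1.0, 5.0 };
        double lo, hi, avg;
        stats(data, sizeof data / sizeof *data, &lo, &hi, &avg);
        printf("min=%g max=%g avg=%g\n", lo, hi, avg);
        return 0;
    }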
With FP units now ubiquitous on modern CPUs, the cost difference between FP and integer operations has narrowed considerably, especially when pipelining can be used to good advantage.
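A rough way to get a feel for that gap is to time a chain of integer adds against a chain of double adds. This is only a sketch, not a proper benchmark: a real comparison would need to control for optimisation level, warm-up, and whether the compiler can vectorise or eliminate the loops.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define N 100000000ULL

    int main(void)
    {
        /* volatile sinks keep the loops from being optimised away;
           both loops are dependent chains, so this measures latency
           more than throughput. */
        volatile int64_t isum = 0;
        volatile double  fsum = 0.0;

        clock_t t0 = clock();
        for (uint64_t i = 0; i < N; i++) isum += (int64_t)i;
        clock_t t1 = clock();
        for (uint64_t i = 0; i < N; i++) fsum += (double)i;
        clock_t t2 = clock();

        printf("integer adds: %.2fs (sum %lld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC, (long long)isum);
        printf("double adds:  %.2fs (sum %g)\n",
               (double)(t2 - t1) / CLOCKS_PER_SEC, (double)fsum);
        return 0;
    }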
I've read some articles that make the case for dropping the distinctions between integer, float, and double in programming languages and just using the FP processor's native size (80-bit on Intel) for all program-level numerical quantities. The slight drop in performance for heavy integer math can be more than compensated for by removing all the decision points: what type of number is this? Does it need to be extended? Will it/did it overflow? etc.
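What makes that feasible is that the x87 80-bit extended format carries a 64-bit significand, so it can hold 64-bit integer values exactly, where a 53-bit double cannot. A small illustration, assuming a compiler where long double maps to that format (gcc/clang on x86; MSVC makes long double the same as double):

    #include <stdio.h>
    #include <stdint.h>
    #include <float.h>

    int main(void)
    {
        printf("long double mantissa bits: %d\n", LDBL_MANT_DIG); /* 64 on x87 */
        printf("double      mantissa bits: %d\n", DBL_MANT_DIG);  /* 53 */

        uint64_t big = (1ULL << 62) + 1;   /* needs 63 significant bits */
        double      d  = (double)big;
        long double ld = (long double)big;

        /* The double rounds; the 80-bit format holds the value exactly. */
        printf("as double:      %.0f\n",  d);
        printf("as long double: %.0Lf\n", ld);
        return 0;
    }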
Perl threw the float away years ago; why not bin the (internal) integers as well and make full use of the hardware's FP precision, saving all the conversions that take place between 64-bit doubles and the 80-bit internal format? (A small sketch of what that extra precision buys is below.)
Makes perfect sense to me.
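For example, accumulating in the 80-bit format and only narrowing at the end keeps more of the hardware's precision than working in 64-bit doubles throughout. This is a hypothetical C illustration of that precision argument, not a description of how Perl actually stores its NVs:

    #include <stdio.h>

    int main(void)
    {
        const long n = 10000000;
        double dsum = 0.0;        /* 64-bit accumulator */
        long double ldsum = 0.0L; /* 80-bit accumulator (on x87 platforms) */

        for (long i = 0; i < n; i++) {
            dsum  += 0.1;
            ldsum += 0.1;
        }

        /* The wider accumulator gives an average much closer to 0.1. */
        printf("double accumulator:      %.17g\n",  dsum / n);
        printf("long double accumulator: %.17Lg\n", ldsum / n);
        return 0;
    }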
In reply to Re^7: minimum, maximum and average of a list of numbers at the same time by BrowserUk
in thread minimum, maximum and average of a list of numbers at the same time by LucaPette