But as I was playing with some code in Snippets (file/language entropy calculator), I was surprised to find that the precision limits seem to vary from one run to the next -- or rather, the behavior of floating-point arithmetic beyond the precision limit was not constant, as I would have expected.
Both the OP's code in that Snippet thread and my trimmed-down version of same seem to produce different values on successive runs over the same input text file. Here are a few sample output values from repeating the same computation on the same input (this is running on a G4 PowerBook with Mac OS X 10.3.6, Darwin kernel version 7.6.0, perl v5.8.1-RC3 built for darwin-thread-multi-2level):
    5.35423847163199795318
    5.35423847163199884136
    5.35423847163199972954
    5.35423847163200150590
    5.35423847163200239407

and if I run it more times, I get more distinct values.
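For reference, here is a rough sketch of the kind of per-character entropy computation involved -- this is my own reconstruction, not the code from the Snippets thread, and the file handling and variable names are invented:

    #!/usr/bin/perl
    # Rough sketch of a per-character entropy calculation over one file.
    # (My own reconstruction -- not the posted code; names are made up.)
    use strict;
    use warnings;

    my %count;
    my $total = 0;
    open( my $fh, '<', $ARGV[0] ) or die "can't open $ARGV[0]: $!";
    while ( read( $fh, my $buf, 4096 ) ) {
        for my $ch ( split //, $buf ) {
            $count{$ch}++;
            $total++;
        }
    }
    close $fh;

    # Shannon entropy: sum of -p * log2(p) over the observed characters.
    my $entropy = 0;
    for my $ch ( keys %count ) {
        my $p = $count{$ch} / $total;
        $entropy -= $p * log($p) / log(2);
    }
    printf "%.20f\n", $entropy;

Running something along those lines repeatedly on the same file is what produced the list of values above.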
Now, it's obvious that no one should be paying attention to more than 12 or 13 digits in terms of floating-point accuracy (so wufnik should adjust his ideas about "printf" formatting), but I'm wondering whether this variability is observed on other systems.
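For example, taking the first and last values from the list above, limiting printf to 13 significant digits hides the variation entirely:

    $ perl -e 'printf "%.13g\n%.13g\n", 5.35423847163199795318, 5.35423847163200239407'
    5.354238471632
    5.354238471632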
I did take the time to check the same code and data on solaris/sparc (perl 5.5.3) and freebsd (i386 and amd64, perl 5.8.5) -- none of them showed variable behavior like I saw on the mac. The solaris/sparc box always came up with one answer, which happened to match one of the answers on the mac; the i386 and amd64 freebsd boxes both came up with a consistent single answer every time, which differed from the sparc answer and matched another of the mac's results.
So it smells like a mac/G4 issue, rather than a perl issue. But it struck me as noteworthy, and I'd be interested in others' reactions.