The number of significant digits depends on the calculations you're doing with a set of input numbers. Essentially you'll have to keep track of the errors (relative and/or absolute) that accumulate during a calculation.
Two simple examples:
- 5.02 (+/- 0.01) + 3.1 (+/- 0.1) = 8.1 (+/- 0.11)
- 5.02 (+/- 0.01, 0.2%) * 3.1 (+/- 0.1, 3.2%) = 15.6 (+/- 0.5, 3.4%)
where the (+/- 0.01) is the absolute error and the 0.2% the relative error (expressed in %). Note that the sum is given as 8.1 rather than 8.12 since with an error of 0.11 there's no point in showing that many decimals (they're not reliable). Similarly for the product, which works out to 15.562: the last two decimals can't be trusted given an error margin of roughly 3% (about 0.5 in absolute terms).
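Here's a minimal sketch in Python (the helper names add and mul are just for illustration) that tracks a value together with its absolute error, applying the two rules listed just below:

```python
def add(x, dx, y, dy):
    """Sum: the absolute errors of the terms add."""
    return x + y, dx + dy

def mul(x, dx, y, dy):
    """Product: the relative errors of the factors add (to first order)."""
    z = x * y
    rel = dx / abs(x) + dy / abs(y)
    return z, abs(z) * rel

s, ds = add(5.02, 0.01, 3.1, 0.1)
print(f"sum     = {s:.2f} +/- {ds:.2f}")    # sum     = 8.12 +/- 0.11

p, dp = mul(5.02, 0.01, 3.1, 0.1)
print(f"product = {p:.3f} +/- {dp:.2f}")    # product = 15.562 +/- 0.53
```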
- The absolute error of the sum of two numbers is the sum of the absolute errors of the terms.
- The relative error of the product of two numbers is the sum of the relative errors of the factors.
These two simple rules allow a complete analysis of all simple cases. Note that this implies that errors propagate, i.e. the number of significant digits can never increase. For cases involving mathematical functions such as sqrt or the trigonometric functions, the platform-specific docs should be consulted (or the appropriate IEEE specs on numbers).
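For a differentiable function f, the standard first-order rule is that the absolute error of f(x) is approximately |f'(x)| * dx. A small sketch under that assumption (propagate is a made-up helper; it uses a central-difference derivative purely for illustration):

```python
import math

def propagate(f, x, dx, h=1e-6):
    """Estimate f(x) and its absolute error via |f'(x)| * dx,
    with f'(x) approximated by a central difference."""
    deriv = (f(x + h) - f(x - h)) / (2 * h)
    return f(x), abs(deriv) * dx

y, dy = propagate(math.sqrt, 5.02, 0.01)
print(f"sqrt(5.02) = {y:.4f} +/- {dy:.4f}")  # sqrt(5.02) = 2.2405 +/- 0.0022
```

For sqrt in particular this reproduces the known result that the relative error is halved, since d(sqrt(x))/sqrt(x) = dx/(2x).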
You'll find a treatment of these concepts in any good book on numerical methods; Numerical Recipes in C is available online and not too bad. (Specifically, check out this chapter.)
Hope this helps, -gjb-
Update: Excellent pointer by davis in the node below.