in reply to Re: Internal Rate of Return
in thread Internal Rate of Return

Thanks tilly.

I was surprised that ACM/493 is quite short (only 721 lines of Fortran). I'll have a go at translating it.

I am assuming that precision will significantly affect stability/convergence. The article you referred to has this advice:

The use of a robust numerical program, based upon sound theory and an excellent algorithm, and coded to thoroughly deal with computer round-off errors, is the recommended action.

I am concerned that the last bit of their requirement will be difficult to satisfy. I haven't had any reason to deal with such issues since University. It will be challenging and interesting to see what happens.

Re^3: Internal Rate of Return
by tilly (Archbishop) on May 20, 2009 at 00:45 UTC
    Try to do the translation in pieces, and expect to spend time running the Fortran in a debugger while you track down your mistakes.

    The reason I suggested real arithmetic only is that you can then use Math::BigFloat, which lets you do arbitrary-precision floating-point arithmetic. That lets you "dial up" the precision of your calculations until you reach a point at which the factors multiply out to within an acceptable tolerance, and increasing the precision further doesn't change the answer significantly. If both of those conditions hold, then you've got extremely good evidence that you have, indeed, dealt with the precision issues and found the roots very accurately.
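
    For instance, here's a rough sketch of that check using Math::BigFloat (the sub name expand_factors and the sample factors are just illustrative, not from ACM/493); each factor is one of the real linear or quadratic factors your root-finder produced:

        use strict;
        use warnings;
        use Math::BigFloat;

        sub expand_factors {
            my ($digits, @factors) = @_;
            Math::BigFloat->accuracy($digits);   # significant digits for all new numbers

            # Coefficients are stored lowest degree first; start with the
            # constant polynomial 1.
            my @poly = (Math::BigFloat->new(1));
            for my $f (@factors) {
                # Each factor is an arrayref of its coefficients, lowest degree
                # first, e.g. [ -$root, 1 ] for the linear factor (x - root).
                my @next = map { Math::BigFloat->new(0) } 1 .. (@poly + @$f - 1);
                for my $i (0 .. $#poly) {
                    for my $j (0 .. $#$f) {
                        $next[$i + $j] += $poly[$i] * Math::BigFloat->new($f->[$j]);
                    }
                }
                @poly = @next;
            }
            return @poly;
        }

        # Reconstruct (x - 2)(x^2 - 2x + 5) at 40 digits, then at 80 digits,
        # and compare the two runs against the original coefficients.
        for my $digits (40, 80) {
            my @coeffs = expand_factors($digits, [ -2, 1 ], [ 5, -2, 1 ]);
            print "$digits digits: ", join(', ', @coeffs), "\n";
        }

    If the reconstructed coefficients match your original polynomial at both settings, and the roots themselves barely move between the two runs, you're in good shape.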

    Note that neither condition by itself is sufficient.

    If your polynomial has a repeated root, then a numerical solution whose roots are each somewhat close to that repeated root can multiply out to something very, very close to your polynomial, even though each root is fairly far off. For instance, suppose the polynomial is (x-1)^4, but numerically you came up with roots of 0.99, 1.01, 1+0.01i, 1-0.01i. When you multiply those factors out you get the right coefficients to within 10^-8, yet the roots are off by 0.01! However, if you redo the calculation at a higher precision, you should notice the roots moving around a lot.
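
    Here's a quick Perl check of that example; the two complex roots pair up into a real quadratic, so plain doubles are enough:

        use strict;
        use warnings;

        # (x - 0.99)(x - 1.01)               = x^2 - 2x + 0.9999
        # (x - (1 + 0.01i))(x - (1 - 0.01i)) = x^2 - 2x + 1.0001
        my @a = (0.9999, -2, 1);    # coefficients, lowest degree first
        my @b = (1.0001, -2, 1);

        # Multiply the two quadratics together.
        my @product = (0) x (@a + @b - 1);
        for my $i (0 .. $#a) {
            for my $j (0 .. $#b) {
                $product[$i + $j] += $a[$i] * $b[$j];
            }
        }

        # Compare against (x - 1)^4 = x^4 - 4x^3 + 6x^2 - 4x + 1.
        my @exact = (1, -4, 6, -4, 1);
        for my $k (0 .. $#exact) {
            printf "x^%d: got %.10f, exact %2d, error %.1e\n",
                $k, $product[$k], $exact[$k], $product[$k] - $exact[$k];
        }

    Algebraically, only the constant term differs, and only by 10^-8, even though every root is 0.01 away from 1.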

    If you run your algorithm and it looks like it is producing numbers, it would be really, really easy for those to simply be wrong numbers due to a bug you didn't track down. Multiplying the roots back out is an excellent sanity check.
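
    A rough sketch of that sanity check, using Math::Complex so it can take the roots exactly as the solver hands them back (the helper name worst_coefficient_error is just illustrative):

        use strict;
        use warnings;
        use Math::Complex;

        # Multiply the factors (x - root) back out and report the worst
        # discrepancy against the original coefficients.
        sub worst_coefficient_error {
            my ($coeffs, $roots) = @_;    # coefficients lowest degree first

            # Start from the leading coefficient so the scale matches.
            my @poly = ($coeffs->[-1]);
            for my $r (@$roots) {
                # Multiply the current polynomial by (x - $r).
                my @next = (0) x (@poly + 1);
                for my $i (0 .. $#poly) {
                    $next[$i]     -= $poly[$i] * $r;
                    $next[$i + 1] += $poly[$i];
                }
                @poly = @next;
            }

            my $worst = 0;
            for my $i (0 .. $#$coeffs) {
                my $err = abs($poly[$i] - $coeffs->[$i]);
                $worst = $err if $err > $worst;
            }
            return $worst;
        }

        # Usage: x^2 - 2x + 2 has roots 1 +/- i.
        my @coeffs = (2, -2, 1);
        my @roots  = (cplx(1, 1), cplx(1, -1));
        printf "worst coefficient error: %g\n",
            worst_coefficient_error(\@coeffs, \@roots);

    Math::Complex here is only for the check; the ACM/493 translation itself can stick to real arithmetic as suggested above.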