in reply to Internal Rate of Return

Be warned: there is no method of finding roots for polynomials of degree up to a few hundred that won't run into some numerical stability trouble.

You could try Math::Polynomial::Solve. Unfortunately it will usually require you to use Math::Complex, which means that you can't do arbitrary precision arithmetic. This may be a problem if accuracy is an issue. I'm going to guess that it is.
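For reference, here is a minimal sketch of how Math::Polynomial::Solve gets used (the example polynomial x^2 - 3x + 2 is mine; coefficients are passed highest-degree term first, and roots may come back as plain numbers or Math::Complex objects depending on the polynomial):

```perl
use strict;
use warnings;
use Math::Polynomial::Solve qw(poly_roots);
use Math::Complex;

# Roots of x^2 - 3x + 2, i.e. x = 1 and x = 2.
# Coefficients are listed from the highest-degree term down.
my @roots = poly_roots(1, -3, 2);

# A root may be a plain number or a Math::Complex object, so the
# safest check is to substitute it back into the polynomial.
for my $r (@roots) {
    my $residual = $r**2 - 3*$r + 2;
    printf "root %s, |p(root)| = %g\n", $r, abs($residual);
}
```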

Googling around, I found another promising approach outlined at http://math-blog.com/2008/03/06/polynomial-root-finding-with-the-jenkins-traub-algorithm/. That article links to a JavaScript implementation at http://www.akiti.ca/PolyRootRe.html. What is nice about that solution is that it uses only real arithmetic, so you're able to use Math::BigFloat to address precision issues.
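The payoff of real-only arithmetic is that every intermediate value can be a Math::BigFloat. For instance, evaluating a polynomial by Horner's rule at 50-digit accuracy looks like this sketch (the helper name and coefficient order are my choices):

```perl
use strict;
use warnings;
use Math::BigFloat;

Math::BigFloat->accuracy(50);    # work to 50 significant digits

# Evaluate a polynomial by Horner's rule; coefficients are given
# from the highest-degree term down.
sub poly_eval {
    my ($x, @coeff) = @_;
    my $y = Math::BigFloat->new(0);
    $y = $y * $x + Math::BigFloat->new($_) for @coeff;
    return $y;
}

# p(x) = x^3 - 2x + 5 evaluated at x = 1.5
my $x = Math::BigFloat->new('1.5');
print poly_eval($x, 1, 0, -2, 5), "\n";
```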

In your situation I would be inclined to try to translate the JavaScript solution into a Perl one. (I'd prefer that over the FORTRAN because a lot of the work in structuring the program has been done for you. Plus I find JavaScript easier to read.) I'd personally start with a semi-automated translation, and fix up issues by hand until I began getting the same results. If you don't know much about parsing, this won't work for you. Instead you could translate one routine at a time, and throw data at each to verify your translation before going on. (Expect to have a few sessions where you're getting drastically different results and have no idea why...)

Once you have a Perl translation, you can play around with Math::BigFloat to find the right precision to work at. I personally would aim for a precision at which the computed roots, multiplied back out, reproduce the original polynomial to within some tolerance, and at which doubling the precision doesn't change the calculated roots by more than a specified tolerance.
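The first of those checks — multiplying the computed roots back out and comparing coefficients — might look like this sketch (real roots and a monic polynomial only, for brevity; poly_from_roots is my own helper):

```perl
use strict;
use warnings;
use Math::BigFloat;

Math::BigFloat->accuracy(40);

# Expand prod (x - r_i) into coefficients, highest degree first.
# Assumes a monic polynomial with real roots.
sub poly_from_roots {
    my @coeff = (Math::BigFloat->new(1));
    for my $r (@_) {
        my $br = Math::BigFloat->new($r);
        # Multiply the running polynomial by (x - r): shift in a
        # zero for the x multiple, then subtract r times each term.
        push @coeff, Math::BigFloat->new(0);
        for (my $i = $#coeff; $i >= 1; $i--) {
            $coeff[$i] -= $br * $coeff[$i - 1];
        }
    }
    return @coeff;
}

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
my @got      = poly_from_roots(1, 2, 3);
my @expected = (1, -6, 11, -6);
for my $i (0 .. $#expected) {
    print "coefficient $i: got $got[$i], error ",
          abs($got[$i] - $expected[$i]), "\n";
}
```

In the real check you would compare @got against the polynomial you started from and reject the roots if any coefficient is off by more than your tolerance.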

Be warned that this is likely to be a fair amount of work. But it will be less work than trying to come up with something on your own, and it will be more likely to give correct answers.

Re^2: Internal Rate of Return
by ig (Vicar) on May 19, 2009 at 23:56 UTC

    Thanks tilly.

    I was surprised that ACM/493 is quite short (only 721 lines of Fortran). I'll have a go at translating it.

    I am assuming that precision will significantly affect stability/convergence. The article you referred to has this advice:

    The use of a robust numerical program, based upon sound theory and an excellent algorithm, and coded to thoroughly deal with computer round-off errors, is the recommended action.

    I am concerned that the last bit of their requirement will be difficult to satisfy. I haven't had any reason to deal with such issues since University. It will be challenging and interesting to see what happens.

      Try to do the translation in pieces, and expect to spend time running the Fortran in a debugger as you track down your mistakes.

      The reason for my suggesting real arithmetic only is that you can then use Math::BigFloat, which allows you to do arbitrary precision floating point arithmetic. This allows you to "dial up" the precision of your calculations until you find a point at which the factors multiply out to an acceptable tolerance, and increasing the precision doesn't change the answer significantly. If both of these statements are true, then you've got extremely good evidence that you have, indeed, dealt with the precision issues and found the roots very accurately.
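To see what "dialing up" looks like in practice, here is a sketch that polishes a single root with Newton's method at one Math::BigFloat accuracy, then again at double the accuracy, and compares the results (the polynomial x^2 - 2 and the starting guess are mine):

```perl
use strict;
use warnings;
use Math::BigFloat;

# Polish sqrt(2) (the root of f(x) = x^2 - 2) with Newton's method
# at a given number of significant digits.
sub newton_sqrt2 {
    my ($digits) = @_;
    Math::BigFloat->accuracy($digits);
    my $x = Math::BigFloat->new('1.5');
    # Newton step: x <- x - f(x)/f'(x).  Quadratic convergence means
    # far fewer than 50 iterations are actually needed; the rounding
    # at the chosen accuracy makes the iteration settle to a fixed point.
    $x = $x - ($x * $x - 2) / (2 * $x) for 1 .. 50;
    return $x;
}

my $lo = newton_sqrt2(20);
my $hi = newton_sqrt2(40);
print "at 20 digits: $lo\n";
print "at 40 digits: $hi\n";
print "difference:   ", abs($hi - $lo), "\n";
```

If the two answers agree to roughly the lower accuracy, that is the "doubling the precision doesn't change the answer" condition from above.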

      Note that neither condition by itself is sufficient.

      If your polynomial has a repeated root, then a numerical solution with multiple roots somewhat close to that repeated root can multiply out to something very, very close to your polynomial, even though the roots themselves are fairly far off. For instance, suppose the polynomial is (x-1)^4, but numerically you came up with roots of 0.99, 1.01, 1+0.01i, 1-0.01i. When you multiply those out you've got the right coefficients to within 10^-8, but the roots are off by 0.01! However, if you redo the calculation at a higher precision, you should notice the roots moving around a lot.
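That example is easy to check numerically. Multiplying those four perturbed roots back out (using Math::Complex for the conjugate pair) really does reproduce the coefficients of (x-1)^4 to within about 10^-8 — this sketch just demonstrates the claim:

```perl
use strict;
use warnings;
use Math::Complex;

my @roots = (0.99, 1.01, cplx(1, 0.01), cplx(1, -0.01));

# Expand prod (x - r) into coefficients, highest degree first,
# by repeatedly multiplying the running polynomial by (x - r).
my @c = (1);
for my $r (@roots) {
    push @c, 0;
    for (my $i = $#c; $i >= 1; $i--) {
        $c[$i] -= $r * $c[$i - 1];
    }
}

# (x-1)^4 = x^4 - 4x^3 + 6x^2 - 4x + 1
my @target = (1, -4, 6, -4, 1);
for my $i (0 .. 4) {
    printf "coefficient %d: off by %g\n", $i, abs($c[$i] - $target[$i]);
}
```

The constant term comes out to 0.99999999 — off by only 10^-8 — while every root is off by 0.01.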

      If you're running your algorithm and it appears to be producing numbers, it would be really, really easy for it to simply be producing wrong numbers due to a bug you didn't track down. Multiplying the roots back out is an excellent sanity check.