Re: Internal Rate of Return
by tilly (Archbishop) on May 19, 2009 at 12:43 UTC
Be warned: there is no method of finding roots of polynomials with degrees up to a few hundred that won't have some trouble with numerical stability.
You could try Math::Polynomial::Solve. Unfortunately it will usually require you to use Math::Complex, which means that you can't do arbitrary precision arithmetic. This may be a problem if accuracy is an issue. I'm going to guess that it is.
Googling around, another promising approach is outlined at http://math-blog.com/2008/03/06/polynomial-root-finding-with-the-jenkins-traub-algorithm/. That article links to a JavaScript implementation at http://www.akiti.ca/PolyRootRe.html. What is nice about that solution is that it only uses real arithmetic, so you're able to use Math::BigFloat to address precision issues.
In your situation I would be inclined to try to translate the JavaScript solution into a Perl one. (I'd prefer that over the FORTRAN because a lot of the work in structuring the program has been done for you. Plus I find JavaScript easier to read.) I'd personally start with a semi-automated translation, and fix up issues by hand until I began getting the same results. If you don't know much about parsing, this won't work for you. Instead you could translate one routine at a time, and throw data at each to verify your translation before going on. (Expect to have a few sessions where you're getting drastically different results and have no idea why...)
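If you go routine by routine, a tiny test file per routine keeps you honest. Something like the sketch below, where quadratic_solve() is a hypothetical name for one translated routine and the expected values are whatever the JavaScript or Fortran original prints for the same inputs:

    use strict;
    use warnings;
    use Test::More;

    # quadratic_solve() stands in for one translated routine; the expected
    # values are captured from the reference implementation on the same input.
    my @got      = quadratic_solve(1, -4, 1);              # x^2 - 4x + 1
    my @expected = (3.73205080756888, 0.267949192431123);  # 2 +/- sqrt(3)

    for my $i (0 .. $#expected) {
        cmp_ok(abs($got[$i] - $expected[$i]), '<', 1e-9,
               "root $i matches the reference implementation");
    }
    done_testing();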
Once you have a Perl translation, you can play around with Math::BigFloat to find the right precision to work at. I personally would aim for a precision at which the computed roots, multiplied back out, reproduce the original polynomial to some tolerance, and at which doubling the precision doesn't change the calculated roots by more than a specified tolerance.
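For what it's worth, here is roughly the shape of that precision-doubling check. find_roots() is a stand-in for whatever translation you end up with; it is assumed to return Math::BigFloat roots in a consistent order, computed at the current class accuracy:

    use strict;
    use warnings;
    use Math::BigFloat;

    # Returns true if doubling the working precision moves no root by more
    # than $tol.  find_roots(\@coeffs) is the hypothetical solver wrapper.
    sub roots_stable_at {
        my ($coeffs, $digits, $tol) = @_;

        Math::BigFloat->accuracy($digits);
        my @lo = find_roots($coeffs);

        Math::BigFloat->accuracy(2 * $digits);
        my @hi = find_roots($coeffs);

        for my $i (0 .. $#lo) {
            return 0 if ($hi[$i] - $lo[$i])->babs > $tol;
        }
        return 1;
    }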
Be warned that this is likely to be a fair amount of work. But it will be less than trying to come up with something on your own, and it will be more likely to give correct answers.
Thanks tilly.
I was surprised that ACM/493 is quite short (only 721 lines of Fortran). I'll have a go at translating it.
I am assuming that precision will significantly affect stability/convergence. The article you referred to has this advice:
The use of a robust numerical program, based upon sound theory and an excellent algorithm, and coded to thoroughly deal with computer round-off errors, is the recommended action.
I am concerned that the last bit of their requirement will be difficult to satisfy. I haven't had any reason to deal with such issues since University. It will be challenging and interesting to see what happens.
Try to do the translation in pieces, and expect to spend time running the Fortran in a debugger as you're tracking down your mistakes.
The reason for my suggesting real arithmetic only is that you can then use Math::BigFloat, which allows you to do arbitrary precision floating point arithmetic. This allows you to "dial up" the precision of your calculations until you find a point at which the factors multiply out to an acceptable tolerance, and increasing the precision doesn't change the answer significantly. If both of these statements are true, then you've got extremely good evidence that you have, indeed, dealt with the precision issues and found the roots very accurately.
Note that neither condition by itself is sufficient.
If your polynomial has a repeated root, then numerical solutions with multiple roots that are somewhat close to that repeated root can multiply out to something very, very close to your polynomial, even though the roots themselves are fairly far off. For instance suppose the polynomial is (x-1)^4, but numerically you came up with roots of 0.99, 1.01, 1+0.01i, 1-0.01i. When you multiply it out you've got the right coefficients to within 10^-8, but the roots are off by 0.01! However if you redo the calculation at a higher precision, you should notice the roots moving around a lot.
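Multiplying that example out explicitly:

    (x - 0.99)(x - 1.01)\,(x - (1 + 0.01i))(x - (1 - 0.01i))
      = ((x-1)^2 - 10^{-4})\,((x-1)^2 + 10^{-4})
      = (x-1)^4 - 10^{-8}

so every coefficient agrees with (x-1)^4 except the constant term, which is off by only 10^-8, even though every root is wrong by 0.01.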
If you're running your algorithm and it looks like it is providing numbers, it would be really, really easy for you to be simply producing wrong numbers due to a bug you didn't track down. Multiplying it out is an excellent sanity check.
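A sketch of that check, sticking to the all-real-roots case for brevity (conjugate pairs of complex roots would need the same idea with a little extra bookkeeping). The coefficients are assumed to run from the constant term up and to describe a monic polynomial:

    use strict;
    use warnings;
    use Math::BigFloat;

    # Multiply out the monic factors (x - r) for a list of real
    # Math::BigFloat roots and compare against the original coefficients.
    sub roots_reproduce_poly {
        my ($roots, $coeffs, $tol) = @_;

        my @poly = (Math::BigFloat->new(1));    # the constant polynomial 1
        for my $r (@$roots) {
            # Multiply the current polynomial by (x - r).
            my @next = map { Math::BigFloat->bzero() } 0 .. scalar @poly;
            for my $i (0 .. $#poly) {
                $next[$i]     -= $poly[$i] * $r;    # the "- r" part
                $next[$i + 1] += $poly[$i];         # the "x" part
            }
            @poly = @next;
        }

        for my $i (0 .. $#poly) {
            return 0 if ($poly[$i] - $coeffs->[$i])->babs > $tol;
        }
        return 1;
    }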
Re: Internal Rate of Return
by pajout (Curate) on May 19, 2009 at 19:39 UTC
Having no idea how to prove it right now, I am not absolutely sure...
Anyway, it may depend on the payment signs. If just one payment is negative and the remaining ones are positive (or the other way around), there is, I guess, just one IRR solution. Do you really need to handle more irregular payments?
Hmm. 1 - 4x + x^2 has 2 roots, 2 +/- sqrt(3), which are about 0.267949192431123 and 3.73205080756888.
But a pair of related statements are true.
The simpler one to prove is that if the constant term is negative and the other coefficients are nonnegative (with at least one positive), then there is only one positive real root. Existence is easy: the polynomial is negative at zero and eventually positive. Uniqueness falls out of the fact that the derivative is positive on the positive reals, so the function is monotone increasing there. Therefore it can only be zero once for positive x.
The harder variation is that if the largest power has a negative coefficient and the others are non-negative then there is only one positive real root. Proving this directly gets messy. The short solution is to substitute y = 1/x and then multiply by a high enough power of y to get back to the former case with a negative constant coefficient and all of the others non-negative. This mapping is a bijection on the positive reals, and we are back to the easier statement to prove.
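Written out for the cash-flow version of the easier case (a sketch; signs chosen so that the single outflow is the constant term and at least one later payment is positive):

    p(x) = c_0 + c_1 x + \cdots + c_n x^n, \qquad c_0 < 0,\ c_1, \dots, c_n \ge 0,\ \text{some } c_i > 0

    p(0) = c_0 < 0, \qquad p(x) \to +\infty \text{ as } x \to \infty, \qquad
    p'(x) = \sum_{i=1}^{n} i\, c_i\, x^{i-1} > 0 \text{ for } x > 0

The sign change from p(0) < 0 to p(x) -> +infinity gives existence of a positive root, and the positive derivative gives uniqueness on the positive reals; the reversed sign pattern reduces to this case via the y = 1/x substitution above.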
If you're trying to model something like an investment in a bond, this is sufficient to prove uniqueness. However with more complicated cash flows, you've got a complicated problem.
I have a series of irregular negative payments followed by one positive payment. I would reorganize the data somewhat if I could solve the more general case where some of the intermediate payments were also positive, but this isn't necessary.
I hadn't suspected, until a few days ago, that this was a difficult problem. All the solutions I have looked at so far recognize that they will not always provide an answer, even when there is one. They generally say little or nothing about when there is or isn't a solution or the circumstances under which they will fail to find an existing answer. It would be helpful to know more about such constraints.
Unfortunately, I am not a mathematician and have difficulty understanding many of the texts I have found. What I need are some simple guidelines and constraints to go along with the algorithms.
To put this in perspective - the Excel XIRR function has been accepted as a solution thus far. But I don't see how to reliably find the maximum and minimum valued solutions where there is more than one, and I don't know how likely it is to fail or what characteristics of the data would induce failure (other than leading payments of 0, which it doesn't handle at all).
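For what it's worth, given that sign pattern (all negative payments, then one positive), the second uniqueness argument above applies (it is the reversed-sign case), so even a plain bisection on the NPV is a serviceable fallback and is easy to reason about. A sketch (this is not XIRR's algorithm; the cash-flow layout and helper names are just mine):

    use strict;
    use warnings;

    # Net present value of dated cash flows at annual rate $r.
    # @$flows is a list of [amount, years_from_start] pairs.
    sub npv {
        my ($r, $flows) = @_;
        my $sum = 0;
        $sum += $_->[0] / (1 + $r) ** $_->[1] for @$flows;
        return $sum;
    }

    # Bisection for the IRR when the root with 1 + r > 0 is unique.
    # $lo and $hi must bracket it (NPV changes sign between them).
    sub irr_bisect {
        my ($flows, $lo, $hi, $tol) = @_;
        my ($f_lo, $f_hi) = (npv($lo, $flows), npv($hi, $flows));
        die "interval does not bracket a root\n" if $f_lo * $f_hi > 0;

        while ($hi - $lo > $tol) {
            my $mid   = ($lo + $hi) / 2;
            my $f_mid = npv($mid, $flows);
            if ($f_mid * $f_lo <= 0) { ($hi, $f_hi) = ($mid, $f_mid) }
            else                     { ($lo, $f_lo) = ($mid, $f_mid) }
        }
        return ($lo + $hi) / 2;
    }

    # Example: pay 100 now and 100 after one year, receive 250 after two years.
    my $rate = irr_bisect([ [-100, 0], [-100, 1], [250, 2] ], -0.99, 10, 1e-9);
    printf "IRR is about %.6f\n", $rate;    # roughly 0.158

It won't, of course, tell you anything about the multiple-root cases you mention; for those you still need the polynomial machinery discussed above.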