Re: Re: Iterative vs Recursive Processes
by hv (Prior) on May 12, 2003 at 20:11 UTC
... it is theoretically impossible for Perl 5 to be tail-recursive.
I disagree with this, or at least with the sentiment I inferred from it. While tail-recursion optimisation could not be turned on by default for the reasons you cite, I see no reason that this and other optimisations could not be introduced optionally by means of lexically-scoped pragmas.
In principle, a more complex approach would also be possible: always perform the optimisation, but retain enough information to emulate the theoretical caller stack on demand. I wouldn't recommend this approach, though: for a simple recursion you'd probably need to retain only the number of levels of recursion to emulate, but it would be much more difficult to cope with even slightly more complex cases, such as a subroutine that calls itself from two different places, or a pair of co-recursive subroutines.
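By way of illustration, a contrived co-recursive pair (my sketch, not from the thread): each sub tail-calls the other, so a single depth counter could not tell you which routine occupied which emulated frame.

```perl
use strict;
use warnings;

# Two mutually tail-recursive subs: the "theoretical" caller stack
# alternates between them, so emulating it needs more than a count.
sub is_even {
    my $n = shift;
    return 1 if $n == 0;
    return is_odd( $n - 1 );    # tail call into the *other* sub
}

sub is_odd {
    my $n = shift;
    return 0 if $n == 0;
    return is_even( $n - 1 );   # and back again
}
```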
Hugo
Are there algorithms that would benefit from tail-recursion optimisation -- which I have always taken to mean that they leave nothing in the stack frame, by way of intermediate data or state, that is required when unwinding -- that aren't readily and easily re-cast as iterative algorithms in the first place?
Factorial isn't a great example, but it is a common one:
sub factorial {
    my( $f, $n ) = ( 1, shift );
    $f *= $n-- while $n;
    return $f;
}
To my eyes, even with 1-char variable names and no comments, this is so much clearer than the equivalent recursive routine that I can't understand why I would do it any other way.
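For reference, the recursive equivalent I have in mind is something like this (a sketch):

```perl
use strict;
use warnings;

sub factorial {
    my $n = shift;
    return 1 if $n < 2;                 # base case
    return $n * factorial( $n - 1 );    # NOT a tail call: the multiply
                                        # happens after the call returns
}
```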
The big hole in my language experience is any serious use of the lisp-ish branch of the family. To my shame, I never got past the 'Loads of Infuriatingly Stupid Parentheses' (and ridiculously terse and meaningless keywords) state of mind. I tried a couple of times to go back and address this omission, but I encounter the same Aaaaarg! sensations every time.
The whole tail-recursion debate seems to stem from those languages where recursion is used almost exclusively (an exaggeration, I know, but not much of one) to perform any form of repetition. Early on, they found that using recursion for essentially iterative algorithms caused explosive memory usage to little benefit, so they invented tail-recursion optimisation. The invention was very successful in reducing the inefficiencies of this practice, and so the original bug-fix hack has taken on the status of an algorithmically 'fundamentally good idea'+.
So my question is: Can anyone educate me, by showing me an algorithm that is better (by some definition) implemented using a tail-recursive method than an iterative method.
+ I am not the originator of these notions, but I couldn't find an authoritative quote anywhere, and as I am no longer in contact with the person who expressed them, convincingly, to me, I'm quite happy to take the rap for them.
Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller
I'm no expert on the theory, and have never used lisp, but many algorithms I find far easier to think about in terms of recursion, which means the resulting code (when written in those terms) reads much more naturally and clearly to me.
Let's take that factorial example, and consider what happens if you memoize it - if you calculate factorial(100), you've done in passing the work required to calculate the factorials of 1 .. 99, and the recursive variant will memoize those results as well for no extra work, while the iterative one won't.
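A sketch of what I mean, using the core Memoize module: one call to fact(20) populates the cache for every smaller argument as a side effect of the recursion, which the iterative loop never would.

```perl
use strict;
use warnings;
use Memoize;

sub fact {
    my $n = shift;
    return 1 if $n < 2;
    return $n * fact( $n - 1 );   # recursive calls go through the wrapper
}
memoize('fact');                  # replaces fact() with a caching wrapper

fact(20);    # in passing, this caches fact(1) .. fact(20) as well
```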
The thing is, recursion is a tool like any other, and I wouldn't refuse a better screwdriver simply because I've already got a good hammer.
Hugo
Sometimes, no, and yes. :-)
In a few languages, Scheme being one of them, there are no built-in iteration constructs. They all have to be built up from recursion. In that case it is impossible to recast them as iteration from the ground up, and so tail-recursion is clearly better than the non-existent alternative.
Of course that doesn't hold for most languages. (It doesn't even hold for Scheme if you are willing to ignore how certain iteration constructs are actually built.) In that case tail-recursion is just a way of having the compiler automatically rewrite recursive algorithms as iterative ones. What is done automatically for you by the compiler is perfectly doable by you by hand, so it is fundamentally impossible to come up with any tail-recursive solution which is not rewritable as an iterative one.
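To make that concrete, a small example of the hand rewrite (a hypothetical gcd, not from the thread): the tail call becomes a rebinding of the arguments plus a jump back to the top of a loop.

```perl
use strict;
use warnings;

# Tail-recursive form: the recursive call is the very last thing done.
sub gcd_rec {
    my ( $m, $n ) = @_;
    return $m if $n == 0;
    return gcd_rec( $n, $m % $n );
}

# The same algorithm after the rewrite a tail-call optimizer performs:
# rebind the parameters and loop instead of stacking a new frame.
sub gcd_iter {
    my ( $m, $n ) = @_;
    ( $m, $n ) = ( $n, $m % $n ) while $n;
    return $m;
}
```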
However there are problems which are easier to figure out by recursion than they are by iteration. This is generally not obvious in the simple cases where the translation is trivial and familiarity makes iteration always look cleaner, no matter what. But in more complex cases it may be easier to work out the recursive solution, and it is darned easier to take an existing recursive solution and have it automatically optimized for you than to go through and do the optimization by hand.
This is doubly true if (as often happens) you have a recursive solution some of whose callframes can be eliminated and some of which cannot. Trying to optimize by hand becomes a real bear: you end up mixing different styles of code, recursion where iteration won't work and iteration where you could optimize the recursion away, with headaches (plus probably a few bugs) scattered everywhere.
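A contrived sketch of that mixed style: counting nodes in a binary tree, where the call on one child is a true recursion whose result is needed, while the call on the other child is a tail call that can be turned into a loop by hand.

```perl
use strict;
use warnings;

# Nodes are hashrefs: { left => $node_or_undef, right => $node_or_undef }.
sub count_nodes {
    my $node = shift;
    my $count = 0;
    while ($node) {
        $count++;
        $count += count_nodes( $node->{left} );  # real recursion: result needed
        $node = $node->{right};                  # tail call, done as iteration
    }
    return $count;
}
```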
I agree that if you want to optionally change caller's behaviour, then you can get tail-recursion to work.
However I am having extreme difficulty in seeing how, even in principle, one could make your more complex approach work. One could save memory by compressing the information in the caller stack. But I fail to see how one could regenerate the intermediate results in a complex computation on demand without redoing the computation - and that only works if you readily know (which you don't in Perl) that the computation is side-effect free.
For tail-recursion I'm not sure what intermediate results you would ever need to recreate? Maybe it's just the time of day, but I can't off the top of my head think of any way to get at that information in current perl.
Let me take a simple example:
sub factorial {
    my $n = shift;
    my $mult = @_ ? shift : 1;
    return $mult if $n < 2;
    return factorial( $n - 1, $mult * $n );
}
Ah, now I see the problem: I had forgotten about the 'invisible' caller() arguments, @DB::args, which are used (for example) by Carp to show the parameters each subroutine was called with in a stack trace. Ok, scratch that approach. :)
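(As an aside, Perl does already offer a manual form of tail-call elimination via goto &NAME, which replaces the current call frame instead of stacking a new one; a sketch of the routine above rewritten that way:)

```perl
use strict;
use warnings;

sub factorial {
    my $n    = shift;
    my $mult = @_ ? shift : 1;
    return $mult if $n < 2;
    @_ = ( $n - 1, $mult * $n );
    goto &factorial;    # reuses the current frame; constant stack depth
}
```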
Hugo
Re^2: Iterative vs Recursive Processes
by adrianh (Chancellor) on May 13, 2003 at 13:58 UTC
Using caller you are supposed to be able to generate a complete stack backtrace
Wouldn't:
    Be aware that the optimizer might have optimized call frames
    away before "caller" had a chance to get the information. That
    means that caller(N) might not return information about the
    call frame you expect it to, for "N > 1". In particular,
    @DB::args might have information from the previous time
    "caller" was called.
(from caller's docs) let you out of that particular hole? Tail recursion would just be another case to add to the existing ones. Not entirely backwards compatible, but not so bad as to disallow the concept, I would have thought...
Good point.
However even so I would argue against turning it on naively. You see, in Perl, part of cleaning up a call-frame is destroying temporary lexical variables and undoing invocations of local. When the things destroyed are objects that may have side-effects in their DESTROY methods (see ReleaseAction for an example of where this happens), then trying to optimize tail-recursion creates a significant change in what the code does.
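A minimal sketch of that hazard (a contrived example, not taken from ReleaseAction itself): a lexical guard object whose DESTROY fires when its frame is cleaned up. Under normal semantics the frames unwind innermost-first; eliminating the tail-call frames would fire each DESTROY before the recursive call instead, reversing the observable order.

```perl
use strict;
use warnings;

our @unwind;    # records the order in which DESTROY fires

package Guard;
sub new     { my ( $class, $msg ) = @_; return bless { msg => $msg }, $class }
sub DESTROY { push @main::unwind, $_[0]{msg} }

package main;

sub countdown {
    my $n = shift;
    my $guard = Guard->new("level $n");   # destroyed when this frame unwinds
    return if $n == 0;
    countdown( $n - 1 );   # a tail call, but $guard must outlive it today
}

countdown(2);
# @unwind is now ('level 0', 'level 1', 'level 2'): innermost frame first.
# Naive tail-call elimination would destroy each guard *before* the
# recursive call, producing the opposite order.
```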
Which brings us to another theme. A lot of the theory folks argue for side-effect free programming (this is what "functional programming" really means - as opposed to the bad meme that I started) because (among other things) it gives the compiler far greater opportunity to optimize and reorder how things happen. (Proponents argue that it also makes big programs easier to understand.) However we naive humans find avoiding side-effects a difficult discipline, which makes a lot of theoretical optimizations very hard to apply in practical languages.
As Larry Wall said, optimization is a form of cheating, which is why you always get caught. And the more free-form the language is, the more requirements you put on the compiler, and the easier it is to catch it out when you try to cheat. :-)
However even so I would argue against turning it on naively.
Oh I agree. It's not something that can be globally switched on for perl 5 (and the lousy speed for subroutine calls makes tail-recursive code less appealing anyway). I just don't think caller is a problem :-)
When the things destroyed are objects that may have side-effects in their DESTROY methods (see ReleaseAction for an example where this happens) then trying to optimize tail-recursion creates a significant change in what the code does.
True, but naive scope-related DESTROY code is going to be in trouble when we get proper GC in perl6 anyway :-)