in reply to Re: Iterative vs Recursive Processes
in thread Iterative vs Recursive Processes

... it is theoretically impossible for Perl 5 to be tail-recursive.

I disagree with this, or at least with the sentiment I inferred from it. While tail-recursion optimisation could not be turned on by default for the reasons you cite, I see no reason that this and other optimisations could not be introduced optionally by means of lexically-scoped pragmas.

In principle, a more complex approach would also be possible: always to perform the optimisation, but to retain enough information to emulate the theoretical caller stack on demand. I wouldn't recommend this approach though - for a simple recursion, you'd probably need to retain only the number of levels of recursion to emulate, but it would be much more difficult to cope with even slightly more complex cases, such as a subroutine that called itself from two different places, or a pair of co-recursive subroutines.

Hugo

Re: Re: Re: Iterative vs Recursive Processes
by BrowserUk (Patriarch) on May 12, 2003 at 22:22 UTC

    Are there algorithms that would benefit from tail-recursion optimisation -- which I have always taken to mean that they leave nothing in the stack frame by way of intermediate data or state that is required when unwinding -- that aren't readily and easily re-cast as iterative algorithms in the first place?

    Factorial isn't a great example, but it is a common one:

    sub factorial {
        my ($f, $n) = (1, shift);
        $f *= $n-- while $n;
        return $f;
    }

    To my eyes, even with 1-char variable names and no comments, this is so much clearer than the equivalent recursive routine that I can't understand why I would do it any other way.
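For comparison, the recursive routine alluded to above might look like this (a sketch; the name `factorial_rec` is mine, not from the thread):

```perl
# A sketch of the equivalent recursive routine.
sub factorial_rec {
    my $n = shift;
    return 1 if $n < 2;                   # base case
    return $n * factorial_rec($n - 1);    # NOT a tail call: the multiply
}                                         # is still pending on return
```

Note that the pending multiplication is exactly what stops this from being tail-recursive: each stack frame must survive until the inner call returns.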

    The big hole in my language experience is any serious use of the Lisp-ish branch of the family. To my shame, I never got past the 'Loads of Infuriatingly Stupid Parentheses' (and ridiculously terse and meaningless keywords) state of mind. I tried a couple of times to go back and address this omission, but I encounter the same Aaaaarg! sensations every time.

    The whole tail-recursion debate seems to stem from those languages where recursion is used almost exclusively (an exaggeration, I know, but not much of one) to perform any form of repetition. Early on, they found that using recursion for essentially iterative algorithms caused explosive memory usage to little benefit, so they invented tail-recursion optimisation. The invention was very successful in decreasing the inefficiencies of this practice, and so the original bug-fix hack has taken on the status of an algorithmically 'fundamentally good idea'+.

    So my question is: Can anyone educate me, by showing me an algorithm that is better (by some definition) implemented using a tail-recursive method than an iterative method.

    + I am not the originator of these notions, but I couldn't find an authoritative quote anywhere, and as I am no longer in contact with the person who expressed them, convincingly, to me, I'm quite happy to take the rap for them.


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

      I'm no expert on the theory, and have never used lisp, but many algorithms I find far easier to think about in terms of recursion, which means the resulting code (when written in those terms) reads much more naturally and clearly to me.

      Let's take that factorial example, and consider what happens if you memoize it - if you calculate factorial(100), you've done in passing the work required to calculate the factorials of 1 .. 99, and the recursive variant will memoize those results as well for no extra work, while the iterative one won't.
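A minimal sketch of that point, using the core Memoize module with a plain recursive factorial:

```perl
use Memoize;

# Plain recursive factorial.
sub factorial {
    my $n = shift;
    return 1 if $n < 2;
    return $n * factorial($n - 1);
}
memoize('factorial');   # replaces factorial with a caching wrapper

factorial(100);   # as a side effect, caches factorial(1) .. factorial(100)
factorial(99);    # now just a hash lookup, no recursion at all
```

Because the recursive call resolves through the symbol table, it too goes via the caching wrapper, which is why every intermediate result gets remembered; the iterative loop has no such internal call to intercept.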

      The thing is, recursion is a tool like any other, and I wouldn't refuse a better screwdriver simply because I've already got a good hammer.

      Hugo

        S'fair enough :) Though I wasn't suggesting that recursion itself is non-useful. I agree that there are many algorithms which are naturally recursive and are much easier both to think of and to implement using recursion. Almost anything to do with trees is a good example, but in all the cases I can think of where this is so, retaining state at intermediate points in the repetition is an integral part of the algorithm -- i.e. intrinsic to the delivery of the immediate final result, as opposed to a potential, future final result such as the memoize case you cite can deliver.

        I've not had the occasion to use Memoize in anger, but I've read some of Dominus' arguments for it, and for Streams and so on. It is certainly a neat way of speeding up existing code (amongst other uses), but I was struck by the thought that it would be cheaper in memory, and quicker, simply to re-code the original function to use a lookup table. I can see this doesn't have the infinite-list feature, but then I don't have infinite memory either :).

        I very definitely agree that dismissing any better screwdriver out of hand is silly. I was just hoping I might get some good counterpoints to the tail-recursion-is-a-hack-elevated-to-goodness argument that I was pretty much convinced by several years ago.


      Sometimes, no, and yes. :-)

      In a few languages, Scheme being one of them, there are no built-in iteration constructs. They all have to be built up from recursion. In that case it is impossible to recast them as iteration from the ground up, and so tail-recursion is clearly better than the non-existent alternative.

      Of course that doesn't hold for most languages. (It doesn't even hold for Scheme if you are willing to ignore how certain iteration constructs are actually built.) In that case tail-recursion is just a way of having the compiler automatically rewrite recursive algorithms as iterative ones. What is done automatically for you by the compiler is perfectly doable by you by hand, so it is fundamentally impossible to come up with any tail-recursive solution which is not rewritable as an iterative one.
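To illustrate that hand rewrite, here is a sketch of the mechanical transformation a tail-call-optimising compiler performs, applied to an accumulator-style factorial (the names are illustrative):

```perl
# Tail-recursive form: the recursive call is the last thing evaluated.
sub fact_tail {
    my ($n, $acc) = @_;
    $acc = 1 unless defined $acc;
    return $acc if $n < 2;
    return fact_tail($n - 1, $acc * $n);   # tail call, nothing pending
}

# The hand rewrite: replace the call with reassignment of the
# parameters and a jump back to the top -- i.e. a loop.
sub fact_loop {
    my $n   = shift;
    my $acc = 1;
    while ($n >= 2) {
        ($n, $acc) = ($n - 1, $acc * $n);
    }
    return $acc;
}
```

The transformation is entirely mechanical, which is why a compiler can do it for you; the argument in the thread is about whether it is worth doing by hand.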

      However there are problems which are easier to figure out by recursion than they are by iteration. This is generally not obvious in the simple cases where the translation is trivial and familiarity makes iteration always look cleaner, no matter what. But in more complex cases it may be easier to work out the recursive solution, and it is darned easier to take an existing recursive solution and have it automatically optimized for you than to go through and do the optimization by hand.

      This is doubly true if (as often happens) you have a recursive solution some of whose call frames can be eliminated and some of which cannot. Trying to optimize by hand becomes a real bear as you start mixing different styles of code -- recursion where iteration won't work, iteration where you can optimize away the recursion -- with headaches (plus probably a few bugs) scattered everywhere.

        I hadn't realised that tail-recursion optimisations could operate in a 'mixed environment'. That is to say, the last time I took the time to investigate the mechanisms involved, they seemed to only operate in an all-or-nothing manner. I remember several long articles/series in Byte, JoACM and others about techniques for ensuring that tail-recursion wasn't inhibited. That only goes to show how long I have been suffering under the delusion that TR wasn't a useful technique outside of those languages that make a virtue of it.

        I'm long overdue taking the time to re-visit the issue I think. Thanks. Now I'm off in search of a nice self-contained example of a mixed recursion/TR algorithm to play with:)



        In a few languages, Scheme being one of them, there are no built-in iteration constructs.

        I think that this is the source of many people's fascination with tail recursion. It appeals very strongly to the minimalist mentality (who needs iteration when you have a tail-recursion-optimizing compiler?). Personally I find such minimalism distasteful, which is probably one of the reasons I like Perl.

        I have to say that I think iterative solutions tend to have fewer surprise bugs than recursive ones. My experience is that recursive solutions often hide hard-to-find bugs in exceptional cases, whereas the same bug in an iterative solution usually results in the algorithm failing completely. So personally I think that reducing recursion as much as possible is a good thing. I am also perfectly ready to admit that many others whom I hold in considerable respect disagree. :-) I will grant, though, that as the scale of the code increases, removing recursion becomes more and more difficult.


        ---
        demerphq

        <Elian> And I do take a kind of perverse pleasure in having an OO assembly language...
      That's fine for a factorial function. Now try writing a quicksort without using recursion, preferably in a language like C that doesn't take care of memory management for you intelligently, as Perl does.

      Makeshifts last the longest.

        I wasn't saying that recursion is not useful. Many, many algorithms that use divide-and-conquer strategies are not only much simpler to implement using recursion; in many cases the only way to avoid the recursion is to emulate the stack frames in order to retain state. This emulation is almost always more complex, usually slower, and always more bug-prone than the clean simplicity of the recursive version.

        My question was: is there an algorithm that benefits from tail-recursion optimisation that isn't easily recast as an iterative solution? tilly mentioned some where TR can eliminate some call frames, but not others.

        I think that QuickSort may be one such algorithm.

        I'm pretty certain there is no strategy that would allow the QuickSort algorithm to benefit from all-or-nothing TR, as the pivot points are determined on the way in but still required on the way out. However, whilst the pivot point is still required when returning from the first recursive call to the qsort caller, the return from the second call could, in theory at least, return back through two stack frames, as the localised state information is no longer required. This is a situation where transparent TR would benefit the algorithm. (I think.)
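That intuition matches the classic hand-optimisation of quicksort: the second recursive call is a tail call, so it can be turned into a loop, leaving only the first call to consume stack. A sketch, assuming Lomuto partitioning (helper names are mine):

```perl
# Quicksort with the second (tail) recursive call eliminated by hand.
sub qsort {
    my ($a, $lo, $hi) = @_;
    while ($lo < $hi) {
        my $p = partition($a, $lo, $hi);
        qsort($a, $lo, $p - 1);   # genuine recursion for the left part
        $lo = $p + 1;             # the tail call on the right part
    }                             # becomes another trip round the loop
}

# Lomuto partition: last element as pivot, returns its final index.
sub partition {
    my ($a, $lo, $hi) = @_;
    my $pivot = $a->[$hi];
    my $i = $lo - 1;
    for my $j ($lo .. $hi - 1) {
        if ($a->[$j] <= $pivot) {
            $i++;
            @$a[$i, $j] = @$a[$j, $i];    # swap
        }
    }
    @$a[$i + 1, $hi] = @$a[$hi, $i + 1];  # put pivot in place
    return $i + 1;
}
```

The standard refinement is to recurse into the smaller partition and loop on the larger, which bounds the stack depth at O(log n) even on pathological inputs.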


Re: Re: Re: Iterative vs Recursive Processes
by tilly (Archbishop) on May 12, 2003 at 20:16 UTC
    I agree that if you want to optionally change caller's behaviour, then you can get tail-recursion to work.

    However I am having extreme difficulty in seeing how, even in principle, one could make your more complex approach work. One could save memory by compressing the information in the caller stack. But I fail to see how one could regenerate the intermediate results in a complex computation on demand without redoing the computation - and that only works if you readily know (which you don't in Perl) that the computation is side-effect free.

      For tail-recursion I'm not sure what intermediate results you would ever need to recreate. Maybe it's just the time of day, but off the top of my head I can't think of any way to get at that information in current perl.

      Let me take a simple example:

      sub factorial {
          my $n    = shift;
          my $mult = @_ ? shift : 1;
          return $mult if $n < 2;
          return factorial($n - 1, $mult * $n);
      }

      Ah, now I see the problem: I had forgotten about the 'invisible' caller() arguments, @DB::args, which are used (for example) by Carp to show the parameters each subroutine was called with in a stack trace. Ok, scratch that approach. :)
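For completeness: Perl 5 does already offer an explicit, opt-in tail call via goto &NAME, which replaces the current call frame rather than stacking a new one -- with exactly the caller()-visibility trade-off just noted. A sketch:

```perl
# Manual tail-call elimination using Perl 5's goto &NAME.
sub factorial {
    my $n    = shift;
    my $mult = @_ ? shift : 1;
    return $mult if $n < 2;
    @_ = ($n - 1, $mult * $n);   # set up the arguments for the next call
    goto &factorial;             # replaces this frame; stack depth stays flat
}
```

Because each goto discards the current frame, a stack trace taken inside the recursion shows only the original caller, one level deep, no matter how large $n is.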

      Hugo