in reply to a better fibonacci subroutine

I think both solutions are clear and readable.

They have different efficiency tradeoffs - yours trades memory for faster future lookups. It will probably be slower the first time you significantly raise the N for which you call fibs, but every value below that then becomes a quick lookup. The other solution recalculates each time, but requires only small, constant storage.

I've been playing with Haskell a little recently; it's an interesting language. There seems to be a greater emphasis on case analysis in the language, as exemplified by the pattern-matching approach to function definition. For example, one (perhaps inefficient) way of defining fibs in Haskell is:

fibs 0 = 1
fibs 1 = 1
fibs n = fibs (n - 1) + fibs (n - 2)
which looks a lot like a mathematical description of the function.

As to whether the above is efficient or not, it depends on whether fibs is memoized. On the face of it, it doesn't appear to be. Then you recall that Haskell is a functional language with no assignment (OK, it's not quite that simple), so the compiler is perfectly at liberty to memoize any function it wants... and it may well end up equivalent to your explicitly memoized solution above.
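For comparison, a common Haskell idiom makes the memoization explicit rather than leaving it to the compiler: define the whole sequence as a self-referential lazy list. This is a sketch of my own, not taken from either solution above:

```haskell
-- The Fibonacci numbers as a self-referential lazy list.
-- Each element is computed once and then shared, so repeated
-- lookups are effectively memoized for free.
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

-- Index into the list to get the nth Fibonacci number
-- (same 1,1,2,3,... convention as the definition above).
fib :: Int -> Integer
fib n = fibs !! n
```

The list itself is the memo table: forcing fib 30 fills in elements 0 through 30, and any later call below that index is a plain list walk.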

Whilst we're on the subject of Haskell, you also have to love the ability of lazy evaluation to give you infinite lists of things:

ones = 1 : ones
integers = 1 : map (1+) integers
The infinite list of 1s? Oh, that's a 1, followed by the infinite list of ones. The infinite list of integers? That's a 1, followed by the infinite list of integers with 1 added to them all.
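These definitions work because a program only ever forces a finite prefix of each list. A small illustration of my own, sampling both lists with take:

```haskell
-- Infinite lists, defined in terms of themselves.
ones :: [Integer]
ones = 1 : ones

integers :: [Integer]
integers = 1 : map (1+) integers

-- Lazy evaluation means we can take a finite prefix
-- of either list without ever building the whole thing.
main :: IO ()
main = do
  print (take 5 ones)      -- [1,1,1,1,1]
  print (take 5 integers)  -- [1,2,3,4,5]
```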

Again, it all seems very mathematical - writing down definitions and letting the compiler take care of the details. And again, it probably doesn't stay that simple with real-world programs.

Roll on perl6.

Re^2: a better fibonacci subroutine
by Tanktalus (Canon) on Dec 15, 2006 at 15:31 UTC

    There is a large difference between memoizable and memoized. Saying that the compiler is allowed to guess is great - I can get some speed without any effort. But say I'm not getting enough speed. So I profile the sucker, and find too much time wasted on the fib function. Suspecting that it's not being memoized, I flip a switch to get it memoized, and suddenly I have my speed boost. Question: is there such a switch in Haskell? There definitely is in Perl 5 (use Memoize; memoize('fib');). I'm pretty sure there is in Perl 6 as well (some is flag, I bet).

    That there is a teeny cost to memoizing in Perl 5 seems to me less serious than the ability to actually control it. For example, I have a memory-intensive program that is allowed to run all night if it needs to. Turning off all memoization is important - better to run slow than to crash with an out-of-memory error. Being able to say that a function can or cannot be memoized is, to me, very important.
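    In Haskell the "switch" is typically not a pragma but a small opt-in rewrite: you route calls through a data structure you control, and delete it to turn memoization off again. A minimal sketch of my own (the table bound of 1000 is an arbitrary assumption):

```haskell
import Data.Array

-- Explicitly memoized fib: results live in a lazy array, so each
-- entry is computed at most once.  The bound (1000) is an arbitrary
-- cap chosen for this sketch; indexing above it is out of range.
fib :: Int -> Integer
fib = (memo !)
  where
    memo = listArray (0, 1000) [f i | i <- [0 .. 1000]]
    f 0 = 1
    f 1 = 1
    f n = memo ! (n - 1) + memo ! (n - 2)
```

    Dropping the table and calling f directly turns the memoization back off, which is roughly the degree of control Memoize gives you in Perl 5 - just expressed as code rather than a flag.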

      Well, yes. But that's the same argument for fine-grained control which people have been making ever since the move away from assembler to high level languages.

      There are (nearly) always cases where the higher-level approach suffers in performance, but the gains in expressiveness are worthwhile. (And often the compiler/interpreter can guess better than the programmer - note that hand-coded assembler these days is often slower than well-compiled C, because of x86 architectural issues.)

      As another example - purely functional code has no side effects and so the compiler is permitted to parallelize it across multiple CPUs or cores. One can argue that one would prefer the control of explicit threads and locks, but the effort and error rate in doing so is likely to be high.

      Whether current compilers have all these features or not (I'm not sure), fundamentally, functional code is more amenable to aggressive optimisation and parallelisation - as well as allowing high levels of expressiveness.

      Imagine a perl6 pragma which told parrot that the enclosing lexical block contained no side-effects (or a code analysis phase which determined the same thing) and allowed the loop in the block to automagically run on all 4 cores of your 2007 CPU.

        Perl 6 already supports those semantics for things like hyperoperators and junctions, and pugs already implements it if you have multiple CPUs.

        There will also be an is cached trait to tell Perl that a routine may be treated as effectively pure. There is no commitment here to the level of caching, however. The optimizer could, for instance, substitute an LRU cache for a complete cache if the potential parameter space is large, trading some time for bounded memory.

        Or you can write your own caching, of course. Here's the current self-caching version of fibonacci. It works in pugs:

        sub fib (UInt $n) {
            (state @seen = 0,1,1)[$n] //= fib($n-1) + fib($n-2);
        }
        This is likely to be more efficient than automatic caching because we know that the arguments are not going to be sparse integers. Also, the explicit "state" declaration tells the optimizer in no uncertain terms that this is not pure code.

        Another Perl 6 solution down in the pugs examples directories looks like this:

        sub fib (UInt $n, UInt $curr = 0, UInt $next = 1) {
            return $curr unless $n;
            return fib($n-1, $next, $curr + $next);
        }
        That's essentially just using tail recursion to implement the standard loop algorithm. The optimizer could probably deduce that it's pure code and maybe memoize it. As you point out, whether that's a win or not really depends on context.
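        For what it's worth, the same accumulator trick reads almost identically in Haskell. My own translation, using the 0-based fib(0) = 0 convention of the Perl 6 version above:

```haskell
-- Tail-recursive Fibonacci: a constant-space loop in disguise.
-- curr and next play the role of the two accumulator parameters
-- in the Perl 6 version.
fib :: Integer -> Integer
fib n = go n 0 1
  where
    go 0 curr _    = curr
    go k curr next = go (k - 1) next (curr + next)
```

        GHC compiles the tail call in go into a loop, so this runs in constant stack space, just as the Perl 6 optimizer could do with the tail-recursive sub.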

        Anyway, the point is that Perl 6 will certainly support TMTOWTDI when it comes to the amount of control you desire, and over how functionally you want to think. As specced (but not yet implemented), you can even do the really pretty version in Perl 6:

        multi fib (0) { 0 }
        multi fib (1) { 1 }
        multi fib (UInt $n) { fib($n-1) + fib($n-2); }
        Admittedly, that's not quite as spare as the Haskell version, mostly because I hate dangling syntax, which Haskell is full of. I like things to have a beginning, a middle, and an end. "Once upon a time, Perl 6 happened, and they all lived happily ever after." :)