apotheon has asked for the wisdom of the Perl Monks concerning the following question:

I ran across something earlier today wherein someone held forth on the subject of the syntactic supremacy of Haskell over just about all other languages, in a piece titled on syntax. The thesis of this thing was, in essence, that the "language of the future" should have a syntax like Haskell's, because Haskell's syntax is best.

I'm skeptical, of course. As a means of demonstrating how great Haskell's syntax is, however, he decided to use a simple Fibonacci number calculation subroutine, implemented in a number of different languages. His Perl example looked like this:

sub fibo { my ($n, $a, $b) = (shift, 0, 1); ($a, $b) = ($b, $a + $b) while $n-- > 0; $a; }

I find that to be unnecessarily ugly. I edited it a little bit for my own purposes, adding code to actually use the subroutine and reformatting it to eliminate a little aesthetic discomfort thusly:

use strict;
use warnings;
use Memoize;

memoize('fib');

foreach my $x (0 .. $ARGV[0]) {
    print fib($x);
}

sub fib {
    my ($n, $a, $b) = (shift, 0, 1);
    ($a, $b) = ($b, $a + $b) while $n-- > 0;
    $a;
}

Even so, I don't really think it's ideal. I wrote another version of it from scratch, in a manner I thought would be much prettier:

use strict;
use warnings;
use Memoize;

memoize('fib');

foreach my $x (0 .. $ARGV[0]) {
    print fib($x);
}

sub fib {
    my $n = shift;
    return $n if $n < 2;
    return fib($n - 1) + fib($n - 2);
}

So, I do have a question:

Is there any particular reason the Haskell guy's version is better than my recursive version?

print substr("Just another Perl hacker", 0, -2);
- apotheon
CopyWrite Chad Perrin

Replies are listed 'Best First'.
Re: a better fibonacci subroutine
by shmem (Chancellor) on Dec 15, 2006 at 08:49 UTC
    It's a matter of taste of course, but I can see no ugliness in the "Haskell guy's" version. It's clean, understandable perl code. I'd advise against using $a and $b for anything other than sorting, though. That would be my only criticism (and it applies to your code, too).

    As to which is better - yours is recursive, his is iterative. You need to use Memoize, he doesn't. You are spending more resources than he is.

    The other way round - what makes your version better, apart from "prettiness" (whatever that means)?

    --shmem

    _($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                                  /\_¯/(q    /
    ----------------------------  \__(m.====·.(_("always off the crowd"))."·
    ");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}

      One thing that strikes me immediately about his version is that it almost requires one-letter, or at least greatly abbreviated, variable names to keep line lengths manageable. I also tend to find that reducing the number of variables scattered throughout a snippet of code pretties things up and makes it easier to reason through, enhancing that ephemeral quality "readability". Further, the complexity of the code in general is greater in the iterative version, as measured in discrete syntactic elements, which strikes me as (all else being equal) a useful metric of the pleasant succinctness of code.

      print substr("Just another Perl hacker", 0, -2);
      - apotheon
      CopyWrite Chad Perrin

Re: a better fibonacci subroutine
by jbert (Priest) on Dec 15, 2006 at 09:55 UTC
    I think both solutions are clear and readable.

    They have different efficiency tradeoffs - yours trades memory for faster future lookups. It will probably be slower the first time you significantly raise the N you call fib with, but every value at or below that then becomes a quick lookup. The other solution recalculates each time, but requires only small, constant storage.
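    That tradeoff is easy to see with the core Benchmark module. A sketch (the two subs are just the versions from the original post, renamed so the iterative one doesn't clobber $a/$b):

```perl
use strict;
use warnings;
use Memoize;
use Benchmark qw(cmpthese);

# The iterative version, with $a/$b renamed to harmless lexicals
sub fib_iter {
    my ($n, $x, $y) = (shift, 0, 1);
    ($x, $y) = ($y, $x + $y) while $n-- > 0;
    return $x;
}

# The recursive version, memoized so repeated calls become cache hits
sub fib_rec {
    my $n = shift;
    return $n if $n < 2;
    return fib_rec($n - 1) + fib_rec($n - 2);
}
memoize('fib_rec');

# After the first call, fib_rec(25) is a single cache lookup;
# fib_iter(25) redoes its 25 iterations every time.
cmpthese(50_000, {
    iterative => sub { fib_iter(25) },
    memoized  => sub { fib_rec(25) },
});
```

    The memoized version wins the benchmark precisely because it is paying for those lookups with the memory the cache occupies.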

    I've been playing with Haskell a little recently; it's an interesting language. There seems to be a greater emphasis on case analysis in the language, as exemplified by the pattern-matching approach to function definition. For example, one (perhaps not efficient) way of defining fibs in Haskell is:

    fibs 0 = 1
    fibs 1 = 1
    fibs n = fibs (n - 1) + fibs (n - 2)
    which looks a lot like a mathematical description of the function.

    As to whether the above is efficient, it depends on whether fibs is memoized. On the face of it, it doesn't appear to be. But then you recall that Haskell is a functional language, with no assignment (ok, it's not that simple), so the compiler is perfectly at liberty to memoize any function it wants... and it may well end up equivalent to your explicitly memoized solution above.

    Whilst we're on the subject of Haskell, you also have to love the ability of lazy evaluation to give you infinite lists of things:

    ones     = 1 : ones
    integers = 1 : map (1+) integers
    The infinite list of 1s? Oh, that's a 1, followed by the infinite list of ones. The infinite list of integers? That's a 1, followed by the infinite list of integers with 1 added to them all.

    Again, it all seems very mathematical - writing down definitions and letting the compiler take care of the details. And again, it probably doesn't stay that simple with real-world programs.

    Roll on perl6.

      There is a large difference between memoizable and memoized. Saying that the compiler is allowed to guess is great - I can get some speed without any effort. But say I'm not getting enough speed. So I profile the sucker, and find too much time wasted on the fib function. Suspecting that it's not being memoized, I flip a switch to get it memoized, and suddenly I have my speed boost. Question: is there such a switch in Haskell? There definitely is in Perl 5 (use Memoize; memoize('fib');). I'm pretty sure there is in Perl 6 as well (some is flag, I bet).

      That there is a teeny cost to memoizing in perl 5 seems to be less serious to me than the ability to actually control it. For example, I have a memory-intensive program that is allowed to run all night if it needs to. Turning off all memoizations is important - better to run slow than to crash with an out-of-memory error. Being able to say that a function can or cannot be memoized is, to me, very important.
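      The switch in Perl 5 even flips both ways: Memoize also exports an unmemoize that restores the original sub and drops the cache. A sketch, using the recursive fib from the original post:

```perl
use strict;
use warnings;
use Memoize qw(memoize unmemoize);

sub fib {
    my $n = shift;
    return $n if $n < 2;
    return fib($n - 1) + fib($n - 2);
}

# Flip the switch on: the internal recursive calls hit the cache too
memoize('fib');
my $fast = fib(30);    # linear number of calls, results cached

# Flip it back off when memory matters more than speed
unmemoize('fib');
my $slow = fib(20);    # plain exponential recursion again

print "$fast $slow\n";
```

      Being able to turn the cache off again is exactly the kind of control the out-of-memory scenario above calls for.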

        Well, yes. But that's the same argument for fine-grained control which people have been making ever since the move away from assembler to high level languages.

        There are (nearly) always cases where the higher-level approach suffers in performance, but the gains in expressiveness are worthwhile. (And often the compiler/interpreter can guess better than the programmer - note that hand-coded assembler these days is often slower than well-compiled C, because of x86 architectural issues.)

        As another example - purely functional code has no side effects and so the compiler is permitted to parallelize it across multiple CPUs or cores. One can argue that one would prefer the control of explicit threads and locks, but the effort and error rate in doing so is likely to be high.

        Whether current compilers have all these features or not (I'm not sure), fundamentally, functional code is more amenable to aggressive optimisation and parallelisation - as well as allowing high levels of expressiveness.

        Imagine a perl6 pragma which told parrot that the enclosing lexical block contained no side-effects (or a code analysis phase which determined the same thing) and allowed the loop in the block to automagically run on all 4 cores of your 2007 CPU.

Re: a better fibonacci subroutine
by herby1620 (Monk) on Dec 15, 2006 at 21:28 UTC
    You have two ways of getting the result: the iterative case (the Haskell guy's) and the recursive case (yours). The iterative method is very easy to follow; just write down the values of $a and $b as it iterates. The recursive solution works as well, but (big but here!) for every number you wish to get (except the first two cases) it makes a number of calls to 'fib' that itself grows like a Fibonacci sequence, just with different starting points. This number grows VERY quickly:
    Call      Number of calls   Calls
    fib(1)     0                Internal
    fib(2)     0                Internal
    fib(3)     2                fib(2), fib(1)
    fib(4)     4                fib(3), [fib(2), fib(1)], fib(2)
    fib(5)     8                fib(4), fib(3), plus the 4 + 2 beneath them
    fib(6)    14                fib(5), fib(4), plus the 8 + 4 beneath them
    fib(7)    24                fib(6), fib(5), plus the 14 + 8 beneath them
    fib(8)    40                fib(7), fib(6), plus the 24 + 14 beneath them
    
    While this DOES demonstrate recursion, it can consume resources. As a bit of an 'old school' type programmer who grew up with machines with limited resources, I prefer the iterative solution (personally!).
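    Those counts can be checked mechanically. With base cases fib(1) = fib(2) = 1, each fib(n) makes two direct calls plus everything beneath them, so calls(n) = calls(n-1) + calls(n-2) + 2. A small counting sketch:

```perl
use strict;
use warnings;

my $calls;

sub fib {
    my $n = shift;
    $calls++;
    return 1 if $n <= 2;    # fib(1) and fib(2) are the base cases here
    return fib($n - 1) + fib($n - 2);
}

for my $n (3 .. 8) {
    $calls = 0;
    fib($n);
    # subtract 1 so the top-level call itself isn't counted
    printf "fib(%d): %d calls\n", $n, $calls - 1;
}
```

    which prints counts of 2, 4, 8, 14, 24 and 40 - each one 2 more than the sum of the previous two, and roughly twice fib(n) itself.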