in reply to a better fibonacci subroutine
They have different efficiency tradeoffs - yours trades memory for faster future lookups. It will probably be slower any time you significantly raise the N you call fibs with, but all the values below that then become quick lookups. The other solution recalculates each time, but needs only a small, constant amount of storage.
I've been playing with Haskell a little recently; it's an interesting language. There seems to be a greater emphasis on case analysis in the language, as exemplified by the pattern-matching approach to function definition. For example, one (perhaps not very efficient) way of defining fibs in Haskell is:
    fibs 0 = 1
    fibs 1 = 1
    fibs n = fibs (n - 1) + fibs (n - 2)

which looks a lot like a mathematical description of the function.
As to whether the above is efficient, it depends on whether fibs is memoized. On the face of it, it doesn't appear to be. Then you recall that Haskell is a functional language with no assignment (OK, it's not quite that simple), and so the compiler is perfectly at liberty to memoize any functions it wants... in which case it may well end up equivalent to your explicitly memoized solution above.
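For what it's worth - and this is a common Haskell idiom rather than anything in the original post, so take it as a sketch - you can also make the memoization explicit by defining the whole sequence as a lazy, self-referential list, so each value is computed once and then shared (the names fibList and fib here are just for illustration):

    -- fibList: the whole Fibonacci sequence as a lazy list; each element
    -- is computed once and then shared by later references
    fibList :: [Integer]
    fibList = 1 : 1 : zipWith (+) fibList (tail fibList)

    -- look up the nth Fibonacci number (0-indexed), e.g. fib 10 == 89
    fib :: Int -> Integer
    fib n = fibList !! n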
Whilst we're on the subject of Haskell, you also have to love the ability of lazy evaluation to give you infinite lists of things:
    ones = 1 : ones
    integers = 1 : map (1+) integers

The infinite list of 1s? Oh, that's a 1, followed by the infinite list of ones. The infinite list of integers? That's a 1, followed by the infinite list of integers with 1 added to them all.
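To see the laziness pay off (this is just an illustrative GHCi session, not part of the original post): you only ever evaluate the prefix you actually ask for.

    ghci> take 5 ones
    [1,1,1,1,1]
    ghci> take 5 integers
    [1,2,3,4,5]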
Again, it all seems very mathematical - writing down definitions and letting the compiler take care of the details. And again, it probably doesn't stay that simple with real-world programs.
Roll on perl6.