in reply to Re: Doing "it" only once
in thread Doing "it" only once
Additionally, thanks for the history lesson. In a nutshell, this type of functionality has been intentionally removed from some languages so these types of optimizations wouldn't happen - interesting.
Cheers - L~R
Re^3: Doing "it" only once
by BrowserUk (Patriarch) on Sep 21, 2005 at 17:34 UTC
> ...this type of functionality has been intentionally removed from some languages so these types of optimizations wouldn't happen...

Yes, but as I said, FP language compilers are now making a virtue of it :)

The very thing that makes porting FP techniques into Perl such a waste of time is that Perl cannot hope to come close to performing anything like the depth of analysis (at runtime) that, say, GHC can perform at compile time. For example, memoization may seem like an 'advanced technique' in Perl's terms, but consider that every single function in Haskell is automatically memoized by the compiler, and moreover, every parameter of every function (currying) as well. This is done in such a way that once a function has been called with a given set of parameters, the code that calls the function is actually overwritten with the resultant value; from this you begin to see how the efficiency is achieved. Overwriting the function invocation with the value means that once the function has been called, every subsequent call of the function (with the same parameters) throughout the program simply retrieves the value of the result. There is not even a conditional test, never mind the additional function calls required to implement memoization in some Perl implementations.

It is the referential transparency built into the Haskell language that allows GHC to analyse the entire program structure and perform extensive graph reduction with value substitution. The dynamic nature of Perl, the need for introspection, and the need to support mutable data make many of the optimisations possible in FP languages impossible in Perl. It would be nice to think that P6 would be able to detect the (referentially pure) conditions that would allow it to substitute values for function calls after the first invocation, but it will require some extremely clever code in the compiler/interpreter to perform that trick.
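By contrast, memoization in Perl has to be opted into per function. A minimal sketch using the core Memoize module (the `slow_square` sub and the `$calls` counter are illustrative, added here only to make the caching visible):

```perl
use strict;
use warnings;
use Memoize;                      # core module

my $calls = 0;

sub slow_square {
    my $n = shift;
    $calls++;                     # count real invocations
    return $n * $n;
}

memoize( 'slow_square' );         # wrap the sub with a result cache

print slow_square( 7 ), "\n";     # 49, computed ($calls == 1)
print slow_square( 7 ), "\n";     # 49, served from the cache ($calls still 1)
```

Note that even here the cache lookup is a function call plus a hash fetch at runtime, which is exactly the overhead GHC's compile-time value substitution avoids.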
In terms of Perl 5, it actually makes more sense to model the function invocations as a tied hash. The code just uses the appropriate value of the hash, $fp{ $parameter }, and the tie mechanism fills in the value if it isn't already present. Subsequent calls for the same value are very efficient (pro rata the inefficiency of tied hashes). You end up with a lazy (never calculated if never called for), strict (only calculated once), and very perlish mechanism that is a much better fit with the language than some others that attempt to emulate the source-code semantics of FP languages without recognising that the absence of the optimising compiler renders them horribly inefficient.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" may be good enough for the now, and perfection may be unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.
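The tied-hash idea above can be sketched as follows; the Memo tie class and the recursive Fibonacci definition are illustrative, not from the original post:

```perl
use strict;
use warnings;

# Illustrative tie class: FETCH computes the value on first access
# and returns the cached copy thereafter.
package Memo;

sub TIEHASH {
    my ( $class, $code ) = @_;
    return bless { code => $code, cache => {} }, $class;
}

sub FETCH {
    my ( $self, $key ) = @_;
    $self->{cache}{$key} //= $self->{code}->( $key );
    return $self->{cache}{$key};
}

sub EXISTS { return exists $_[0]{cache}{ $_[1] } }

package main;

my %fib;
my $calls = 0;

# The coderef may itself index the tied hash, so recursive
# definitions are memoized for free.
tie %fib, 'Memo', sub {
    my $n = shift;
    $calls++;
    return $n < 2 ? $n : $fib{ $n - 1 } + $fib{ $n - 2 };
};

print "$fib{10} (after $calls calls)\n";   # 55 (after 11 calls)
print "$fib{10} (still $calls calls)\n";   # cached: still 11 calls
```

Each of the eleven values fib(0) .. fib(10) is computed exactly once; asking for $fib{10} again touches only the cache.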
by Anonymous Monk on Sep 21, 2005 at 20:29 UTC
> For example, memoization may seem like an 'advanced technique' in Perl's terms, but when you consider that every single function in Haskell is automatically memoized by the compiler, and moreover, every parameter of every function (currying) also.

Well, technically this is incorrect. Functions in Haskell are not memoized. You can see that this is trivially true by considering the '+' function defined for the integers: memoizing a measly 1 billion different arguments would easily exhaust the memory of a 32-bit machine. ...(be sure to turn on optimizations!). You're probably thinking of the fact that data structures (like lists) are memoized by nature of the lazy graph-rewriting semantics. See also MemoisingCafs and Functional Programming and Parallel Graph Rewriting.
by BrowserUk (Patriarch) on Sep 22, 2005 at 17:32 UTC
Hmm. I guess I stand corrected, but maybe you would explain the following:
When compiled without optimisations, the program runs out of memory when trying to sum the integers from 1 to 10,000,000. So, as you advised, I compile with optimisations. Now it performs the calculation in around 3 seconds, the first time. It then takes around 2 seconds to calculate the sum of 1 .. 5,000,000. Obviously, the intermediate value was not memoized. But ... ask it to calculate 1 .. 10,000,000 a second time, and it now takes no time at all!

One interpretation of this empirical evidence is that without optimisations enabled, it runs out of memory trying to retain all the intermediate return values; with optimisations enabled, it retains only those values that are returned to the caller, and not the intermediate values. The truth is (probably) more along the lines that the optimiser replaces the recursive algorithm with an iterative loop, thereby doing away with any recursive calls, and therefore also doing away with the memoization of the intermediate function returns, as there are none.

This would be a more convincing argument if I knew how to get more accurate timestamping, but nonetheless, both the empirical evidence and my (shallow) understanding of GHC internals support the statement I made:

> every single function in Haskell is automatically memoized by the compiler

though I should have added a note to the effect that this only works for function calls that are not optimised away. If you have the time/inclination to explain this further I would be happy to have my understanding clarified. If you prefer to do this by email or elsewhere, use the address on my homenode.
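On the timestamping point: a minimal sketch using Perl's core Time::HiRes module, which provides microsecond-resolution wall-clock timing. The in-process summation here is only a stand-in workload; in practice the timed section could be a system() call to the compiled Haskell program:

```perl
use strict;
use warnings;
use Time::HiRes qw( gettimeofday tv_interval );   # core module

my $t0 = [ gettimeofday ];

# Stand-in workload; could instead be e.g. a system() call
# to the compiled program being measured.
my $sum = 0;
$sum += $_ for 1 .. 1_000_000;

my $elapsed = tv_interval( $t0 );                 # seconds, to microsecond resolution
printf "summed to %d in %.6f seconds\n", $sum, $elapsed;
```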
by Anonymous Monk on Sep 26, 2005 at 22:21 UTC