in reply to Re^4: GHC shuffle more efficient than Perl5.
in thread Why is the execution order of subexpressions undefined?

Thank you, Autrijus, for clarifying what I was (clumsily) trying to say.

Once you accept that side-effects are impossible to avoid, the question becomes how to handle them: you can conceal them, segregate them, and move them elsewhere, but you cannot eliminate them. You have to deal with them.

The distinction between Haskell (and FP languages in general) and imperative languages (exemplified by Perl) lies in how you choose to deal with them.

My only statement that could be considered "against" Haskell is that, personally, I prefer the permissive syntax and semantics of Perl as my tool for doing so to what I consider (in my admittedly brief experience of them) the restrictive syntax and semantics of Haskell and other FP languages.

That is a biased viewpoint: biased in part by my greater experience with imperative languages in general and Perl in particular, and in part by my feelings and opinions regarding the barrier that viewing the world through the opaque glasses of mathematical notation forms between practitioners and the general (and even the programming) population.

I see the need for mathematical notation. Just as I believe it is necessary for programmers to read, understand and use the full expressive power of their chosen computer languages, so I see that, for the mathematician, the short-hand of math notation is the tool that allows them to express and convey (to other equally conversant practitioners) their ideas and concepts.

But, just as the artist has no need for the constraints and precision of the draughtsman's tools to express and encapsulate his/her ideas, I don't think the programmer needs the tools and nomenclature of the mathematician to express and encapsulate theirs.

It may be mathematically correct to view the length of a string as a recursive function that processes the string as a list of characters, counting them until the list is empty. And it may well be that a compiler can optimise this operation to the point where the counting is only ever done once. But a compiler that reduces the operation to storing the length as state alongside the string, adjusting it whenever the length of that string changes, whether through visible or concealed side-effect, is doing exactly what Perl does. Perl just advertises that fact rather than concealing it.
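A minimal sketch of that recursive view, in standard Haskell (the name myLength is mine, chosen to avoid clashing with the Prelude's length):

```haskell
-- Length as a recursive function over a list of characters:
-- count one for the head, then recurse on the tail until empty.
myLength :: String -> Int
myLength []     = 0
myLength (_:xs) = 1 + myLength xs
```

GHC is free to rewrite this (for example, into an accumulator loop), because the function is pure: no observable order of operations is specified.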

In the final analysis, it comes down to which syntax you prefer to use. Through my inherent bias, which we all have, I prefer the imperative view of the world to the functional. I find it more natural.

But neither my preference, nor my stating of that preference on this site, dedicated as it is to Perl, in any way belittles the power of FP, anyone's preference for FP, or the frankly awe-inspiring use you have made of that power for the good of Perl.

I am in awe. Of it, and you, and the use you have made of it.

You have my thanks for your work and the benefits that it has already accrued and continues to accrue to the future of Perl.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco.
Rule 1 has a caveat! -- Who broke the cabal?

Replies are listed 'Best First'.
Re^6: GHC shuffle more efficient than Perl5.
by audreyt (Hermit) on Apr 23, 2005 at 08:01 UTC
    Thanks for your kind words. In light of this thread's topic, I'd like to add that, because functions in pure FP never have side-effects, their reduction order can be entirely undefined. Actions defined by those pure functions, however, are defined as always sequential when main is executed.

    This fact makes concurrency and automatic parallelization much, much easier to reason about; as an example, you can trivially write deadlock-free, composable, highly efficient concurrent code, using shared variables and inter-thread channels, without worrying about side effects going astray.

    See http://homepages.inf.ed.ac.uk/wadler/linksetaps/slides/peyton-jones.ppt for more information... I think it will be enlightening :-)
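    A small sketch of the distinction audreyt draws (the names f, g and pureSum are illustrative, not from the thread):

```haskell
-- Pure subexpressions: GHC may reduce f and g in either order
-- (or not at all, under laziness) -- the result is the same.
f, g :: Int -> Int
f x = x * 2
g x = x + 1

pureSum :: Int
pureSum = f 10 + g 10   -- reduction order is unobservable

-- IO actions, by contrast, are sequenced by the monad:
main :: IO ()
main = do
  putStrLn "first"      -- always runs before...
  putStrLn "second"     -- ...this one
  print pureSum
```

    The two calls in pureSum could be evaluated in any order precisely because nothing can observe the difference; the two putStrLn actions cannot be reordered, because main's sequencing is part of its meaning.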

      I read the STM slides you linked and I have a follow up question:

      Slide 26 says "STM is aimed at shared memory [applications]..." and slide 31 says "You can't do IO in a memory transaction...", which leads me to ask, what is the point of concurrency that cannot do IO?

      The basic premise of concurrency is to overlap communications with computations. You utilise the time spent communicating to do something else useful.

      In the case of the example used in the slides, that of a banking system, the instructions for increasing and decreasing an account's total would, in the real world, come from external systems--clearing houses, ATMs, cheque processing in branch work rooms, etc.--and any but the smallest bank would have to use multiple (many) systems to process the volumes of transactions and accounts in real time.

      If STM is restricted to dealing only with in-memory concurrency, it seems to be of little real-world application beyond simulations?


        Well, the trick is that you can go into and out of the STM monad from the IO monad at any time, since STM is essentially a subset of IO. So you enter STM when concurrent atomicity is required, and do the real IO (say, writing to the screen) outside it.

        But it is true that, although STM can automatically scale over SMP settings, it still assumes an essentially shared-memory model; that is why it's called a concurrency tool rather than a (cross-machine) parallelizing tool, which has other fault-tolerance factors to consider.

        However, for its targeted use (that is, a compelling replacement for select loops and thread locks/semaphores), STM is still damn useful.
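        A hedged sketch of that in/out pattern, using the banking example from the slides (account names and amounts are mine; the API is GHC's stm package):

```haskell
import Control.Concurrent.STM

-- An account is just a transactional variable holding a balance.
type Account = TVar Int

-- The transfer runs inside STM: both updates commit atomically,
-- or neither does.
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)   -- enter STM for atomicity...
  balA <- readTVarIO a           -- ...then back in IO for real
  balB <- readTVarIO b           -- effects such as printing
  print (balA, balB)
```

        The instruction to transfer can arrive over any IO channel (a socket from an ATM, say); only the balance update itself needs to be inside atomically.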

        Um, "overlapping computation and communication" isn't the point of concurrency *at all*. The point of concurrency is that transistors (and thus parallel computation) are incredibly cheap, but that specifying a program as "do X *then* do Y (etc.)" forces all those transistors to sit idle waiting for X to complete before they can start doing Y. That's of tremendous real-world importance. The difficulty of writing parallel applications is the *only* reason we don't all have 8 or so processors on our machines, executing our "real world apps" 1-8 times faster. Instead we spend a tremendous number of transistors "guessing" what might be done next, because we can't "actually" do Y until X has finished.
      ... because functions in pure FP never have side-effects, their reduction order can be entirely undefined. Actions defined by those pure functions, however, are defined as always sequential when main is executed.

      Yes. In one sentence you have summed up the requirement for a defined execution order in languages, or those parts of a language, that have side-effects. Where it gets awkward is seeing how that enables the parallelisation of functions that can have side-effects when they are part of the same expression.


Re^6: GHC shuffle more efficient than Perl5.
by Anonymous Monk on Apr 23, 2005 at 14:09 UTC
    One way in which Haskell is potentially less restrictive is the freedom to define your own monads - while some need a degree of compiler or FFI support (IO, ST, STM, etc.), many other useful monads can be built on top of existing language features and provide useful semantics for whatever problem you have at hand. For a couple of domain-specific examples, the Parsec library is good, and many type checkers use a monad to provide unification and related services.
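    To illustrate that freedom, here is a hypothetical user-defined monad (the Logger name and API are mine, a simplified form of the standard Writer pattern) - no compiler support required, just instance declarations:

```haskell
-- A computation that produces a value plus an accumulated log.
newtype Logger a = Logger { runLogger :: (a, [String]) }

instance Functor Logger where
  fmap f (Logger (a, w)) = Logger (f a, w)

instance Applicative Logger where
  pure a = Logger (a, [])
  Logger (f, w1) <*> Logger (a, w2) = Logger (f a, w1 ++ w2)

instance Monad Logger where
  Logger (a, w) >>= f =
    let Logger (b, w') = f a in Logger (b, w ++ w')

-- Record a message alongside a computation.
logMsg :: String -> Logger ()
logMsg s = Logger ((), [s])

-- Ordinary do-notation now works for this monad too.
double :: Int -> Logger Int
double x = do
  logMsg ("doubling " ++ show x)
  pure (x * 2)
```

    The point is the same as with Parsec: the monad gives you domain-specific sequencing semantics (here, log accumulation) while reusing the language's generic do-notation.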