in reply to Re^3: Is it worth using Monads in Perl ? and what the Monads are ?
in thread Is it worth using Monads in Perl ? and what the Monads are ?
*They are, emotionally, just a way of concealing the fact that Haskell programs contain code that has state, has side effects and is procedural.*

I'd reword that a little differently. The type system loudly proclaims to everyone that these particular functions are contaminated by state and side effects, so by-golly you'd better don the bio-hazard suit and proceed with caution.
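To put that in concrete terms, here is a throwaway sketch (the function names are invented for illustration, not taken from anyone's real code):

```haskell
-- Pure: the type promises there is no state and no I/O hiding in here.
double :: Int -> Int
double x = x * 2

-- Effectful: the IO in the type is the bio-hazard label.  You cannot
-- call this from pure code without the type checker objecting.
readAndDouble :: IO Int
readAndDouble = do
    line <- getLine                 -- side effect: reads from stdin
    return (double (read line))

main :: IO ()
main = readAndDouble >>= print
```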
*it could be coded in a normal, procedural style with loops and state and side-effects; and without all that faffing around that monads introduce.*

I think there's probably a large subjective element in choosing a favorite programming language. Some people like cats, others like dogs. When I first encountered Haskell, my initial reaction was, "Yes! This is how programming was intended to be done". I no longer needed to faff (?) around with side effects willy-nilly everywhere. No need to (badly) re-invent all the standard procedural constructs -- like try/catch et al.
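Purely as an illustration of what I mean by not re-inventing try/catch -- again a made-up toy, nothing more:

```haskell
-- Failure is just a value; chaining in the Either monad short-circuits
-- on the first Left, which is most of what a try/catch block buys you.
parseAge :: String -> Either String Int
parseAge s = case reads s of
    [(n, "")] | n >= 0    -> Right n
              | otherwise -> Left "age must be non-negative"
    _                     -> Left ("not a number: " ++ s)

classify :: String -> Either String String
classify s = do
    age <- parseAge s                 -- a Left here stops the chain
    return (if age >= 18 then "adult" else "minor")

main :: IO ()
main = mapM_ (print . classify) ["42", "-1", "fish"]
```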
It sounds like you've had the opposite reaction. No harm, no foul. If you don't like broccoli, there is no sense in feeling bad just because others rave about it. And anyone who nags you about the health benefits of broccoli is a bore.
Haskell programmers truly think it is a better way to program. No apologies. (Heck, maybe we're just crazy). And some of us think that Haskell doesn't even go far enough.
Re^5: Is it worth using Monads in Perl ? and what the Monads are ?
by BrowserUk (Patriarch) on Jun 16, 2007 at 22:36 UTC

"... so by-golly ... (Heck, maybe we're just crazy)."

Watch a lot of Scrubs by any chance? :)

First. I completely agree. Each to their own and no harm no foul.

That said, there is a lot of Haskell I like, and despite billing myself upfront as a failed Haskell programmer, I still hold out hope of making progress in using it. Though I may need to transition to it through one or two other languages first. Clean is at the top of my list at the moment. The biggest problems I have with Clean are the IDE and the library documentation/tutorials, which concentrate too much on the graphical IO capabilities.

If you are the same anonymonk who wrote this, then the paper you cite is far and away the best example of the type of criticism I was levelling against many of the Haskell tutorials. Of course, it isn't a tutorial and isn't aimed at a non-math audience, so it can be forgiven for its use of notation and abstract theory without explanation. None the less, it serves as an example. To quote, selectively, from that document:
Much of which is summed up by another quote: "The seemingly close fit between program text and mathematical reasoning accounts for a large part of the appeal of functional languages...."

This is my primary angst with Haskell, and the whole FP movement in general. The theory goes that by making a language resemble existing math notations, and having the compiler use reduction to produce the code, one can go directly from mathematical proof to bug-free code and achieve programming utopia. (Yes. I've exaggerated for effect.)

But let's think about mathematical proofs for a few moments. In the summer of 1993, after seven years of dedicated work, and more than 30 years of increasingly serious and adept casual interest, Andrew Wiles announced that he had climbed the Mount Everest of mathematics and solved the problem that had eluded all of the best minds in his field for the preceding 200 or more years. He had proved Fermat's Last Theorem. It was hailed by the press, and within the mathematics community, as a triumph. But, over the next few months, it was Wiles himself who discovered the flaw in the proof. Of course, he then went on to correct the proof, get it verified by his peers, and will go down in history as the man who climbed that mountain.

But how many peers has he? How many people are there who can actually say they understand his proof well enough to verify it? Another way of asking that question: how many people, if presented with the original and the final proofs, could work out which was which, unless they were already familiar with them? 1000? 100? 10?

Again, that is an extreme example, but the point is that 'proofs' can be wrong. And it is much harder to verify a proof than a program. You can run a program and subject its results to tests. Something you cannot easily do with formal mathematical notation. Of course, a big part of the desire for FP is the ability to have a compiler that takes standard mathematical notation and converts it to a running program. But then, you not only have to verify the notation, you also have to verify the compiler that does the conversion, and the results it produces, and the results that what it produces, produce. And that's where I was coming from when I wrote in my post above:
But it is stated more formally at the end of the Total FP paper you cited:

"Theorem: For any language in which all programs terminate, there are always-terminating programs which cannot be written in it - among these are the interpreter for the language itself."

Going on to conclude:
And that's my problem with much of the hyperbole that surrounds and infuses FP. This paper is saying that "we don't need to deal with errors, exceptions, dirty data etc.", or "need a language that is Turing complete" (elsewhere in the paper) except on "rare occasions", but that just doesn't make sense to me.

Eg. There is that co-recursive proof that a number is odd (or even). And that converts nicely into a pure Haskell program (something like the sketch at the foot of this node). But, even if we assume that the compiler will convert that program into correct code, as soon as we need to fetch that number into the program, rather than embed it, all bets are off. The program may not terminate, because the user never enters a number and hits enter; or the file may contain text, or be empty; or the socket connection may have dropped; or...

In a practical programming language, as opposed to an academic research language, you cannot ignore the inconvenient truth of reality. I remember Pascal in its early days, where 'files' were just internal data structures and you didn't have to deal with the outside world. Great for teaching, but totally impractical for anything commercial. Inevitably, along came Borland with Turbo-Pascal and there were some amazing and important programs written using Pascal. Later (I think), that became Delphi, and (I think) that is still being used.

Haskell is already a powerful, elegant, practical programming language. It doesn't need to sell itself on the basis of lofty, theoretical(*) goals. It is already "condemned to be successful". Like Quantum::Superpositions, once the hype has faded and gone, what you are left with are some very important, and very practical, useful ideas and code that handle the real world with aplomb and stand up alongside other real world languages on that basis.

(* And I would say, unobtainable--but there are a lot of very bright minds, much brighter than me, that are pursuing them, so I'm probably going to be proved wrong!)

What is missing, to my mind and in my experience of looking for on-line Haskell tutorials, is a description of a moderately complex, real-world, warts-and-all problem, and a worked solution. Forget all the cutesy, convenient (even if theoretically interesting and important) examples of Fibonacci series, and quick-sorts, and NDA language parsers, and tree structures. These are just as bad as the corporate/genetic hierarchies examples in the OO world. Work through something useful.

In one or two of SPJ's papers there is a reference to a Haskell-programmed HTTP server: 1500 lines long, with near-Apache-like performance but fewer features. Now that would make a fine basis for a tutorial with a fully worked example. The source code is probably around somewhere for download, but that's much less than half of what's required. What is needed is insight into the mind of the expert Haskell programmer on how they approach tackling such a project. Given the HTTP/1.1 spec as a starting point and the standard library, where do they start? How do they proceed? What mistakes did they make; how did the compiler inform them of those mistakes; how did they interpret those error messages and how did they resolve them?

Now that would be a tutorial that might allow me to make the transition to the mode of thinking required. F*** all the theory behind monads, or the type system, or strict -v- lazy -v- pure. Show me the code. But more importantly, show me how you arrived at it.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
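For what it's worth, the kind of odd/even sketch I have in mind -- my own toy, not code from the paper, with the IO part marking exactly where the guarantees stop:

```haskell
-- The co-recursive pair: pure, total over the naturals, easy to reason about.
isEven :: Integer -> Bool
isEven 0 = True
isEven n = isOdd (n - 1)

isOdd :: Integer -> Bool
isOdd 0 = False
isOdd n = isEven (n - 1)

-- The moment the number has to come from outside, the guarantees stop.
-- Nothing in any type says the user ever presses Enter, or that what
-- arrives is a number rather than text or an empty line.
main :: IO ()
main = do
    line <- getLine
    case reads line :: [(Integer, String)] of
        -- abs, because a negative input would send the pure pair above
        -- into an infinite descent
        [(n, "")] -> putStrLn (if isEven (abs n) then "even" else "odd")
        _         -> putStrLn ("not a number: " ++ line)
```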
by Anonymous Monk on Jun 17, 2007 at 01:05 UTC
*That said, there is a lot of Haskell I like, and despite billing myself upfront as a failed Haskell programmer, I still hold out hope of making progress in using it. Though I may need to transition to it through one or two other languages first.*

Have you taken a look at Prolog? It requires a very different mindset, yet without static typing or monads.

*And it is much harder to verify a proof than a program. You can run a program and subject its results to tests. Something you cannot easily do with formal mathematical notation. Of course, a big part of the desire for FP is the ability to have a compiler that takes standard mathematical notation and converts it to a running program. But then, you not only have to verify the notation, you also have to verify the compiler that does the conversion, and the results it produces, and the results that what it produces, produce.*

Hmm. You don't have to prove anything. You can still run your functional program and test it like you normally would (also take a look at QuickCheck; there's a tiny example at the foot of this node). But now you *get* the option to prove (informally) and reason about your programs if you so desire. It's an extra bonus feature that you don't get with an imperative program (New and Improved! Now with 20% more features!)

Although I'm probably getting *really* OT, here's an excerpt I like from "The Way of Z: Practical Programming with Formal Methods" by Jonathan Jacky:

"Many programmers believe that formal specifications are not useful. They believe that the program text -- the code itself -- can be the only really complete and unambiguous description of what a program does. This view holds that a formal specification is nothing more than the program written over again in another language. It misinterprets Z to be some kind of very high-level programming language."

You can find the answer on page 34 by searching for the book on books.google.com.

*And that's my problem with much of the hyperbole that surrounds and infuses FP.*

I think our biases must be pretty different. Oh, sure, there are going to be some enthusiastic advocates of any language, but other than a few fly-by-night blog posts, I have a hard time seeing the hyperbole that surrounds and infuses FP.

*This paper is saying that "we don't need to deal with errors, exceptions, dirty data etc.", or "need a language that is Turing complete" (elsewhere in the paper) except on "rare occasions", but that just doesn't make sense to me.*

Hmm. Maybe experience comes into play here also. I 100% agree with the paper (incidentally, it's one of my favorite CS papers, my number one favorite probably being "Can Programming be Liberated from the von Neumann Style?"). Most of the programs I write (in any language, and I use plenty of Perl) are about (guesstimating) 90+% purely functional in nature (engineering analysis mostly). In fact, I don't think I've ever professionally written a program where I didn't think I had at least a rough idea of the complexity of the algorithm.

*Show me the code.*

Maybe something like xmonad is real world, yet small enough to get your feet wet?
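The tiny QuickCheck example promised above -- a toy property, nothing to do with the paper or the book:

```haskell
import Test.QuickCheck

-- One property, stated once; QuickCheck generates 100 random lists and
-- reports a (shrunk) counter-example if the property ever fails.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice
```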
by BrowserUk (Patriarch) on Jun 17, 2007 at 01:29 UTC
For your test, I ran this:
The terminating value for the test loop may indicate to you that I had a good idea of what it was doing before I ran it. I still haven't looked up the book. I'd term f as 'the integer root', but there is probably some proper term for that. Now I'll turn the page...back. So, not far wrong, but did I pass the test? I read on a few pages and it makes great play of how the Z axiomatic definition carries more information--such as the possible ranges of inputs and outputs from f()---but that ignores that the C definition could also have captured that information.
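For anyone reading along, here is my own toy reconstruction of the kind of integer-root function in question -- a guess at its shape only; it is neither the book's code nor the test program I actually ran:

```haskell
import Data.Int (Int32)

-- Integer root by counting how many successive odd numbers fit under n
-- (the sum of the first r odd numbers is r*r).  'total' plays the role
-- of the running sum discussed below.
introot32 :: Int32 -> Int32
introot32 n = go 0 1 0
  where
    go total oddN root
      | total + oddN > n = root
      | otherwise        = go (total + oddN) (oddN + 2) (root + 1)

introotBig :: Integer -> Integer
introotBig n = go 0 1 0
  where
    go total oddN root
      | total + oddN > n = root
      | otherwise        = go (total + oddN) (oddN + 2) (root + 1)

main :: IO ()
main = do
    print (introotBig 2147488281)   -- 46341: fine with unbounded Integers
    print (introot32 maxBound)      -- wrong answer: the running sum wraps
                                    -- negative near 2**31, so the loop
                                    -- keeps going and overshoots
```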
And actually, the C definition carries more information. For one thing, on practical computer systems, variables are of finite size and can therefore only deal with finite ranges of values. It may be mathematically convenient to reason about functions in terms of the infinite range of natural numbers, but in practice that reasoning falls down, because real hardware overflows, underflows and otherwise does not behave as mathematical theory would have it behave. The Z definition won't tell you that, for inputs above 2**31 (using the original C formulation on a 32-bit processor), you are going to get bizarre outputs because the intermediate term sum will overflow. In the Total FP paper, the author says:
Yeah, right! Good luck with that. Now how hard do I have to think to come up with a scenario where the undetected, unreported, silent conversion of erroneous input into a mathematically convenient lie causes the reactor to go critical, or the patient to receive a massive dose of something lethal? But the mathematicians are happy, so what the hey! Again, just a dramatisation.

*Have you taken a look at Prolog?*

Yes. I did a Prolog course at college back before the dawn of time. And more recently, about 10 years ago, I had to do some real work with an inferencing engine that used a dialect of Prolog. It does take a very different mindset. And that last time, I frequently wrote short brute-force C programs to verify the results I was getting from the inferencing engine. They usually ran longer, but they were much faster to produce and I had far more confidence in the results.

*Oh, sure, there are going to be some enthusiastic advocates of any language, ...*

It's not "enthusiastic advocates" that disturb me.
IMO Haskell should be downplaying the arguments about whether the rules underlying Monads are correct, or whether MonadPlus is better. And whether Lazy is a good or a bad thing.

I didn't get to go through the xmonad stuff, but there doesn't appear, at first glance at the page you linked, to be any tutorial associated with it? I did say show me the code, but went on to say "But more importantly, show me how you arrived at it."

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by Anonymous Monk on Jun 17, 2007 at 04:35 UTC
by BrowserUk (Patriarch) on Jun 17, 2007 at 06:03 UTC
by BrowserUk (Patriarch) on Jun 17, 2007 at 12:58 UTC
I dearly hope that you will come back and see this. I cannot thank you enough for the link to Backus. I understand why this is your favorite. I think it may well become one of mine. Actually, I think it already is, and I haven't finished reading it yet.

What a refreshing change from most of the other papers I have been reading. Clear points, made with clear explanations and clear examples. Where he uses algebraic notation, each symbol is briefly described, in-situ. By the time I finished section 11, I understood why I so want to like FP. Simplicity, brevity, clarity.

Thank you. 30 years on, his presentation style shows why the greats are the greats.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by Anonymous Monk on Jun 17, 2007 at 15:36 UTC
by BrowserUk (Patriarch) on Jun 17, 2007 at 19:15 UTC