in reply to Re^6: Is it worth using Monads in Perl ? and what the Monads are ?
in thread Is it worth using Monads in Perl ? and what the Monads are ?
For your test, I ran this:
#! perl -slw
use strict;

sub f {
    my $a = shift;
    my $term = 1;
    my $sum  = 1;
    my $i;
    for( $i = 0; $sum <= $a; $i++ ) {
        $term = $term + 2;
        $sum  = $sum + $term;
        # print "i:$i term:$term sum:$sum";
    }
    return $i;
}

print "f($_) = ", f( $_ ) for -4 .. 36;
The terminating value for the test loop may indicate to you that I had a good idea of what it was doing before I ran it. I still haven't looked up the book. I'd term f as 'the integer root', but there is probably some proper term for that.
Now I'll turn the page...back.
So, not far wrong, but did I pass the test? I read on a few pages and it makes great play of how the Z axiomatic definition carries more information--such as the possible ranges of inputs and outputs from f()--but that ignores that the C definition could also have captured that information.
unsigned int f( unsigned int a ) {
    unsigned int i, term, sum;
    term = 1;
    sum  = 1;
    for( i = 0; sum <= a; i++ ) {
        term = term + 2;
        sum  = sum + term;
    }
    return i;
}
And actually, that carries more information. For one thing, on practical computer systems, variables are of finite size and can therefore only deal with finite ranges of values. It may be mathematically convenient to reason about functions in terms of the infinite range of natural numbers, but in practice, that reasoning falls down because real hardware overflows, underflows and otherwise does not behave as mathematical theory would have it behave.
The Z definition won't tell you that for inputs above 2**31 (using the original C formulation on a 32-bit processor) you are going to get bizarre outputs, because the intermediate value of sum will overflow.
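To make that concrete without waiting around for a couple of billion iterations, here's the same algorithm at 16-bit width--a sketch of my own, not the book's code--where the wrap-around and the bizarre results show up immediately:

/* A 16-bit analogue of the same function (my own sketch, not from the book)
 * to make the wrap-around quick to demonstrate.  The largest square that
 * fits in a uint16_t is 255*255 = 65025, so for any input of 65025 or above
 * the terminating value of sum (65536) no longer fits: sum wraps to 0 and
 * the "integer root" returned is nonsense.  For a = 65535 the loop never
 * exits at all, which is the only reason for the safety cap on i below.
 * The 32-bit version fails the same way, just with much bigger numbers.
 */
#include <stdio.h>
#include <stdint.h>

static unsigned f16( uint16_t a ) {
    uint16_t term = 1, sum = 1;
    unsigned i;
    for( i = 0; sum <= a && i < 1000000; i++ ) {
        term = term + 2;
        sum  = sum + term;      /* silently wraps modulo 2**16 past 65535 */
    }
    return i;
}

int main( void ) {
    printf( "f16(65024) = %u\n", f16( 65024 ) );   /* 254 -- still correct   */
    printf( "f16(65025) = %u\n", f16( 65025 ) );   /* wraps; bizarre result  */
    printf( "f16(65535) = %u\n", f16( 65535 ) );   /* only the cap stops it  */
    return 0;
}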
In the Total FP paper, the author says:
RULE 1) All case analysis must be complete. So where a function is defined by pattern matching, every constructor of the argument type must be covered and in a set of guarded alternatives, the terminating ‘otherwise’ case must be present. In the same spirit any built in operations must be total. This will involve some non-standard decisions - for example we will have
0 / 0 = 0
Yeah, right! Good luck with that.
Now how hard do I have to think to come up with a scenario where the undetected, unreported, silent conversion of erroneous input into a mathematically convenient lie causes the reactor to go critical or the patient to receive a massive dose of something lethal? But the mathematicians are happy, so what the hey! Again, just a dramatisation.
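For what it's worth, here is that objection in code form--a toy sketch of my own, nothing from the paper, with all the names and numbers invented:

/* My own toy illustration, not from the paper: a "total" division in the
 * spirit of Rule 1 silently turns two failed readings (0 and 0) into a
 * plausible-looking result of 0 -- no trap, no NaN, no error flag for the
 * downstream code to notice.
 */
#include <stdio.h>

static double total_div( double num, double den ) {
    return den == 0.0 ? 0.0 : num / den;   /* 0 / 0 = 0, as the rule demands */
}

int main( void ) {
    double measured = 0.0;   /* failed sensor reported 0 */
    double expected = 0.0;   /* ditto                    */

    /* The "ratio" looks like a perfectly ordinary number downstream. */
    printf( "ratio: %f\n", total_div( measured, expected ) );
    return 0;
}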
Have you taken a look at Prolog?
Yes. I did a Prolog course at college back before the dawn of time. And more recently, about 10 years ago, I had to do some real work with an inferencing engine that used a dialect of Prolog. It does take a very different mindset. And that last time, I frequently wrote short brute-force C programs to verify the results I was getting from the inferencing engine. They usually ran longer, but they were much faster to produce and I had far more confidence in the results.
Oh, sure, there are going to be some enthusiastic advocates of any language,
It's not "enthusiastic advocates" that disturb me.
This claim is not restricted to appearing in fly-by-night blogs of enthusiastic advocates. I think this claim is bogus. I think I proved it was bogus using evidence from one of your favorite papers above.
It may increase either or both for those parts of Haskell programs that are purely functional, and for mathematicians and those who are used to thinking in that way.
But, as I've alluded to elsewhere in the thread, Haskell programs--every useful Haskell program--do have side effects. Haskell may deal with them better than many other FP languages, and possibly other non-FP languages, but they are still there.
But is there any hard evidence that useful Haskell programs are more bug free than programs written in other languages?
It may do so for expert Haskell programmers, but if the World Programming Council banned all programming languages except pure lazy functional languages tomorrow, how productive would the world's (1 million?) programmers be the day after? Or the week after? Or 1 year from now?
How many of the world's programmers would successfully make the transition from imperative to functional programming?
Even when they had, they would still have to program in a world where data gets corrupted; where strings have to mutate into integers and reals; where disks fail; where communications channels go away; where files are bigger than memory and have the wrong line endings; where DoS attacks abound; where inputs are accidentally supplied in inches instead of millimetres.
Will their conversion to Haskell make them any more capable of dealing with these situations?
Even within the field of Maths itself, most of the best math packages are still written in Fortran.
IMO the Haskell community should be downplaying the arguments about whether the rules underlying Monads are correct, or whether MonadPlus is better, or whether laziness is a good or a bad thing.
I didn't get to go through the Xmonad stuff, but there doesn't appear, at first glance at the page you linked, to be any tutorial associated with it? I did say show me the code, but went on to say "But more importantly, show me how you arrived at it."