http://qs1969.pair.com?node_id=427884


in reply to Re^7: Better mousetrap (getting top N values from list X)
in thread Better mousetrap (getting top N values from list X)

Corrected. Thanks.

You do see the contradiction though?

For an algorithm that produces a subset m of a set N, duplicating N so that you can produce the subset lazily always consumes more storage--not to mention the overhead of tying the iterator, retaining its state, etc.
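For concreteness, here is a minimal sketch of the bounded-storage alternative being argued for: keep only the m candidates seen so far, so storage is proportional to m rather than to a duplicated N. (The name topN and this particular fold are my own illustration, not code from the thread.)

```haskell
import Data.List (foldl', insert)

-- Return the n largest values of a list, keeping at most n values
-- in the accumulator at any time -- the input is never duplicated.
topN :: Ord a => Int -> [a] -> [a]
topN n = foldl' step []
  where
    step acc x
      | length acc < n = insert x acc        -- buffer not yet full
      | x > head acc   = tail (insert x acc) -- evict the current minimum
      | otherwise      = acc                 -- x too small; ignore it
```

The accumulator is kept sorted ascending, so its head is always the current minimum and eviction is O(1) after the ordered insert.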

<soapbox>

That's one of my bugbears with FP. They claim it produces provably correct code by avoiding side effects, mutable variables and so forth, but what they gloss over is that you still retain state; you just put it all on the program stack, a continuation stack, or some other internal stack where the programmer cannot see it, test it, or verify it. Or control it.

Until they find a way to prove that the FP interpreter or compiler (usually written in C!) cannot corrupt its own stack, every other claim of provability is built on sand.

</soapbox>


Examine what is said, not who speaks.
Silence betokens consent.
Love the truth but pardon error.

Re^9: Better mousetrap (getting top N values from list X)
by sleepingsquirrel (Chaplain) on Feb 04, 2005 at 02:28 UTC
    Until they find a way to prove that the FP interpreter or compiler (usually written in C!) cannot corrupt its own stack, every other claim of provability is built on sand.
    Sure, if the compiler has a bug, you're screwed. But how is that any different from the non-FP case? It would be heaven-on-earth if we only had to worry about compiler bugs. Just compare how many bugs you find in your own code versus bugs in perl. There's a good reason why there are fewer bugs in a compiler than in other code written in that language: more people use the compiler, in more different ways. And most FP compilers (Common Lisps, Haskell, OCaml, etc.) are self-hosted, if for no other reason than to show the compiler writers are willing to eat their own dog food.


    -- All code is 100% tested and functional unless otherwise noted.
      It would be heaven-on-earth if we only had to worry about compiler bugs.

      Ah! But the promise of provably correct code does not imply that you'll never code any errors. It just means that if you stick to using proven-correct implementations of provably correct algorithms, then you should only suffer errors at the hands of the compiler, the hardware, or cosmic rays.

      But first you have to

      • break down your real world application into a combination of provably correct algorithms.

        Even if your application is such that this can be done--doing it is very, very hard.

      • And if any parts of it cannot be satisfied by the set of provable algorithms available, then you have to set about producing provably correct algorithms for those parts.

        This is even harder.

      • When you have that out of the way, you then have to implement those algorithms--and then prove that the implementations are correct.

        The theory goes that once you have the algorithm proved, this step should be simple. I contest that.

      But then you have to look at what a provably correct algorithm looks like. The simplest one I have seen is the one that determines how long a string is: you store the string as a list of characters and then determine the length of the list recursively. The length of the list is 1 + the length of the tail of the list, with the empty list having length 0.

      Now, if you have to store every string as a (linked?) list of its individual characters, and then count them recursively every time you need to know how long that string is--I'm thinking lots of memory and very slow. Imagine trying to process huge XML files that way.
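      For reference, the recursive definition being described does look exactly like this in Haskell, where String is literally a linked list of Char (the name strLength is mine; the standard length works the same way):

```haskell
-- String in Haskell is [Char], a singly linked list,
-- so finding its length means walking every cons cell.
strLength :: String -> Int
strLength []     = 0                 -- base case: empty list has length 0
strLength (_:cs) = 1 + strLength cs  -- 1 + length of the tail
```

      Each call is O(n) in the length of the string, which is the cost being complained about above: the length is recomputed by traversal rather than stored alongside the data.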

      And most FP compilers (common lisps, haskell, ocaml, etc.) are self-hosted if for no other reason than it shows the compiler writers are willing to eat their own dog food.

      Maybe I am playing with the wrong Haskell implementation, but Hugs98 certainly isn't self-hosted. It is written in C, and from what I've looked at, it certainly doesn't process its source code in terms of lists. In fact, the C source appears to be pretty tightly coded.

      And it is far from fast relative to Perl, for example. I cannot imagine it would be any faster if it were written in Haskell.

      It would be an interesting exercise to compare processing a huge XML file with Haskell against processing the same file with Perl--even using a native Perl parser--but of course you can't, because you cannot get hold of an XML parser written in Haskell. A gazillion implementations of all the classic CS algorithms, but nothing that does real processing in the real world--as evidenced by my searches to date, anyway.

      I'd love to eat my words on this. If you know of a Haskell implementation of an XML library (or any other seriously heavy real-world task) I'd love to take a look at it. Everything I have found so far is pretty simple (the algorithms, that is; not the code).


      Examine what is said, not who speaks.
      Silence betokens consent.
      Love the truth but pardon error.
        And as long as we're talking Perl 6 and Haskell, I thought we should also mention Pugs, the beginning of a Perl 6 interpreter written in Haskell.


        -- All code is 100% tested and functional unless otherwise noted.
        But the promise of provably correct code does not imply that you'll never code any errors.
        Beware of bugs in the above code; I have only proved it correct, not tried it.
                            -- Donald E. Knuth in a memo to Peter van Emde Boas