in reply to Re^4: What would you change?
in thread What would you change?

my $thing = 'a' x 10;
my @params = ( 2, 3 );
vec( $thing, @params ) = 0;
That results in "Not enough arguments for vec()." And it illustrates in brilliant colors the stupidity of compile-time checking in a dynamic language.

Frankly, if we wanted to have compile-time checks, we should be telling the compiler what we expect to happen. So, for example, if @params should have 2 and only 2 things, we should be able to say that. Then, vec() knows that if @params makes it to him, it's got two things. But even that sucks, because you'd have to assign two things to @params - you wouldn't be able to build it up using push.
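For what it's worth, the check only fires when the array itself is handed to vec(); unpack the arguments explicitly and the prototype is satisfied. A minimal sketch (note it uses 8 bits rather than the 3 above, since vec() only accepts a power-of-two bit count at runtime):

```perl
use strict;
use warnings;

my $thing  = 'a' x 10;
my @params = ( 2, 8 );    # offset 2, 8 bits (a legal power-of-two count)

# vec( $thing, @params ) = 0;   # compile-time: "Not enough arguments for vec"

# Unpacking the array explicitly satisfies vec()'s prototype:
vec( $thing, $params[0], $params[1] ) = 0;

print $thing, "\n";       # byte 2 is now NUL: "aa\0aaaaaaa"
```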

So, compile-time checks suck.


My criteria for good software:
  1. Does it work?
  2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

Replies are listed 'Best First'.
Re^6: What would you change?
by BrowserUk (Patriarch) on May 19, 2008 at 20:57 UTC
    That results with "Not enough arguments for vec()." And, illustrates in brilliant colors the stupidity of compile-time checking in a dynamic language.

    I see what you mean. And boy, do I ever agree with you regarding attempts to force static language semantics upon dynamic languages.

    Playing with this, my first thought was to disable the prototype checking:

    &vec( $thing, @params ) = 1;

    but that resulted in "Can't modify non-lvalue subroutine call", which came as a complete surprise.

    I never knew that the l-valueness of a subroutine was allied to its prototype. The best alternative I came up with is:

    sub myvec :lvalue { CORE::vec( $_[ 0 ], $_[ 1 ], $_[ 2 ] ) }

    which once you get past the deliberate error ;) in the example:

    my $thing = 'a' x 10; my @params = ( 2, 3 ); myvec( $thing, @p ) = 1;;
    Illegal number of bits in vec

    seems to work fine. Of course, you pay a performance penalty for the indirection, but hey. CPU cycles don't matter :)
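    For completeness, here's the wrapper doing its job once the typo is fixed and the bit count is legal (8 rather than the 3 above, which vec() rejects at runtime):

```perl
use strict;
use warnings;

# lvalue wrapper around the core vec(), bypassing the prototype check
sub myvec :lvalue { CORE::vec( $_[0], $_[1], $_[2] ) }

my $thing  = 'a' x 10;
my @params = ( 2, 8 );       # offset 2, 8 bits per element

myvec( $thing, @params ) = ord 'X';

print $thing, "\n";          # prints "aaXaaaaaaa"
```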


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      never knew that the l-valueness of a subroutine was allied to its prototype

      It's not. &vec calls vec of the package in which you happen to be, not the core function.

      >perl -e"&vec()"
      Undefined subroutine &main::vec called at -e line 1.

      The (presumably non-existent) vec in your current package is not an l-value sub, thus the error message. You can't use & on core functions.
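      To illustrate the point: define a vec in the current package and &vec resolves to it; make that sub an lvalue and both errors go away. A sketch (the shadowing of the builtin's name is deliberate):

```perl
use strict;
use warnings;

# An lvalue sub named vec in the current package; &vec calls this,
# not the builtin (which & cannot reach).
sub vec :lvalue { CORE::vec( $_[0], $_[1], $_[2] ) }

my $thing = "\0" x 4;
&vec( $thing, 0, 8 ) = ord 'A';    # no prototype check with &

print $thing, "\n";                # prints "A\0\0\0"
```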

        Good point and (probably) a good call, but I'm now wondering about the difference between the error message you got and the one I got: Can't modify non-lvalue subroutine call ...?

        I'll have to see if I can reproduce the situation. I was running my REPL at the time.


      The more I think about this vs. Haskell/Scala, the more I come to the conclusion that compile-time checking can actually work in a dynamic language. We just have to be smart about what we're checking. I mean, Haskell has compile-time checking all over the place and it's dynamic.

      I think the point here is that the more side-effect free you can be, the more comprehensive your checking can be. I'm not sure where this meander is going, though.


        I mean, Haskell has compile-time checking all over the place and it's dynamic.

        Hm. Perhaps we are at crossed threads after all. The concept of Haskell being a dynamic language goes right over my head. Indeed, if that article is anything to go by, I'd say that it was damn nearly impossible for Haskell to:

        • extend the program, by adding new code;
        • extend objects and definitions;
        • modify the type system;

        all during program execution.

        I would say that it runs exactly contrary to the entire motivation of Haskell.
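        For contrast, the first item on that list is exactly what Perl's string eval does routinely: compiling and splicing in new code while the program runs, which is precisely what makes ahead-of-time checking so awkward in a dynamic language. A trivial sketch:

```perl
use strict;
use warnings;

# Extend the running program with code that did not exist at compile time.
my $src = 'sub double { return 2 * $_[0] }';
eval $src;
die $@ if $@;

print double(21), "\n";    # prints 42
```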

        It's interesting to conceive of what it would take, and the processing power it would require, for Haskell to eval generated statements at runtime and cross-reference (and infer) the types generated with whatever remains of the type annotations that were present in the source code of the original program.

        AFAIK, the whole basis of term rewriting is that once type compatibility has been verified, the expansions and/or reductions of terms can be directly substituted for the terms they rewrite, and so large chunks of the definitions in the original source code don't make it into the object code in any recognisable form whatsoever. It would therefore be impossible to extend the compile-time type system at runtime.

        I realise that a Haskell program can be written to parse, compile and run (or interpret) a different type system (language; a la pugs), but a Haskell program that could eval code at runtime and then extend its own type system on the fly is a whole different ball of wax.

