in reply to Re^2: RFC: feature proposal re code in @INC
in thread RFC: feature proposal re code in @INC

Well, I wouldn't put ties and closures on the same level as evals, especially if you mean string-eval.

They're all potentially evil. Try working with code like this. What does it do?

$x = $h { foo($y) }; $z = &$x( $a ); eval $z if ( $b );

I don't even know; and I invented this example. :-)

Note that for large hashes and complex, 1000-line functions with obscure return values, you won't really know what that code actually does; it's very hard to figure out what's going on until runtime (and sometimes not even then -- we've got a special abstraction layer that re-implements a Turing machine in software. Don't ask! And the system's too slow! Go figure!). Welcome to my world. :-(

Note also that if $h is a tied hash, it can change the values of $a and $b. Heck, it could even change the meaning of the code to be evaluated in $z.
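A minimal sketch of that hazard (the class and variable names here are made up for illustration): a tied hash whose FETCH quietly mutates another variable while handing back a coderef.

```perl
use strict;
use warnings;

# Hypothetical illustration: FETCH on a tied hash can have arbitrary
# side effects, including changing variables that look unrelated.
package SneakyHash;
sub TIEHASH { my ($class, $victim) = @_; return bless { victim => $victim }, $class }
sub FETCH {
    my ($self, $key) = @_;
    ${ $self->{victim} }++;          # side effect: bump a variable elsewhere
    return sub { "code for $key" };  # and hand back a coderef
}

package main;
my $b = 0;
my %h;
tie %h, 'SneakyHash', \$b;

my $x = $h{foo};          # looks like a plain hash lookup...
print "b is now $b\n";    # ...but it has silently changed $b to 1
```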

Perl's nature is intrinsically that of a polymorphic, complex, dynamic language with a rich syntax and semantics.

It's been growing ever more complex every year for the last ten years, and I felt it had too much unnecessary complexity back in 1996. Expanding the language rather than cleaning it up and simplifying the messy parts hardly seems a winning proposition to me. It feels like we've been doing a nosedive in the wrong direction... and I worry that Perl 6 is mostly cool new features, without concern for how they'll be (mis)used.

You have to live with that!

Maybe, but I don't have to like it. :-( And it takes a massive amount of lobbying effort to get management to change languages, so I'm probably stuck with Perl; a fringe language like Io will need greater support (and much better documentation) before I'll be allowed to code in it/learn it.
--
Ytrew

Re^4: RFC: feature proposal re code in @INC
by demerphq (Chancellor) on Jan 27, 2006 at 10:56 UTC

    Try working with code like this. What does it do?

    It calls foo with the argument $y to get something which, when stringified, will give us a coderef from a hash. That coderef is then called with an argument $a and is presumably expected to return a code fragment of some sort. We then eval the code fragment, but only if $b is true.

    What it does is pretty clear. Why it is doing what it's doing is not clear. But that's to be expected of a three-line fragment with useless variable names.

    I don't see the point you are making. Although I can see how coderefs in @INC could be a pain to debug.
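    For readers following the thread, a minimal sketch of the feature in question (the module name Generated::Hello is invented for illustration): a coderef pushed onto @INC that synthesizes module source at require time.

```perl
use strict;
use warnings;

# A coderef in @INC is called as ($hook, $filename) for each require/use;
# returning a filehandle makes perl read the module source from it.
BEGIN {
    unshift @INC, sub {
        my ($hook, $filename) = @_;
        return unless $filename eq 'Generated/Hello.pm';
        my $src = "package Generated::Hello;\nsub greet { 'hello' }\n1;\n";
        open my $fh, '<', \$src or die "open: $!";
        return $fh;
    };
}

use Generated::Hello;
print Generated::Hello::greet(), "\n";   # hello
```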

    ---
    $world=~s/war/peace/g

      The point I'm making is that you don't know until run-time what functions are ultimately called, or what code will be executed. That's what you need to know to find out if the code is doing what is wanted, and it's completely unclear from those examples what lines of code will be executed.

      You know the how: the scaffolding that will set up the coderefs and the eval.

      You don't know the what: the important details about what coderef will be executed, which string will be evaled, nor what that code will really *do*. And that can be hard to find out. Tied variables mean that any statement involving variables could do absolutely anything unless I go back to the variable definition (often thousands of lines away), and find out if it's tied, and what it's tied to. Coderefs mean I don't get to know the name of the function being called. Evals mean that the code to be run can be hidden in a string built up entirely at run time.

      Maybe I'm just old fashioned, but I'd rather just see simple function calls with a few simple if statements. If I had that, I'd at least have the call tree as a framework for debugging: as it is, any function in one of many, many modules might be called; or none of them might be called (all dead code, with live code invented at runtime), and it will take a lot of investigation to find out the truth. At least with a call stack, I can just walk down the stack to find out what does what to what, and start making my guesses as to why.

      And, you're right; you also don't know why the code does what it does, nor if it is correct behaviour. That's an additional concern, and a serious one; but not knowing what the code does in the first place makes trying to analyze for correctness hard.

      *shrug* I don't know if I've made my point clearer, but at least I tried.

      --
      Ytrew

        Tied variables mean that any statement involving variables could do absolutely anything unless I go back to the variable definition (often thousands of lines away), and find out if it's tied, and what it's tied to. Coderefs mean I don't get to know the name of the function being called. Evals mean that the code to be run can be hidden in a string built up entirely at run time.

        I get where you are coming from, but this time I can't say I agree with you.

        When reading a piece of code that uses tied variables, I see little difference between not knowing immediately exactly how the value returned is derived and not knowing immediately how the value returned from a subroutine or method is derived. Okay, the name of the subroutine/method may give you some clue, but equally, so should the name of the tied variable. In either case, the name may be spot on or a complete misnomer.

        As for knowing whether the variable is tied or not, and the possibility that the declarations are "often thousands of lines away": with Perl's ability to locally scope stuff, if that is the case, fire the programmer. For subs and methods, the actual code could be thousands of lines away; in a different module; in a different language.

        Perhaps the biggest difference between the two is that with the tied var, you get to name it in a way that makes sense in the context in which it is used, whereas subroutine/method names are decided entirely by the writer of the module providing them, and will therefore tend to be generic names that make sense in terms of what the module does generically, rather than in terms of how you are using it locally.

        Coderefs (otherwise known as 'higher order' and/or 'first class' functions) are (IMO, but also in the opinion of a lot of other people) the greatest innovation in programming since the word "structured" got tagged in front. Again, with suitable naming, there should be little mystery about what the code behind a coderef is doing. And again, the name can be chosen to make sense in terms of the local context rather than some far-off genericity. And if scoping is done properly, you shouldn't have far to backtrack to find out where the actual code lives or is generated.
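        A small sketch of that naming point (make_counter and $next_order_id are invented names): a closure stored in a well-chosen lexical reads almost like a plain function call at the point of use.

```perl
use strict;
use warnings;

# The factory returns a closure over $n; the caller names the coderef
# for its local purpose, not for what the factory does generically.
sub make_counter {
    my $n = shift || 0;
    return sub { return $n++ };
}

my $next_order_id = make_counter(1000);
print $next_order_id->(), "\n";   # 1000
print $next_order_id->(), "\n";   # 1001
```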

        String eval is somewhat different, as reflected by the condemnation it receives when people use it unnecessarily, but there are some times when it is the expedient choice. On those occasions, using reasonable variable names (not $x and $y, and definitely not $a and $b) goes a long way to illuminating the purpose of the code.

        Perhaps the biggest problem with string eval is that the text that shows up in stack traces (e.g. Died at (eval 14) line 1) is less than useful in tracking down where in the body of code the eval statement resides. However, on those rare occasions when string eval is useful, there are ways of providing for better information, and given the rarity, it's quite worth the extra effort.
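        One such way (a sketch; the file label "my-template-output" is made up): embed a #line directive at the top of the generated string, so errors report a meaningful location instead of "(eval 14) line 1".

```perl
use strict;
use warnings;

# The "#line" directive inside the evaled string relabels subsequent
# lines, so $@ points at a name we chose rather than "(eval N)".
my $generated = 'die "boom"';
eval qq{#line 1 "my-template-output"\n$generated};
print $@;   # boom at my-template-output line 1.
```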


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
        Maybe I'm just old fashioned, but I'd rather just see simple function calls with a few simple if statements.

        But then that would not be Perl. It would be a subset of Perl almost certainly guaranteed not to make use of its expressive power. You may as well program in the particular language that subset maps into, for what it's worth. Of course this doesn't scale well with the fact that you may have to program in Perl. I can't comment on that... except that if you applied for a job as a Perl programmer, you are supposed to be familiar with the language and its peculiarities, including (alas!) its corner cases and (double alas!) its misuses.

        Don't misunderstand me: this is not to say that the fears and concerns you expressed in such a colorful manner above are fantasies of yours. I can understand them quite well; however, not only -- as chromatic correctly remarked -- can you "write terrible awful nearly obfuscated code" in less feature-rich languages too, but you can write perfectly clean and maintainable code in a feature-rich language as well, even exploiting those (tricky) features you are scared by. In other words, these are mostly orthogonal concepts!

        Now your claims fundamentally amount to the belief that there is an implication between feature-richness and the tendency to write "bad" code. Of course such a cause-effect relationship does exist, but although it is difficult to quantify this kind of thing, my judgment, and the common perception here, is that it is of much smaller magnitude than you seem to think.

        Actually, in my experience, bad code I had to deal with was not bad because of the (ab)use of "advanced" features. It was bad because of "basic" shortcomings, e.g. no use of strict and warnings, unchecked opens and so on.

        In particular, "bad" code -- in my sense of the word -- indeed often features string evals, always in situations in which they are not needed by any means. And I can understand your concerns with tied variables, although variables do not tie themselves on their own. But I still can't understand what scares you about closures. Care to give an example?