in reply to Re^3: RFC: feature proposal re code in @INC
in thread RFC: feature proposal re code in @INC
Try working with code like this. What does it do?
It calls foo with the argument $x to get something which, when stringified, will give us a coderef from a hash. That coderef is then called with an argument $a and is presumably expected to return a code fragment of some sort. We then eval the code fragment, but only if $b is true.
What it does is pretty clear. Why it is doing what it's doing is not clear. But that's to be expected of a three-line fragment with useless variable names.
I don't see the point you are making. Although I can see how coderefs in @INC could be a pain to debug.
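The snippet under discussion is not quoted here, so the following is only a hypothetical reconstruction matching the description above (foo($x) yields a hash key, the hash holds coderefs, the chosen coderef builds a string of code, and that string is evaled only if $b is true). All names besides $x, $a, and $b are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical stand-ins for the pieces described in the post.
my %table = (
    key => sub { my $n = shift; return "print $n + 1, chr(10);" },
);
sub foo { my $arg = shift; return 'key' }    # pretend this does real work

my ( $x, $a, $b ) = ( undef, 41, 1 );

# One line, four layers of indirection: a call, a hash lookup,
# a coderef call, a string eval, and a statement modifier.
eval $table{ foo($x) }->($a) if $b;          # prints 42
```

Even with everything visible in one file, working out what will actually run takes several mental hops, which is the complaint being made.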
Re^5: RFC: feature proposal re code in @INC
by Anonymous Monk on Jan 27, 2006 at 16:36 UTC
You know the how: the scaffolding that will set up the coderefs and the eval. You don't know the what: the important details about which coderef will be executed, which string will be evaled, nor what that code will really *do*. And that can be hard to find out.

Tied variables mean that any statement involving variables could do absolutely anything unless I go back to the variable definition (often thousands of lines away) and find out if it's tied, and what it's tied to. Coderefs mean I don't get to know the name of the function being called. Evals mean that the code to be run can be hidden in a string built up entirely at run time.

Maybe I'm just old fashioned, but I'd rather just see simple function calls with a few simple if statements. If I at least had that, I'd have the call tree as a framework for debugging. As it is, any function in one of many, many modules might be called; or none of the functions might be called (all dead code, with live code invented at runtime), and it will take a lot of investigation to find out the truth. At least with a call stack, I can just walk down the stack to find out what does what to what, and start making my guesses as to why.

And, you're right; you also don't know why the code does what it does, nor whether it is correct behaviour. That's an additional concern, and a serious one; but not knowing what the code does in the first place makes trying to analyze it for correctness hard.

*shrug* I don't know if I've made my point clearer, but at least I tried.
by BrowserUk (Patriarch) on Jan 27, 2006 at 18:33 UTC
Tied variables mean that any statement involving variables could do absolutely anything unless I go back to the variable definition (often thousands of lines away), and find out if it's tied, and what it's tied to. Coderefs mean I don't get to know the name of the function being called. Evals mean that the code to be run can be hidden in a string built up entirely at run time.

I get where you are coming from, but this time I can't say I agree with you. When reading a piece of code that uses tied variables, I see little difference between not knowing immediately exactly how the returned value is derived and not knowing immediately how the value returned from a subroutine or method is derived. Okay, the name of the subroutine/method may give you some clue, but equally, so should the name of the tied variable. In either case, the name may be spot on or a complete misnomer.

As for knowing whether the variable is tied or not, and the possibility that the declaration is "often thousands of lines away": with Perl's ability to locally scope stuff, if that is the case, fire the programmer. For subs and methods, the actual code could be thousands of lines away; in a different module; in a different language. Perhaps the biggest difference between the two is that with the tied var, you get to name it in a way that makes sense in the context in which it is used, whereas subroutine/method names are decided entirely by the writer of the module providing them, and will therefore tend to be generic names that make sense in terms of what the module does generically, rather than in terms of how you are using it locally.

Coderefs (otherwise known as 'higher order' and/or 'first class' functions) are (IMO, but also in the opinion of a lot of other people) the greatest innovation in programming since the word "structured" got tagged in front. Again, with suitable naming, there should be little mystery about what the code behind a coderef is doing. And again, the name can be chosen to make sense in terms of the local context rather than some far-off genericity. And if scoping is done properly, you shouldn't have far to backtrack to find out where the actual code lives or is generated.

String eval is somewhat different, as reflected by the condemnation it receives when people use it unnecessarily, but there are times when it is the expedient choice. On those occasions, using reasonable variable names (not $x and $y, and definitely not $a and $b) goes a long way to illuminating the purpose of the code. Perhaps the biggest problem with string eval is that the text that shows up in stack traces (eg. "Died at (eval 14) line 1") is less than useful for tracking down where in the body of code the eval statement resides. However, on those rare occasions when string eval is useful, there are ways of providing better information, and given the rarity, it's quite worth the extra effort.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
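The post doesn't name the technique for "providing better information", but the standard one is a #line directive prepended to the evaled string, which relabels the file and line Perl reports in error messages. The "config.tmpl" name and line 42 below are invented labels standing in for wherever the string was really built:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The generated code, whatever produced it.
my $body = 'die "something went wrong";';

# Tag it so errors point at a meaningful location instead of "(eval N)".
my $tagged = qq{#line 42 "config.tmpl"\n} . $body;

eval $tagged;
print $@;   # prints: something went wrong at config.tmpl line 42.
```

Without the directive, the same die reports "at (eval 1) line 1", which tells you nothing about which of many eval sites produced the failing string.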
by blazar (Canon) on Jan 28, 2006 at 15:38 UTC
Maybe I'm just old fashioned, but I'd rather just see simple function calls with a few simple if statements.

But then that would not be Perl. It would be a subset of Perl almost certainly guaranteed not to make use of its expressive power. You may as well program in the particular language that subset maps into, for what it's worth. Of course this doesn't scale well with the fact that you may have to program in Perl. I can't comment on that... except that if you applied for a job as a Perl programmer, you should be expected to be familiar with the language and its peculiarities, including (alas!) its corner cases and (double alas!) its misuses.

Don't misunderstand me: this is not to say that the fears and concerns you expressed in such a colorful manner above are fantasies of yours. I can understand them quite well. However, not only -as chromatic correctly remarked- can you "write terrible awful nearly obfuscated code" in less feature-rich languages too, but you can also write perfectly clean and maintainable code in a feature-rich language, even exploiting those (tricky) features you are scared by. In other words, these are mostly orthogonal concepts!

Now your claims fundamentally amount to the belief that there's an implication between feature-richness and the tendency to write "bad" code. Of course such a cause-effect relationship does exist, but although it is difficult to quantify this kind of thing, my judgment, and the common perception here, is that it is of a much, much smaller magnitude than you seem to think. Actually, in my experience, bad code I had to deal with was not bad because of the (ab)use of "advanced" features. It was bad because of "basic" shortcomings, e.g. no use of strict and warnings, unchecked opens and so on. In particular, "bad" code -in my acceptation- indeed often features string evals, always in situations in which they are not needed by any means.

And I can understand your concerns with tied variables, although variables do not tie themselves on their own. But I still can't understand what scares you in closures. Care to give an example?
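To make the point that "variables do not tie themselves on their own" concrete, here is a minimal tied scalar (the class name Upper is invented for illustration). The tie call is explicit at the point where the variable is set up, not hidden:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A tiny tie class: stores values uppercased.
package Upper;
sub TIESCALAR { my ($class) = @_; my $val; return bless \$val, $class }
sub STORE     { my ( $self, $v ) = @_; $$self = uc $v }
sub FETCH     { my ($self) = @_; return $$self }

package main;
tie my $shout, 'Upper';   # the magic is declared right here, in plain sight
$shout = 'hello';         # goes through Upper::STORE
print $shout, "\n";       # goes through Upper::FETCH; prints HELLO
```

A reader who follows the declaration of $shout finds the tie immediately, which is blazar's point: the confusion arises only when declarations drift far from their uses.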
by Anonymous Monk on Jan 31, 2006 at 19:23 UTC
A technical writer doesn't set out to make use of all the "expressive power" of the English language; (s)he seeks clarity and aims to be understood by his/her target audience. So, too, a programmer should aim for clarity, and only write what will be understood. In both cases, using simple language wherever possible is going to make life easier for the person trying to understand later on.

Now your claims fundamentally amount to the belief that there's an implication between feature-richness and tendency to write "bad" code. Of course such a cause-effect relationship does exist, but although it is difficult to quantify this kind of things, my judgment, and the common perception here, are that it is of a much much smaller entity than you seem to think.

I can only speak for my own experiences, but they've been almost uniformly bad. It's not just that feature-richness produces a tendency to write bad code; it's the general fact that giving someone more than they need becomes awkward. Look at the tendency for long cords to become tangled; it's the same principle with coding -- more is not better. In both cases, the snarls and tangles don't have to happen, but they do happen on a regular basis.

Actually, in my experience, bad code I had to deal with was not bad because of the (ab)use of "advanced" features. It was bad because of "basic" shortcomings, e.g. no use of strict and warnings, unchecked opens and so on.

Those are very annoying problems to solve, but not hard ones. Depending on the needs of the program, you might, for example, replace all uses of open() with a version that does exception handling of some sort (dies, warns, or throws an exception object). The problems I'm facing deal more with the fact that the code itself tells me almost nothing about what the code does; it's all run-time decisions that are hidden by as many layers of abstraction as possible. Checks for things that can't happen are layered in with things that can and must be checked, and the run-time state has become a total maze of objects, global flags, internal stacks, and code hidden in code references and closures.

In particular "bad" code -in my acceptation- indeed often features string evals, always in situations in which it is not needed by any means. And I can understand your concerns with tied variables, although variables do not tie themselves on their own. But I still can't understand what scares you in closures. Care to give an example?

Well, closures are coderefs, and that means guessing *which* code a variable currently refers to. Add to that the burden of figuring out the scoping of the closure, and you've got much more cognitive burden than tracing a simple function call. Unless there's no reasonable way around it, I much prefer the simple and obvious solution.

And of course, all of my complaints are interrelated. It's not just closures in and of themselves; it's not just the ties; it's not even just the evals (though they really suck!). It's the fact that I'm dealing with coderefs (and occasionally closures) that are generated at run time by evals, with class names being returned by objects that are determined by hash lookups on tied variables that are eventually tied to something or other using hash lookups and evals. A very simplified version of the main loop runs something like this:
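The poster's code sample does not survive here. The following is only a hypothetical sketch, with all names invented, of the kind of scaffolding being described: a hash lookup picks a handler, the handler returns a string of code, and the string is evaled:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Invented dispatch table: each handler returns *source code* as a string.
my %handlers = (
    start => sub { my $state = shift; return "q{started}" },
    stop  => sub { my $state = shift; return "q{stopped}" },
);

my %state = ( mode => 'start' );

# Which code runs? A hash lookup decides. What does it do? Only the
# evaled string knows. Neither answer is visible in the call site.
my $handler = $handlers{ $state{mode} };
my $result  = eval $handler->( \%state );
print "$result\n";    # prints: started
```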
As you can tell, the main section of the code does nothing other than provide some run-time scaffolding to obscure what's going on. The closures and coderefs, the object syntax, the ties, the evals: they all combine to make something hard to understand practically impossible to understand. Any one of them by itself would be bad enough (especially eval), but together they make it all just horrible. I'm gaining very little personal benefit from most of Perl's advanced or interesting features, because I need to write Perl that's accessible to a beginner perspective wherever practical to do so. So far, it's been practical to do so, with exactly one exception in two years.[1] On the other hand, every time there's a new feature added to perl, there's one more feature I have to remember to watch out for, just in case some idiot has (ab)used it.
--

[1] I once wrote a pair of redefine() and restore() functions that override and restore the definition of a function at run time. I use them to simplify testing, by unit-testing parent functions separately from their child functions (i.e. if the children return a given value, does the parent function behave correctly?). I carefully separated the potentially confusing code out into its own module, documented the functions and their intended purpose, and documented the codebase as well.
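The footnote doesn't show the redefine()/restore() pair, so this is a hypothetical sketch of how such functions are commonly built in Perl: save the original coderef, assign a replacement through the symbol table, and put the original back later. All names are invented:

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Redefine;

my %saved;    # fully-qualified name => original coderef

# Swap in a replacement sub, remembering the original.
sub redefine {
    my ( $name, $new ) = @_;
    no strict 'refs';
    no warnings 'redefine';
    $saved{$name} = \&{$name};
    *{$name} = $new;
}

# Put the original definition back.
sub restore {
    my ($name) = @_;
    no strict 'refs';
    no warnings 'redefine';
    *{$name} = delete $saved{$name};
}

package main;

sub child { return "real" }

Redefine::redefine( 'main::child', sub { return "stubbed" } );
print child(), "\n";    # prints: stubbed
Redefine::restore('main::child');
print child(), "\n";    # prints: real
```

Keeping this in one well-documented module, as the footnote describes, confines the symbol-table trickery to a single place a maintainer can learn once.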
by blazar (Canon) on Feb 01, 2006 at 08:46 UTC
by Anonymous Monk on Feb 01, 2006 at 16:57 UTC