Re: Productivity and Perl

by educated_foo (Vicar)
on Jun 02, 2002 at 00:48 UTC ( [id://170961] )


in reply to Productivity and Perl

I read this article, and like anything written by any language's cheerleader, found it annoyingly unfair. Regarding two points that you summarize:
Most languages won't allow you to return a function. Further, if the language doesn't allow lexical scope, then returning a function wouldn't do you any good.
This is completely unfair -- objects are ugly in Lisp, so you encapsulate your data with closures. Objects are easy in Java, so you bind data to functions with anonymous classes. Yes it's more verbose, but so's everything in Java. The point is that if you can't use closures, it's only going to frustrate you to insist on using them to solve all your problems.
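For what it's worth, Perl happens to have both styles available. A minimal sketch of the closure approach, using a hypothetical make_counter to encapsulate private state (the name and numbers are mine, not the article's):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A closure encapsulating private state -- no object or class required.
sub make_counter {
    my $count = shift || 0;    # lexical, visible only to the sub below
    return sub { return $count++ };
}

my $c = make_counter(10);
print $c->(), "\n";    # 10
print $c->(), "\n";    # 11
```

The equivalent Java anonymous class works too, as noted above; it's just wordier.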

I think the strong typing is mostly a red herring, too. Java's lack of generics (a la C++ templates) does pose a problem, but the static typing seems just fine. Either you want to add related things, in which case they should probably implement the same interface (e.g. "Accumulable"), or you're trying to add unrelated things, in which case you're doomed anyways.

Paul Graham points out that if another language requires 3 times as much code, coding a similar feature will take three times as long.
I wish I were smart enough for the rate of my coding to be limited only by how fast I could type, but sadly that doesn't seem to be the case. Code that is denser and more intricate just takes more time (per line, not per unit of functionality) to produce. Take these two examples:
(define (foo x)
  (call/cc (lambda (cc)
    (/ 7 (if (= x 0) (cc 'bad-value) x)))))
versus
int divideSevenBy(int x) {
    if (x == 0) {
        throw new DivideByZeroException("bad-value");
    }
    return 7 / x;
}
While the first is three lines and the second is six, if anything, the second took less time to write, not more. Having to type more characters does take more time, but even in this small example it's not the sole factor, and in a larger project, actual typing time is certainly the least of my worries (it can easily be dwarfed by "time spent figuring out what went wrong in nested macro expansions" ;)

Don't get me wrong -- the article has plenty of good things to say. But arguments like "my language is best because I can't make yours do things my language's way" deserve a quick trip to /dev/null.

/s

Productive, Prolog, and Perl
by Ovid (Cardinal) on Jun 03, 2002 at 03:04 UTC

    educated_foo wrote: arguments like "my language is best because I can't make yours do things my language's way" deserve a quick trip to /dev/null.

    Yes and no. Any time I see a superlative like "any", "none", "everybody", etc., I tend to be suspicious. However, this doesn't mean the argument is completely invalid, just suspect. I think a point that Paul Graham would agree with is that a given tool is likely a superior choice for a problem if it solves that problem more succinctly than the available alternatives. Let's consider a very simplistic example.

    Imagine that I want to know what a given person might steal. I might assume that they will steal stuff given the following conditions:

    • That person is a thief.
    • The stuff is valuable.
    • The stuff is owned by someone (how do you steal something if it's not owned by anyone?).
    • And the person doesn't know the person who owns the stuff they might steal.

    If I were programming in Prolog, I might have the following program:

    steals(PERP, STUFF) :-
        thief(PERP),
        valuable(STUFF),
        owns(VICTIM, STUFF),
        not(knows(PERP, VICTIM)).

    thief(badguy).
    valuable(gold).
    valuable(rubies).
    owns(merlyn, gold).
    owns(ovid, rubies).
    knows(badguy, ovid).

    It's fairly easy to read, once you know Prolog. :- is read as "if" and a comma is read as "and".

    I can then ask what a given person might steal:

    ?- steals(badguy,X).
    X = gold
    Yes

    ?- steals(merlyn,X).
    No

    So, we can see from this example that the badguy might steal gold and that merlyn will steal nothing (given the information available). Note that at no point did we state that the badguy would actually steal gold. The program was merely able to infer this from the available information. Now, try to program that in Perl, Java, C, etc. You can do it, but it's not going to be nearly as easy or efficient as programming in Prolog.
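    To make the comparison concrete, here is a rough Perl sketch of the same inference, hand-rolled with hashes rather than a real inference engine (the data layout is mine; a module such as AI::Prolog would be closer to the real thing):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Facts, flattened into hashes by hand -- Prolog states these directly.
my %thief    = ( badguy => 1 );
my %valuable = ( gold => 1, rubies => 1 );
my %owner    = ( gold => 'merlyn', rubies => 'ovid' );
my %knows    = ( 'badguy,ovid' => 1 );

# steals(PERP, STUFF) :- thief(PERP), valuable(STUFF),
#                        owns(VICTIM, STUFF), not(knows(PERP, VICTIM)).
sub steals {
    my ($perp) = @_;
    return grep {
               $thief{$perp}
            && $valuable{$_}
            && !$knows{"$perp,$owner{$_}"}
    } keys %owner;
}

print "badguy: @{[ steals('badguy') ]}\n";    # badguy: gold
print "merlyn: @{[ steals('merlyn') ]}\n";    # merlyn steals nothing
```

    Even this toy version has to decide how to index the facts and in what order to test them; Prolog's resolution engine handles all of that, plus backtracking, for free.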

    From this, I think it is safe to assume that an appropriate lesson should be "my programming language is a good choice for a given problem because I can use the tools it provides to solve the problem faster and easier than most other choices". Thus, we can take your quote and go further and say "my language is a superior choice for a particular type of problem because I can't make yours do things my language's way". Then it comes down to problem spaces and the tools that are appropriate for them. Javascript is often the best choice for client-side Web form validation because it's so widely supported. Java is often the best choice for Web applets for the same reason. Want to write a device driver? Put down Perl and pick up C.

    I think you would agree with that conclusion as you wrote "objects are ugly in Lisp, so you encapsulate your data with closures. Objects are easy in Java, so you bind data to functions with anonymous classes." While I could be misreading you, I took that to mean that different languages have different ways of naturally arriving at solutions. This implies to me that if a given language's approaches are more suitable for a given problem, then that language is more suitable for said problem. Rebuttals welcome :)

    The danger, of course, lies in believing that "foo" is an appropriate solution for every problem. If we're unwilling to take the time to learn what else is out there, we naturally limit ourselves in how we can conceive of solutions to problems. However, I doubt that Paul Graham believes that Lisp is better for everything. Of course, just as we tend to write for a Perl audience, he writes for a Lisp audience and this makes it easy to misread what is said.

    Cheers,
    Ovid

    Join the Perlmonks Setiathome Group or just click on the link and check out our stats.

Re^2: Productivity and Perl
by Aristotle (Chancellor) on Jun 02, 2002 at 02:46 UTC
    The funny thing is that Ovid picked this up in a similarly cheerleading fashion because it portrays Perl in a favourable way. :-) And what I'll take home from here, I think, is that
    my language is best because I can't make yours do things my language's way
    type arguments tend to be air-boxing against Perl, because Larry generally incorporates good bits and pieces from just about every other language and programming paradigm you can think of.

    Makeshifts last the longest.

      One thing I think it would be fair to note is the concept of "problem space". There are many areas for which Perl would be a stupid choice. There are many areas for which Perl would be a good choice, yet other languages would be an even better choice. I suspect that most of what Perl programs typically do right now might be better served by the cleaner syntax of Python, for example.

      Much of the strength in Perl lies in learning the Byzantine modules, scoping issues, context rules, etc. If you're not willing to make that commitment to the language, other choices are superior, period. However, since this is a Perl forum, it doesn't serve me well to bash Perl, which I still love, warts and all.

      Cheers,
      Ovid

      Join the Perlmonks Setiathome Group or just click on the link and check out our stats.

Re: Re: Productivity and Perl
by Anonymous Monk on Jun 02, 2002 at 14:49 UTC
    Actually the code-length point isn't Paul Graham's. It is Fred Brooks'; Paul merely happened to agree with it.

    The point appears in The Mythical Man-Month and was based on audits that showed that programmers in different languages were getting the same amount of code written per day in the end, even though that code did very different amounts of stuff.

    The cause appeared to be that higher-order languages offer higher-order concepts, and as a result the programmers mentally "chunk" at a higher level at first, and then find it easier to debug later because there is less code to understand. For a trivial example, how much do Perl's memory management and hashes reduce the amount of code needed when going from C to Perl, and speed up the programmer?
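    A minimal illustration of that chunking (my own example, not Brooks'): counting word frequencies, which in C would mean writing or importing a hash table plus its allocation and cleanup, is a few lines of Perl:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The hash, its growth, and its memory are all managed by the language.
my %count;
my $text = "the quick brown fox jumps over the lazy dog the end";
$count{$_}++ for split ' ', $text;

# Print by descending count, breaking ties alphabetically.
for my $word (sort { $count{$b} <=> $count{$a} || $a cmp $b } keys %count) {
    printf "%-6s %d\n", $word, $count{$word};
}
```

    The programmer thinks "count the words", not "grow the table and free the buckets", and debugs a dozen lines instead of a few hundred.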

    While it is easy to find plenty of counter-examples (such as your Scheme continuation versus a loop), as a rule of thumb for real projects it really does seem to work.

Re: Re: Productivity and Perl
by ariels (Curate) on Jun 02, 2002 at 09:33 UTC

    Actually, your two examples are misleading. Common Lisp (which is what Graham usually writes about) does have catch and throw. On the other hand, it doesn't have call-with-current-continuation; that's Scheme.

    Of course, even with Scheme, you wouldn't be using call/cc in your code directly; you'd be using the famed macros to have something nicer.

      You caught me -- my "lisp" experience, such as it is, consists of Scheme and Emacs Lisp. My example may have been inaccurate, but I wouldn't say it is misleading. The point is just that Java expresses the same things in more words (or lines). While writing a function in either language, you spend some time deciding to use a particular approach, then some more typing it in. I'd say the "thinking about it" part takes about the same amount of time for similar things. Then you go to type them in, and the Java one has more lines, but it's probably faster per-line to write. Last time I programmed in Java, I even had a macro for
      } catch (IOException e) {
          // Loathe Java
          System.out.println("IO error: " + e);
      }
      which gave me 4 lines (ok, 3 lines) almost for free. Certainly, these 4 lines are much more quickly generated than 4 lines of quasiquoted macro glop in Lisp.

      /s

        You caught me -- my "lisp" experience such as it is consists of Scheme and Emacs Lisp.

        This may also have influenced your perception of Lisp's object system; I don't think that CLOS is awkward at all, and it certainly needn't involve mucking about with closures in any way (although an implementor could do it that way if desired, I suppose).

        Also, though Common Lisp does have catch and throw, they're not typically the way (at least in their raw form) one would handle errors. Better to use the condition system, which is provided for that very purpose and which is quite powerful. Kent Pitman (one of the authors of the part of the spec about conditions) has some articles which I'm too lazy to find pointers to right now. The chapter in the spec on conditions is perfectly readable to get a start in the right direction on using it. (In short, to get Java-like functionality, one can define errors using define-condition, "throw" them using error, and "catch" them using handler-case. This suffices to do a lot. Then one can learn to use the rest of the system to do all kinds of other miraculous and fascinating things :-) - Pitman's papers will probably give a better explanation than I can.)
