in reply to Perl as one's first programming language

I think that static vs. dynamic typing should replace the memory management question.

Yes, a good programmer (and one working at a low level) will need memory management, but nearly all modern languages (perl, all the other "scripting" languages, java, and all functional programming languages) hide memory management from the programmer. With good reason, IMHO.

I think the more relevant question is whether the programmer should first learn to deal with static typing (as in Eiffel, Java or Haskell, or less rigorously in C/C++), or with dynamic typing.

I started with BASIC (which has no user-defined types at all, IIRC), and then learned C and Eiffel. With Eiffel I learned about the benefits and drawbacks of static typing. I had no problem learning other statically typed languages (like java) or dynamically typed languages (mostly perl, but also scheme and a few others).

Since I can't repeat the experiment the other way round, I have to ask: Who has learned a dynamically typed language first? How hard was it for you to learn statically typed languages?

I think you should learn both, but which one first?

(As a side note, I could imagine that learning perl first "spoils" you as a programmer, i.e. you never want to miss its dwim'miness.)

  • Comment on Re: Perl as one's first programming language

Replies are listed 'Best First'.
Re^2: Perl as one's first programming language
by amarquis (Curate) on Apr 07, 2008 at 16:45 UTC

    (As a side note, I could imagine that learning perl first "spoils" you as a programmer, i.e. you never want to miss its dwim'miness.)

    It's funny, many people in the threads linked above said that back then, too. Reminds me of the great, recent thread I think Perl ruined me as a programmer.

    In 2002, I thought it silly. Perl is too good a tool for the job, so start with something worse? What nonsense.

    Now, though, I'm not so sure. When I learned photography, I learned with an ancient 35mm Pentax, developing my own film and printing my own prints. It was a gigantic pain! My workflow was hours longer than it is today with digital. Sometimes I'd spend a day on some rolls of film and get nothing out of it.

    But, comparing myself with other amateurs who have been digital exclusive, I think I picked up many virtues. I no longer have to buy film, but I'm still careful about the composition of every single shot. I grew up without Photoshop, so I don't take shots I intend to fix later, I take the right shot now. Etc. etc.

    By comparison, did my own programming become more robust because I started out with a "pain in the rear" language? Today, I think so. The real question, though, is whether it was worth the aggravation, and I don't have an answer for that. I'm better today for my experience, but what if it had turned me off of programming entirely?


    On the subject of types, I do not recall which Perl-related slideshow I was watching, but one of the slides said something like:

    I want static typing!
    No, you don't. Remember the hell that was atoi()?

    I'm sort of undecided on the issue at the moment, but the annoyance that slide made me remember tips the scale in the direction of dynamically typed languages.

      I'm reminded of those who said that photography itself would destroy or devalue painting, or of those who said that the phonograph would destroy live music.

      There is only so much you can do with a digital camera, just as there's only so much you can do with a 35mm SLR vs. a treated copper plate in a 200mm box camera.

      There is also only so much one can wring out of Lisp, Perl, Ruby, Python, Smalltalk, ML or the other languages people say have spoiled them. Yet these tools are very useful for those looking to work quickly inside those limitations. Those who know assembly for an architecture (or who can customize the microcode for it) will always have certain advantages bought by taking on certain disadvantages.

        To use your photography analogy, I would suggest that to be a good photographer you need to have an understanding of certain basic skills which are common to painting, such as how to frame your subject for maximum effect. On top of that, you need certain domain-specific skills (a knowledge of how aperture and focal length interact, for example) and some trivial mechanical skills (which dial do I twiddle to set the shutter speed).

        To be a good painter you need the skills common to photography, plus domain-specific skills like how to mix paints and apply them accurately to the canvas, and trivial mechanical skills like how to wash brushes.

        Learning to program by learning perl will teach you the trivial mechanical skills, and lots of perl-specific skills. However, if you want to learn the skills that are common to all programming, I think it's a bad choice. That's because perl does so much for you that those common skills only come into play when you're doing really obscure or advanced stuff, which makes them hard to teach. Better to learn an assembler and C, where it's much easier to learn that stuff.

        I don't mean to say that everyone should become an expert in assembler and C before tackling perl, just spend a week or two with them, so you get a basic understanding of what a variable really is, simple data structures like a linked list, how memory works, how loops and subroutines work, and so on. With those skills, the very hard problems that we claim perl makes merely difficult will be interesting instead of frustrating. Plus you'll *really* get to appreciate all that perl gives you "for free".

      I'm not sure which slideshow the above was in, but MJD has an excellent slideshow on type systems. He argues that the problem isn't really type systems in general, but that C has perpetuated a broken type system, and that languages exist that do it much, much better.


      "There is no shame in being self-taught, only in not trying to learn in the first place." -- Atrus, Myst: The Book of D'ni.

Re^2: Perl as one's first programming language
by mr_mischief (Monsignor) on Apr 07, 2008 at 17:14 UTC
    I'm not sure whether static vs. dynamic makes as much difference as with vs. without side effects.
      There are many dynamically typed languages out there, and lots of projects are actually implemented in them.

      When I query my memory for side-effect-free languages, only Haskell comes up. I'm not too deep in the functional programming community, so there could actually be a lot more. But nevertheless I think the existing code base and current usage are heavily dominated by languages that allow side effects.

      So the question of with or without side effects is currently more of an academic one. It might be the next big question tomorrow, but IMHO it isn't yet today.

        I agree with your reasoning and your conclusion. However, if we're talking about the first programming language for someone starting right now, the future is a good time to consider. Should we be preparing people to join the workforce with what's hot now or prepare them for the future?

        The great thing about side-effect-free languages is that they are much easier to parallelize. Since that's a key issue in the near future of programming even on PCs, it might be worth considering fitting the languages to the issue.

        Threads, multiple processes, and other explicitly parallel methods will obviously continue to have a place. Learning to program without side effects, though, teaches one to limit side effects in assignment-based languages. Objects actually can have a role to play in concurrency, since with proper planning and following certain caveats one can have side effects within the object so long as they don't propagate between objects.

        Since programs in most languages with little or no side effects can be made concurrent implicitly by the compiler and libraries, the new programmer doesn't even have to realize that it's happening at first, let alone why or how.

        The question of assignment-based languages vs. side-effect-free languages as a first programming language goes beyond mathematical purity and regularity of (often lack of) syntax. A big part is how much you want to tilt future programmers towards concurrency-ready practices from the start.