PerlMonks  

Re^2: Some Insights from a Traveler Between Languages

by skyknight (Hermit)
on Apr 23, 2005 at 20:45 UTC [id://450806]


in reply to Re: Some Insights from a Traveler Between Languages
in thread Some Insights from a Traveler Between Languages

I appreciate its terseness as well, but it's a double-edged sword. Also, it's not clear to me why you are citing homographs as a defense. To me, homographs seem like a misfeature of natural language, resulting from the fact that natural languages are the product of clumsy evolution and combination over the course of eons. Clearly it would be better if languages didn't have homographs, since they needlessly make life more difficult. Yes, we can handle them, but they don't buy us anything, so why should we use that kind of thing as an argument when deliberately designing artificial languages?

Replies are listed 'Best First'.
Re^3: Some Insights from a Traveler Between Languages
by TimToady (Parson) on Apr 23, 2005 at 21:21 UTC
    Well, it's easy to come up with egregious examples and show how English could have been better designed, but you overgeneralize. The fact is that we rely on multimethod dispatch all the time in any natural language, and it's just a minor lexical miracle that you don't even notice that you're using homophones with different meanings:
    The chicken is ready to eat.
    The children are ready to eat.
    In short, you're relying heavily on MMD yourself when you use overloaded words like:
    appreciate well clear product clumsy combination course as makes life handle buy should use kind argument
    MMD is useful because it lets you express metaphorical correspondences. To trot out the Inkling's favorite example, a "piercing sweetness" may be neither piercing nor sweet, literally speaking, but your MMD dispatcher is even smart enough to autogenerate missing methods and dispatch to them.
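
    (In Perl 5 terms you would have to spell that dispatch out by hand; Class::Multimethods on CPAN does the real thing, but a rough hand-rolled sketch of the idea, with invented Chicken and Child classes, might look something like this:)

        use strict;
        use warnings;

        # Dispatch on the class of the thing that is "ready to eat".
        # (Chicken and Child are invented classes, purely for the example.)
        my %ready_to_eat = (
            Chicken => sub { "the chicken is about to be eaten" },
            Child   => sub { "the children are about to do the eating" },
        );

        sub ready_to_eat {
            my ($thing) = @_;
            my $handler = $ready_to_eat{ ref $thing }
                or die "no idea what 'ready to eat' means for a ", ref($thing), "\n";
            return $handler->();
        }

        print ready_to_eat( bless({}, 'Chicken') ), "\n";
        print ready_to_eat( bless({}, 'Child') ),   "\n";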
      Yes, we humans do manage to deal with the vagaries of natural language, but I don't really see this fact as being a good defense of similar issues cropping up in artificial languages. Why do we have programming languages for developing software at all? It's because specifying the solutions to engineering problems is damned near impossible in English, at least when it comes to the nitty-gritty details. We need programming languages for the precision with which they allow us to specify the operation of systems. Anything that goes against this end ought to be considered a misfeature.
        skyknight wrote:
        Yes, we humans do manage to deal with the vagaries of natural language, but I don't really see this fact as being a good defense of similar issues cropping up in artificial languages. Why do we have programming languages for developing software at all? It's because specifying the solutions to engineering problems is damned near impossible in English, at least when it comes to the nitty-gritty details. We need programming languages for the precision with which they allow us to specify the operation of systems. Anything that goes against this end ought to be considered a misfeature.
        I would ask a different question: why do we have multiple programming languages at all? Rather than one, clear, precise way of specifying an algorithm, we have hundreds of them. Many people seem to think that this makes some sort of sense, that different languages do a better job on different problems and/or with different people.

        Would there be any point in having multiple programming languages if all of them were essentially the same with only minor syntactical variations between them?

        The opinion that computer languages should be based on some concept of mathematical elegance is pretty common... perl and perl alone is pursuing a different path, focusing on "expressiveness" in analogy with natural languages.

        My personal opinion is that there isn't really any proof whatsoever that one approach is better than the other: demonstrating that "mathematical elegance" is best would require some very difficult social science experiments, and the guys who are proponents of "mathematical elegance", not coincidentally, just want to do mathematical proofs. So instead people fall back on introspection, and it seems that some people like one way, and some like another.

        Some issues to consider (though they might be side issues):

        • The standard of writing (books and documentation) in the perl world is very high... perhaps programmers who like thinking "linguistically" also make eloquent writers?
        • The pearl at the center of perl culture is the spirit of collaboration: CPAN, perlmonks, comp.lang.perl.*, and so on. Is there a reason that this particular language has inspired this?
        • Occasionally the "mathematical" crowd take a stab at designing a language to replace "natural language". The result never seems to catch on. Look up Loglan/lojban. And maybe: "sapir-whorf", "general semantics", and "babel-17".
        I don't really see this fact as being a good defense of similar issues cropping up in artificial languages.

        The reason it is a "good defense" is that it shows that context can successfully resolve a lot of ambiguity that would otherwise have to be resolved in different ways. The advantage in artificial languages is the same as it is in natural languages. It makes a lot of things easier to say.
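
        Perl's own scalar/list context is the everyday example: the very same expression means different things depending on where it sits, and the surrounding code is what resolves it. A quick sketch:

            use strict;
            use warnings;

            my @lines = ('alpha', 'beta', 'gamma');

            my @copy  = @lines;   # list context: the elements themselves
            my $count = @lines;   # scalar context: how many elements there are
            print "first: $copy[0], count: $count\n";

            # localtime is the classic built-in case: a human-readable string
            # in scalar context, a list of time fields in list context.
            my $stamp = localtime();
            my ($sec, $min, $hour) = localtime();
            print "$stamp / $hour:$min:$sec\n";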

        We need programming languages for the precision with which they allow us to specify the operation of systems. Anything that goes against this end ought to be considered a misfeature.

        If being explicit and long-winded is your cup of tea, that's fine. But it certainly doesn't seem to be part of the Perl culture, and I don't see it changing drastically any time soon. If you think that results in a loss of utility and is the source of misfeatures, well, all I can say is I disagree.

Re^3: Some Insights from a Traveler Between Languages
by Joost (Canon) on Apr 23, 2005 at 21:05 UTC
      By the way, is there a difference between a homograph and polymorphism?

      Several. For one thing, a homograph is not necessarily also a homophone (though it can be). Also, the different versions of a homograph may be different parts of speech in some cases; as far as I am aware, polymorphism keeps the polymorphic thing as the same part of speech. Furthermore, the idea behind polymorphism, if it's done correctly, is that there is supposed to be a logical connection or parallel between the different versions of it, a way in which, although slightly different, they are "the same"; this is sometimes botched, but it's *supposed* to be there; homographs have no such qualms.

      Homographs like "wind" and "fly" and "record", wherein the meanings are related, are not the nasty ones, IMO. Situations like "that" (which is used as a relative pronoun, as a demonstrative pronoun or adjective, or as a subordinating adverb, and, worst of all, is frequently elided) are the rough ones. Perl as far as I am aware does not have any such pitfalls as that; even the weirdness surrounding pieces of punctuation (notably, commas and curly braces) has nothing on "that".
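
      (The curly-brace weirdness is a decent small-scale example, though: after map, perl peeks just past the { and guesses whether it is a block or an anonymous hash constructor, and once in a while you have to disambiguate for it. A quick sketch:)

          use strict;
          use warnings;

          my @words = qw(foo bar);

          # perl guesses "block" here and we get a flat key/value list:
          my %seen = map { $_ => 1 } @words;

          # to get one anonymous hash per word instead, force the "hash"
          # reading with +{ } (note the comma: this is the map EXPR, LIST form):
          my @records = map +{ word => $_ }, @words;

          print "seen foo: $seen{foo}; first record: $records[0]{word}\n";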


      "In adjectives, with the addition of inflectional endings, a changeable long vowel (Qamets or Tsere) in an open, propretonic syllable will reduce to Vocal Shewa. This type of change occurs when the open, pretonic syllable of the masculine singular adjective becomes propretonic with the addition of inflectional endings."  — Pratico & Van Pelt, BBHG, p68
      Polymorphism, in my mind, refers to the ability for behavior to vary under the hood. Perhaps the best example of this is virtual methods in C++ (or, in fact, the way all methods are invoked in Java). When you have a statement that invokes such methods, and you're doing it on a pointer of a base class type, you have one chunk of code that results in myriad different operations. A homograph, I think, is a bit different. In the case of a homograph, you actually need to know what is happening, and you have to read the entire sentence to figure that out. In the case of polymorphism, those differences are hidden from you, and in this particular context you are indifferent to them.
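
      A minimal Perl rendering of that "one call site, many behaviors" idea (the Shape/Circle/Square names are invented for the example) might be:

          use strict;
          use warnings;

          package Shape;
          sub new      { my ($class, %args) = @_; return bless \%args, $class }
          sub describe { my $self = shift; return ref($self) . " with area " . $self->area }

          package Circle;
          our @ISA = ('Shape');
          sub area { my $self = shift; return 3.14159 * $self->{radius} ** 2 }

          package Square;
          our @ISA = ('Shape');
          sub area { my $self = shift; return $self->{side} ** 2 }

          package main;
          # one call site, but which area() runs depends on what $shape
          # actually holds at run time
          for my $shape ( Circle->new(radius => 1), Square->new(side => 2) ) {
              print $shape->describe, "\n";
          }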
        I actually had something a bit broader than C++ virtual methods in mind. Sorry about the fuzziness. What I was thinking about is that in OO programming, you can use the same method name in multiple unrelated classes that might or might not be involved in polymorphism.

        Now, in C++ you still need some kind of type specification for variables, but in Perl or Ruby (or even Java, if you don't mind a lot of introspection) you don't, which implies that the actual meaning of the "word" (the method name) is entirely dependent on the context. You don't even need OO to achieve this: importing a method from another namespace, or including C header files, basically boils down to the same thing.
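
        For instance (class names invented for the example), two packages that merely happen to share a method name, with no common base class at all; which fly() gets called is decided entirely by what the variable holds at the moment of the call:

            use strict;
            use warnings;

            package Bird;
            sub new { bless {}, shift }
            sub fly { "flaps its wings" }

            package Aeroplane;
            sub new { bless {}, shift }
            sub fly { "starts its engines" }

            package main;
            # no shared base class, no declared interface: which fly() runs
            # depends solely on the class of whatever $thing holds right now
            for my $thing ( Bird->new, Aeroplane->new ) {
                printf "%s %s\n", ref $thing, $thing->fly;
            }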

        Sure, it can cause confusion, but can we actually do any better? Human language really is much more adept at disambiguating short words that might have multiple meanings than at reading reallyLongDescriptiveOnesThatGetIncrediblyDifficultToTypeRightAllTheTime. :-)
