in reply to Re^4: Precedence design question...'x' & arith
in thread Precedence design question...'x' & arith

Seems to me that you are -- in some large part -- setting up strawmen to knock down. For example, in your first example in the parent node, you say:

'a' x 3    => 'aaa'
'a' x 3*2  => 'aaa' * 2  ==> error
...

and in your last example (from which I won't even quote your global indictment of Perl) you appear to be objecting to the failure of the snippet to DWIW (that's do what I WANT; do what I mean is a different critter.)

' ' x 3*2

Simple parens cure what you see as a problem requiring revision of the language:

C:\>perl -E "my $foo = 'a'; say 'a' x (3*2); say 'hey, goombatz!';"
aaaaaa
hey, goombatz!
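
If there's any doubt about how perl groups the un-parenthesized version, B::Deparse can show you the parse; its -p switch prints explicit parentheses. The output is roughly:

C:\>perl -MO=Deparse,-p -e "print 'a' x 3*2"
print((('a' x 3) * 2));

That is, repetition first, then multiplication -- the same grouping perlop documents for operators sharing the multiplicative precedence level.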

Sorry, I may be convinced eventually, but so far, I don't think you've made your case.


If you didn't program your executable by toggling in binary, it wasn't really programming!

Re^6: Precedence design question...'x' & arith
by perl-diddler (Chaplain) on Jul 06, 2013 at 00:27 UTC
    The initial question I came on here to ask was for someone to show me the usefulness of the current precedence -- how anyone is relying on it or using it in any production or CPAN code.

    I'm not trying to convince anyone of anything -- I can't say "we should do this" unless no one can come up with code where the current precedence is used/useful/required -OR- show how a change in precedence as I have outlined would cause a problem.

    Parens are a sign of either a complex statement OR insufficient strength in the grammar. Did you check out the PDF I pointed to? (The references he quotes in the PDF point back to HERE on perlmonks.) Oddly enough, he quotes perlmonks in discussing good domain-specific language design, but only has examples in Python, Scala, Ruby and Smalltalk. Here is someone who was familiar enough with perl that he could quote erudite discussions on perlmonks, yet didn't use it in any of his examples.

    Qualities of good languages include generalization: reduce concepts by replacing a group of more specific cases with a common case.

    Compression -- provide a concise language that is sufficiently verbose for the domain experts -- specifically, the goal being to "reduce the amount of expressions or to simplify their appearance while the semantics are not changed", et al.

    The point here is to simplify the expressions in a way that loses none of the language's semantics.

    So far no one has offered any examples of how this change would cause problems. It's not like I'm new to perl -- I've been using it since the early 90's, when Perl 3 was just being replaced by Perl 4. It took me forever to change out of the Perl 4 style and move toward Perl 5... I'm not someone who is highly open to change. But neither am I for killing a language for the sake of preserving it as a future archival language.

      Parens are a sign of either a complex statement OR insufficient strength in the grammar.

      Now there's a false dilemma. Unless you enforce a strict left-to-right evaluation order like Smalltalk, you have to choose how to break ties between different operators at different precedence levels. Even when you have operators at the same precedence level, you have to decide which executes first, depending on associativity and arity and the like.

      Without a single hard and fast expression evaluation order enforced by the grammar, you're going to have to make tradeoffs. Parentheses are also a sign that these tradeoffs aren't always right in every case.
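
      For one concrete instance of such a tradeoff: '.' and '+' sit at the same precedence level and associate to the left, so mixing string and numeric operators without parentheses groups by textual order rather than "numbers first". A rough illustration:

      use warnings;
      print "total: " . 2 + 3, "\n";    # ("total: " . 2) + 3 -- warns, prints 3
      print "total: " . (2 + 3), "\n";  # prints "total: 5"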

        You are right that you have to break ties.

        But we are talking about operators for 2 different data types.

        If one writes:

        " " x $indent*$spaces_p_tab . 2+3 ." + " . 4+6 . " == ".3*5
        Wouldn't it make sense to do the number operations first then combine their string representations with the strings?

        If you do the string operations first, or give them equal precedence with the number operations, you end up with an invalid result. Why not choose the evaluation order that naturally gives you a valid result rather than a rule that makes it wrong?
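
        Under the current rules, getting the intended result out of that expression means parenthesizing every numeric sub-expression whose operator doesn't already out-rank the string operator next to it. A sketch (the variable values are made up for illustration):

        my $indent       = 1;
        my $spaces_p_tab = 4;
        print ' ' x ($indent * $spaces_p_tab)   # 'x' and '*' share a level: parens needed
            . (2 + 3) . " + " . (4 + 6)         # '+' and '.' share a level: parens needed
            . " == " . 3 * 5, "\n";             # '*' outranks '.': no parens needed
        # prints "    5 + 10 == 15"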

        I would say that humans tend to group like with like, and that a well-designed parser would follow that premise rather than one that makes humans wrong.