in reply to Overload abuse or is it just me?

Operator overloading is a notion originating from mathematics, where there are literally *infinitely many* things to write down, and only a few recognizable symbols with which to write them.

The idea is that if you re-define symbols at the beginning of your paper, and use them consistently, you can convey complicated ideas concisely for the benefit of people smart enough to be professional mathematicians. On the other hand, what you just wrote will be total gibberish to anyone else.

Notation can often get complex: one typically defines symbols from the Greek and Roman character sets, but occasionally the work gets so complicated that random doodles and squiggles get used, or symbols borrowed from other languages (which can be a problem for people unfamiliar with the symbol in use, as in "No, that's a squiggle with a *DOT* over the duck, not a squiggle with a *DASH* over the duck! NOW it all makes SENSE!").

Fortunately, most computer languages use the convention that functions are defined at the *word* level, rather than the *character* level, allowing one to define functions with more than one letter. This is less concise, but often a lot more readable (which makes sense, because we have a lot more programmers than we have professional mathematicians).
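To make that concrete, here's a little made-up C++ sketch (a hypothetical Vec3 type, not taken from any real library): the overloaded '*' could just as plausibly mean dot product, cross product, or element-wise multiplication, and the reader has to go dig up the definition to find out. The word-level names cost a few extra keystrokes, but they document themselves.

#include <iostream>

// Hypothetical 3-component vector, purely for illustration.
struct Vec3 {
    double x, y, z;
};

// Overloaded operator: is '*' the dot product, the cross product,
// or element-wise multiplication? The symbol alone doesn't say.
double operator*(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;   // dot product, as it happens
}

// Word-level names are less concise, but leave no doubt.
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

int main() {
    Vec3 a{1, 2, 3}, b{4, 5, 6};
    std::cout << (a * b) << "\n";    // reader has to go look up operator*
    std::cout << dot(a, b) << "\n";  // reader already knows what this does
    return 0;
}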

In short, operator overloading is an old-school mathematical hack: a way to save symbols, to keep the reader working with symbols they already know, to make sure the notation could be typeset from a Roman character set, or even occasionally to try to make notations simpler (which rarely worked, then or now). For programming, we already have notations that work well enough; redefining them is about as silly as redefining PI to be the square root of two: possible, but just a bad idea all around. Unless there's a very, very strong reason to overload things, just don't do it.

Just my $0.02,
--
Ytrew