in reply to Re^2: An exploration of higher-level-language approaches to the "First all-zero row in an NxN matrix" problem (goto Considered Harmful)
in thread An exploration of higher-level-language approaches to the "First all-zero row in an NxN matrix" problem (goto Considered Harmful)
So where do we have an example of common coding practices that involve multiple entry points?
Around the time of Dijkstra's paper, computed gotos were both possible and quite common in several prominent languages -- Fortran, COBOL, BASIC (not forgetting assembler of all flavours). Later came C's switch which, though it may superficially resemble an if/else if/else cascade, is actually a computed goto and far more efficient: a dense switch is typically compiled to a jump table, so the value is never tested against each case in turn.
For many purposes, the computed goto -- in the form of the C switch statement -- is still invaluable. Almost every generated parser consists of a nest of switch statements.
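As a concrete illustration -- the opcodes and handler bodies below are invented purely for the example -- a dense switch like the one in this toy dispatcher is exactly the kind of construct a compiler will usually turn into a single indexed jump rather than a chain of comparisons:

    #include <stdio.h>

    /* Toy bytecode interpreter. A dense switch like this is typically
       compiled to a jump table (a computed goto), so the opcode is not
       compared against each case in turn. */
    enum op { OP_ADD, OP_SUB, OP_MUL, OP_HALT };

    int run(const int *code, int a, int b)
    {
        for (;;) {
            switch (*code++) {          /* one indexed jump, not a cascade of tests */
                case OP_ADD:  a = a + b; break;
                case OP_SUB:  a = a - b; break;
                case OP_MUL:  a = a * b; break;
                case OP_HALT: return a;
                default:      return a; /* unknown opcode: stop */
            }
        }
    }

    int main(void)
    {
        int program[] = { OP_ADD, OP_MUL, OP_HALT };
        printf("%d\n", run(program, 2, 3));   /* prints 15: (2+3)*3 */
        return 0;
    }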
Any block that has more than one entry ... or more than one exit ... will be difficult to come up with a formal proof for.
As a reason (for anything), this is (IMO) a crock. How much commercial code is ever formally proven? Effectively none.
Why?
There have been many attempts at producing computer languages that can be formally proven. Very few people have ever heard of them, and no one uses them. For very good reasons.
Guard statements -- in the form of conditional early returns, nexts & lasts -- may be hard for the formalists to handle and reason about, but for practical programming they can and do greatly simplify the handling of exceptional and boundary conditions.
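Here is a sketch of that style (the function and its rules are made up for the purpose): the guards dispose of the null, empty and "nothing qualifies" cases up front, and a continue (Perl's next) skips the entries that don't count, so the main body never has to nest inside any of them:

    #include <stddef.h>

    /* Illustrative guard-clause style: each boundary case is handled and
       dismissed as soon as it can be detected. */
    double average_positive(const double *vals, size_t n)
    {
        if (vals == NULL || n == 0)      /* guard: nothing to average */
            return 0.0;

        double sum = 0.0;
        size_t count = 0;

        for (size_t i = 0; i < n; ++i) {
            if (vals[i] <= 0.0)          /* guard: skip non-positive entries */
                continue;
            sum += vals[i];
            ++count;
        }

        if (count == 0)                  /* guard: no positive entries at all */
            return 0.0;

        return sum / count;
    }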
Formalists don't like handling exceptions. Often this is because they don't have the formal notations for doing so. And when they do invent such notations, they are so complex and shrouded in mystery -- think Haskell's Maybe monad -- that most people don't understand them.
Heck. Formalists don't even like handling such practical matters as integer size limitations and floating-point inaccuracies. It is often possible to prove an algorithm correct mathematically, only for it to fail horribly and mysteriously when intermediate results exceed the platform's limits for integers or FP values. The classic example is the calculation of the midpoint in many divide & conquer algorithms, where mid = ( lo + hi ) / 2 can overflow even though lo, hi and the true midpoint are all perfectly representable.
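For example (assuming ordinary 32-bit C ints; the values are chosen just to force the issue), lo + hi overflows while the rearranged form lo + ( hi - lo ) / 2 stays in range:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int lo = INT_MAX - 10;
        int hi = INT_MAX;

        /* int bad = (lo + hi) / 2;  -- lo + hi overflows int here, which is
           undefined behaviour in C and typically yields a nonsense midpoint. */

        int mid = lo + (hi - lo) / 2;   /* hi - lo always fits, so no overflow */

        printf("lo  = %d\nhi  = %d\nmid = %d\n", lo, hi, mid);
        return 0;
    }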
Hamstringing the programmer to make life easier for the formal proofs that will never happen is simply bad reasoning.