The presumption here is that languages which restrict the available idioms and modes of use to some subset of those possible can, and will, force the programmer to produce more correct code with fewer bugs.

History does not support this presumption.

Whilst compile-time type-checking, declarative method and attribute visibility, compile-time bounds checking and a host of other compiler-enforced restrictions will often allow many trivial source-code and logic errors to be detected early, these are often the same errors that would be detected early anyway, through even rudimentary unit or system testing.

Some other diminutions of programmer responsibility can pragmatically reduce the scope for programmer error, especially in those areas that require the programmer to remember to perform certain, often rote, mechanical actions across functional boundaries. E.g. memory management. Cleanup of memory is often called for at points in the code well away from the allocation of that memory. This is true for procedural code and even more so for OO code. This type of responsibility often requires quite a long-term overview of the flow of the code and the life of the allocated objects, and it is often well divorced from their instantiation and the main points of use. These kinds of responsibilities benefit from the computer's ability to perform rote actions reliably and rigorously. Hence, automated memory management (GC) is usually a big win.
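A minimal Perl sketch of that point (the Resource class and its name are invented for illustration): the allocated object is cleaned up automatically when the last reference to it disappears, however far that is from the point of allocation.

use strict;
use warnings;

package Resource;
sub new     { my ($class, $name) = @_; return bless { name => $name }, $class }
sub DESTROY { my ($self) = @_; print "freed $self->{name}\n" }   # reports when cleanup happens

package main;

my $held;

sub allocate {
    my $r = Resource->new('buffer');
    $held = $r;          # a reference escapes the allocating scope
    return;
}

allocate();
print "still alive after allocate()\n";
$held = undef;           # dropping the last reference, far from the allocation
print "done\n";          # "freed buffer" was printed before this line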

However, many other areas of a programmer's skills are less amenable to automation, and in many cases those compiler-enforced restrictions can actually inhibit the programmer from pursuing and achieving elegant solutions.

For example: I've been trying to get to grips with FP languages. These mostly have a strong element of type safety. This is usually seen as a good thing, and when writing code that fits into "known algorithms" I would agree that it is, but there is a downside: it also tends to enforce a bottom-up approach to coding!

It is very difficult to write top-down code where, in the early stages, many elements of the solution one is seeking are unknown. With a relaxed language like Perl, it is very easy to "mock up" the lower levels of a program, as you don't have to specify the types or number of parameters of functions. Indeed, you don't even have to write the functions (methods), so long as they never actually get called. This can be a great advantage, as it allows you to take the very pragmatic approach of "at this point in the code I need to determine if xyz() is true", and simply code that. If the logic of the code means that xyz() never actually gets called, then you do not even have to code xyz().

This allows you to start with the top-level premise of the code and, what basically amounts to, pseudo-code your solution in a top-down manner. Often, you will never get around to coding xyz(), because having mocked up the top level of the code and "proof-read" it (with perl -c prog), you realise that the logic of xyz() is better incorporated into pqr(); or that there is no way you will have access to the parameters required to determine xyz() at the point in the code where you coded it; or that there is actually no need to determine xyz(), because its results fall naturally out of the logic that precedes the point where you need it.
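A minimal illustration of that mock-up approach (the $mode flag and xyz() are made up for the example): the call to the unwritten xyz() passes perl -c, and the script even runs, because the branch that needs it is never reached.

use strict;
use warnings;

my $mode = 'simple';

if ( $mode eq 'complicated' ) {
    die "unsupported input\n" if xyz($mode);   # xyz() is not written yet
}

print "handled the simple case without ever coding xyz()\n";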

I arrived at a maxim about 15 years ago that has stood me in very good stead ever since:

Define top-down; Refine bottom-up.

Start with what you know: the available inputs and the required outputs, and define--as a function--what you need to transform the former into the latter. Within that, repeat the same strategy: decide what you need to produce the return value, and what forms your inputs at that level. And so on down.
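A hedged sketch of that maxim in Perl (the report-building task and every function name here are invented): the top level is defined first, in terms of lower-level functions that are mocked up with hard-coded return values, to be refined later, bottom-up.

use strict;
use warnings;

# Top level: known input (a filename), required output (a report string).
sub build_report {
    my ($filename) = @_;
    my @records = parse_records($filename);   # refine later
    my %summary = summarise(@records);        # refine later
    return format_summary(%summary);          # refine later
}

# Mock-ups returning hard-coded values of exactly the shape the caller needs.
sub parse_records  { return ( { id => 1, value => 42 } ) }
sub summarise      { return ( count => 1, total => 42 ) }
sub format_summary { my %s = @_; return "$s{count} record(s), total $s{total}\n" }

print build_report('data.txt');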

The beauty of Perl is that it makes doing this very easy. The absence of parameter and return typing means that you can easily mock up a function that takes nothing and returns a (hardcoded) something that is exactly what the calling code needs.

sub doSomething { return "Whatever's needed"; }

This truly makes for rapid prototyping. Defining functions is so cheap that it encourages you to do so even if you are 95% sure that you will later discard them. And once you know you will keep a function, you can slowly refine the constraints that you impose upon its parameters, as those constraints become clear. And, if the application warrants it (i.e. it's more than a one-time script, or the demonstration of the prototype incites further investment of time and/or money), then you can refine the constraints, bottom-up, to achieve the level of reliability commensurate with the application.
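For instance (a purely hypothetical refinement of the stub above), a later pass might tighten the constraints on the parameters the callers turned out to pass:

use strict;
use warnings;

# First pass:   sub doSomething { return "Whatever's needed"; }
# Later refinement, once the function is known to stay:
sub doSomething {
    my ($count, $label) = @_;
    die "count must be a non-negative integer\n"
        unless defined $count && $count =~ /^\d+$/;
    $label = 'item' unless defined $label;
    return "$count $label(s) needed";
}

print doSomething(3, 'widget'), "\n";   # prints "3 widget(s) needed"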

Of course, how much energy gets expended on that refinement depends upon two things: the thoroughness of the programmer, and the perceived value of the time expended.

When it comes to "trusting the programmer", it is just as possible to do only the minimum required to satisfy the compiler in an 'enforcing' language as it is in a relaxed one like Perl, and the assumption that code which meets the minimal requirements of the enforcing language is "good enough" is a very bad one.

Even with the best will in the world, a programmer can just as easily believe s/he has understood the requirements of the application, when s/he has actually misunderstood them, using an enforcing language as using a relaxed one. Indeed, I would say that the enforcing language has the ability to delude both the programmer and the customer into believing that if it compiles, it must be correct. This is, and will always be, a bad assumption.

For the sake and sanity of both parties, the only thing that can be trusted is an accurate specification and correct tests of compliance with that specification. Any other criterion for judging the correctness of code will always be subject to debate, misinterpretation and misuse.
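As a hedged sketch of what such tests can look like in Perl, using the core Test::More module (the spec clause and doSomething() behaviour here are invented): each test encodes a requirement from the specification, not a property of the compiler.

use strict;
use warnings;
use Test::More tests => 2;

# Hypothetical spec clause: doSomething(N, LABEL) reports N of LABEL,
# and must reject a negative count.
sub doSomething {
    my ($count, $label) = @_;
    die "count must be a non-negative integer\n"
        unless defined $count && $count =~ /^\d+$/;
    return "$count $label(s) needed";
}

is( doSomething(3, 'widget'), '3 widget(s) needed',
    'spec: reports the requested count and label' );

ok( !eval { doSomething(-1, 'widget'); 1 },
    'spec: rejects a negative count' );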


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
The "good enough" maybe good enough for the now, and perfection maybe unobtainable, but that should not preclude us from striving for perfection, when time, circumstance or desire allow.

In reply to Re: TMTOWTDI... and most of them are wrong by BrowserUk
in thread TMTOWTDI... and most of them are wrong by tlm
