"Practices and Principles" to death

by ack (Deacon)
on Feb 29, 2008 at 06:50 UTC

I was just reading BrowserUk's Meditation node "Testing methodology, best practices and a pig in a hut" (Meditation Node #670478) and the various replies, and it got me to thinking about a growing concern of mine relative to my industry: building experimental satellite systems.

The problem is this: we have, over the roughly 30 years that I've been in this industry, evolved such a plethora of "best practices" and "policies" and "processes" to try to improve the reliability of our satellites that we are being crushed under the cost and effort required to build any given satellite these days.

But the most disconcerting and frustrating part of it is that we have a small group of people who have invested roughly the last 15 years in trying to wean ourselves off all of those overburdening issues, and have found that it is possible to produce our systems with adequate reliability by just 'THINKING' and only using those things that are absolutely necessary.

That includes only producing the documentation that is truly needed to get the job done...and then only producing it in the form that is directly useful to those who need it (including, usually, handwritten notes in engineering notebooks).

Surprisingly, we have been developing systems for 1/3 the cost that everyone else does, and have maintained reliability at or above the levels of those who still cling to the "evolved ways" of the rest of the community.

I especially like the reply to BrowserUk from amarquis, who noted:

"(speaking of the value of testing amarquis writes) Preventing 90% of issues is fairly easy. Preventing 99% is hard. 99.9% is incredibly hard, and so on and so on. Obviously, you have to stop somewhere. And to decide where exactly to stop, you have to sit and think what the real cost of failure is. Will a small fraction of customers be driven away by the bug? Will embedded systems need to be recalled? I think that everybody goes through this "How good does it have to be, how much effort will it take to get there" evaluation when thinking about a project."

I also like the reply from an Anonymous Monk who wrote:

"You can lead a monk to knowledge, but you can't make one think. Some will think. Some will not. I personally think that a discussion of best practices/principles for testing or anything else should begin with encouragement to think. For those who have not the inclination to think, the capacity to think, or the experience to think clearly, a list of rules is better than no rules at all."

What I see in my industry (and in almost any endeavor to create new things...e.g., in almost every aspect of programming) is that it is not just that people don't think (or don't know how to think)...it is an almost paranoid fear of failure, with the result that if they should happen to fail, they would rather endure an ever-increasing load of "practices" and "processes" than have taken the chance to "think" and go against the community's evolved "best practices."

And as I look at our evolved "best practices" I see that they have all evolved from a long series of failures (some minor...some not so minor). Each failure prompted the leadership to say "What went wrong? Let's form a new policy (or practice, or process) to make sure that particular failure isn't repeated." And so another layer of policies, practices, and/or processes is added to the list. Like silt settling over the carcasses of dead dinosaurs at the bottom of a lake, over thousands of failures we end up with such a crushing weight of "processes" and "practices" that we get a lump of coal.

And the very worst of it is that the entire community bands together in their shared fear and, like the reborn bodies in "Invasion of the Body Snatchers," screams out horrible accusations, obscenities, and calls for public stonings whenever anyone tries to do things differently...tries to "think" (as noted by the Anonymous Monk in his reply to BrowserUk).

At the core of the dilemma, perpetually, seems to be "testing". And the rest of the replies to BrowserUk's node focused on that very topic...I guess that's what got me to thinking.

Has anyone else found that growth of "common practices", "best practices", and "policies" to ridiculous levels in their jobs? Hopefully not to the level that I have experienced. But I am curious whether others (especially those who have meditated on BrowserUk's node) have seen and had to deal with it, and why, in your opinion, so many people collectively would rather move towards out-of-control policies, practices, and processes than think.

ack Albuquerque, NM

Replies are listed 'Best First'.
Re: "Practices and Principles" to death
by chromatic (Archbishop) on Feb 29, 2008 at 07:03 UTC
    And the very worst of it is that the entire community bands together in their shared fear and, like the reborn bodies in "Invasion of the Body Snatchers," screams out horrible accusations, obscenities, and calls for public stonings whenever anyone tries to do things differently...

    Exaggerate much?

    I've seen many people stupidly take good ideas too far. That doesn't mean they weren't good ideas. That means people sometimes do stupid things for stupid reasons, despite (or because of) policies that try to prevent people from using their judgment.

    If I found something more effective than test-driven design, I'd use it. Someday I might. I'm flexible. If it doesn't work for your team and you made a conscious and well-informed decision to use something else, good for you.

    If you're going to tell me that I'm leading people down a crushing path of deep darkness and despair because... well, I can't give you any specifics, but I will say that there's a string eval or two in Test::Builder... then I might argue back.

    Them's the breaks, kiddo.

      Wonderful points. Yes, I do tend to exaggerate in my excitement. I try to curb it, but it seems to poke through when I least want it to.

      With regard to test-driven design: I absolutely would never argue against it...or any other useful and productive process...especially with respect to testing. Testing is absolutely necessary and worthwhile. There is, for me, no "crushing path of deep darkness and despair"...testing is enlightenment. So you'd be absolutely justified in challenging (and probably crushing) my argument(s) if that was what I was saying.

      All I was intending to reflect is that it seems that when it comes to testing, I see so much emphasis placed on things like metrics for metrics-sake, test suites that are "required" to be run even when the applications have long since outgrown the utility of those tests. But we are still required to do them because they are part of our policies that, themselves, never get questioned.

      I applaud...and demand...good, meaningful and complete (to the extent that we can make it 'complete') testing of our systems. And getting testing that good takes a lot of thought and experience and desire to produce a successful system. Figuring out how to achieve that without 'breaking the bank', as they say, makes it more of an art, in my opinion, than a science. Having tools like the Test::* modules makes it much more effective and meaningful...not least, in my opinion, because it frees us to do more thinking.

      One of the points that I failed to mention in my original node was that we use Perl as our language of choice to drive and control all of our systems testing; it allows us to quickly develop and implement tests, freeing our engineers to have much more time to think about what should be tested rather than focusing on cookbook prescriptions from the community's collective policies (and in my business we have so many policies that we've invested substantial amounts of money in systems to ensure that we don't miss any of them). It also allows us to automate the entire testing process, resulting in much faster and more repeatable testing.
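
      For what it's worth, here is a minimal sketch of the kind of Perl test script I mean, using Test::Simple; the telemetry values below are made up purely for illustration, standing in for whatever the real test equipment would report:

        use strict;
        use warnings;
        use Test::Simple tests => 3;

        # Hypothetical readings standing in for what the real test
        # equipment would report over its interface.
        my %telemetry = ( powered => 1, bus_voltage => 28.1, frame_ok => 1 );

        ok( $telemetry{powered},                       'bus powers up' );
        ok( abs($telemetry{bus_voltage} - 28.0) < 0.5, 'bus voltage within 0.5 V of 28 V' );
        ok( $telemetry{frame_ok},                      'telemetry frame checksums OK' );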

      But we get almost non-stop criticism from throughout our industry for even using Perl...because it's not part of the 'standard way of testing systems'.

      Anyway, your response was wonderful and I am sorry for leading you (or anyone else) to think that I was railing on testing or any of the Test::* modules...I have used several (though I tend to return to Test::Simple for most of my needs) and all have been most useful and helpful.

      ack Albuquerque, NM
        All I was intending to reflect is that it seems that when it comes to testing, I see so much emphasis placed on things like metrics for metrics-sake, test suites that are "required" to be run even when the applications have long since outgrown the utility of those tests. But we are still required to do them because they are part of our policies that, themselves, never get questioned.

        I agree. Many of the successful projects I've encountered regularly stop and ask themselves "Wait, is this thing we started to do a while back actually working? Is it helpful? Is it valuable?" If it's not, they stop doing it.

        We should encourage people to reflect on their practices and their efficacy and to revise their processes based on that feedback.

        I'm not sure bounding in like a bungee boss and saying "I'm here to challenge the status quo! The prevailing wisdom doesn't always work!" is the way to do that, which is why I responded to BrowserUk so strongly.

Re: "Practices and Principles" to death
by olus (Curate) on Feb 29, 2008 at 12:31 UTC

    One designs tests in order to guarantee certain levels of quality for one's products, and I can see how important a high level of quality must be on projects building satellites, where, should anything go wrong, the product is lost, or the cost of making repairs compared to the cost of preventing points of failure is far too high.

    Given the number of processes you say you have, there will be a risk management office and a quality management office. I'm guessing the risk management office, among other things, will do some calculations on the Expected Monetary Value for situations where strategies of avoiding or mitigating risks are compared to doing nothing. Based on that, doing a lot of tests may be seen as being really cheap compared to failure events.
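
    As a back-of-the-envelope sketch of that kind of comparison (the probabilities and dollar figures below are invented purely for illustration), the arithmetic might look like:

      use strict;
      use warnings;

      # Expected-monetary-value comparison: mitigate (extra testing) vs. do nothing.
      # All figures are assumptions for illustration only.
      my $p_failure       = 0.05;          # assumed probability of losing the asset
      my $loss_if_failure = 50_000_000;    # assumed cost of a lost satellite
      my $mitigation_cost = 1_500_000;     # assumed cost of the extra test campaign
      my $p_after         = 0.01;          # assumed residual failure probability

      my $emv_do_nothing = $p_failure * $loss_if_failure;
      my $emv_mitigate   = $mitigation_cost + $p_after * $loss_if_failure;

      printf "Do nothing: expected loss %.0f\n", $emv_do_nothing;
      printf "Mitigate:   expected cost %.0f\n", $emv_mitigate;
      print  "The extra testing pays for itself\n" if $emv_mitigate < $emv_do_nothing;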

    Documenting what went wrong is of extreme importance for making sure it won't happen again. When doing a new project, one will not forget to take measures on matters that may be similar to previous ones. So 'lessons learned' from previous projects help in improving quality and reducing risks for new projects, and also in preventing occurrences where money would have to be spent to implement contingency plans or workarounds.

    The objective of having all those processes is to lower the cost of projects. Strategies for quality have, of course, associated costs. Is it worth incurring such costs? Well, it depends, when compared to the costs incurred when your products fail.

    One necessary process for the project is to revise the documentation and identify what is applicable to the current project, as there may be costly and time-consuming activities that bring no value.

    All those processes are invaluable as they help in identifying the total project cost to present to your clients. Either they accept it or they don't. But you know what it will cost the company to do something for the client, and you can decide whether the project is an opportunity or a financial disaster for the company.

      I think I understand what you're talking about. At the root of what I think you're saying, I have nothing in particular to contest.

      I interpret what you're saying as trying to get me to consider the issue from the 'buyer's' (or customer's) point of view. But I am the buyer/customer of these systems: my job as Chief Systems Engineer is to advise our Program Managers, who hold the checkbooks, on what and how we expect our suppliers to produce the systems we are responsible for. I am not meaning to be at all critical of what you were saying; I do think your points are largely correct. I just want to be sure you understand that it seemed like you may have presumed that I work for a supplier and might not appreciate the customer's point of view.

      But to the excellent points you presented: my problem is that all of those analyses and evaluations and considerations that you spoke of actually rarely occur. Often that is because we can't find anyone who can actually figure out what went wrong, since when we lose an asset it isn't around for us to do a post mortem on...it's either in the ocean, blown to bits when destroyed during launch, or floating around in space. In addition, it seems, we have too few 'experts' to do all the things that you pointed out.

      So, failures happen; but way too frequently no one tries to (or can) really get to the 'root cause'. Instead, tiger teams of people try to imagine what 'might' have happened, and then try to determine what testing would have proven or disproven such a hypothesized cause. Those tests then become, first, required tests for all future projects and, second, folklore that no one seems to remember why they were even dreamed up.

      For what it's worth, the same pathology shows up in most every aspect of our systems production (concept formation, system requirements production and derivation, system design and production, testing, and on-orbit operations). My focus was on testing because, for our particular work, it is the area most malignantly infected with the pathology: the tendency to substitute 'do what we always do' for 'thinking'.

      With respect to your comments about 'cost-benefits' and those types of things...they are, for me, certainly noble goals and SHOULD be what is being considered. But I know of no real case where anyone has done any such analyses. They seem to just reason that such analyses SHOULD validate the mass of processes, practices, and policies...but no one (to my knowledge) ever seems to actually do them. What I see is that they follow their 'SHOULD validate' with very loud proclamations that almost quote what you wrote. In your case I believe it was meant as truly constructive observation; the proclamation litanies, on the other hand, seem all too often to be just a smoke screen (hence, I suppose, the cause for part of my initial reaction to what you had written).

      In fact, what we have been doing over the past 10 years or so to try to change it (i.e., trying 'thinking' instead of just 'doing it because that's what we've always done') has resulted in producing our last 10 satellites with demonstrated reliability...based upon the number of systems fielded compared to the number that have failed...that matches or exceeds the last 10 to 12 produced in the 'traditional way'. And we have produced them for total costs that are between 1/3 and 1/4 of the 'traditional' systems' costs, and on a schedule that is getting quicker (rather than the 'traditionalists', whose schedules are creeping longer): typically 2-3 years compared to the 5-7 years of the 'traditional' ones.

      So based upon that evidence (which I would, of course, consider somewhat anecdotal, since the analyses that you suggested have not been done on our systems either), I would have to say that at face value the analyses you suggested would likely end up arguing against the 'traditional' approach.

      Of course your observation that in my business the cost of losing a system is very high...many tens of millions of dollars...is absolutely true; so you are definitely correct in saying that the amount, focus, and considerations for testing are heavily influenced by that consideration.

      But at the end of the day, no matter what the costs involved, I think we should still be working to be sure that we put 'thought' into our testing and be frugal but responsible and accountable for ensuring that the testing we do is what is really needed...not just because that's-how-we've-always-done-it.

      That's my thesis...my story...and I'm sticking to it. ;)

      ack Albuquerque, NM
Re: "Practices and Principles" to death
by jdporter (Paladin) on Feb 29, 2008 at 13:34 UTC
    only producing the documentation that is truly needed to get the job done...and then only producing it in the form that is directly useful to those who need it

    Coincidentally, just yesterday castaway posted this reference in chat: TAGRI: They Aren't Gonna Read It.

    A word spoken in Mind will reach its own level, in the objective world, by its own weight

      Oh, my! I REALLY like that!

      I think I just found a new acronym for my lexicon!

      Thanks jdporter...and castaway!

      ack Albuquerque, NM
Re: "Practices and Principles" to death
by dragonchild (Archbishop) on Feb 29, 2008 at 14:56 UTC
    A few thoughts:
    • Shit happens. There is absolutely no way to prevent all badnesses, period. That's why we have insurance.
    • Failures sometimes happen through lack of enforcement, not lack of procedures.
    • The procedure that requires a new procedure for every failure is, itself, a failure.
    • If the loss of a single satellite is such a major disaster, then maybe making satellites should be made cheaper. I personally like working in industries where a 1-5% failure rate is not only expected, but hoped for.

    My criteria for good software:
    1. Does it work?
    2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
      If the loss of a single satellite is such a major disaster, then maybe making satellites should be made cheaper.

      How? You can't scale that very well.

      When you sell an electronic device, you can drop the cost by selling 100x more items.

      If you do that with satellites and put them in orbit, the leftovers of the old satellites will bombard and destroy your new ones.

      And even if making a satellite can be made cheaper, it still costs $huge_amount to put them in orbit.

        (Note: you're entering into an area that I have a lot of interest in and have thought about for years, so please be prepared to back yourself up.)

        You're presenting very real issues. Sounds like there's a number of excellent industries waiting to spring up. For example, garbage-collecting dead satellites and other debris. No, I have no idea how this would be done. Sounds like the perfect use for a field of some sort on a drone sweeping around the earth in an orbit that traverses the entire spherical shell of a given height. If all known satellites are in a database somewhere (another business opportunity similar to credit bureaus), there isn't a problem. If finding these satellites is a problem, that's another business opportunity. In other words, the cost of managing of all these items can easily be handled by entrepreneurs in the free market.

        Now for lift costs. This is an interesting problem because it makes a lot of assumptions that may not be valid for more than a few years. So, let's talk about this.

        Let's look first at what cannot be changed - the amount of energy it takes to raise the potential energy of a certain mass such that it is in LEO. That energy needs to be applied to the mass in such a way that it raises the potential energy without damaging the mass. The way that has been done thus far has been to use rockets, which are extremely inefficient. And they're more inefficient the closer to the ground you are. If we could only start our rocket halfway up, we would cut our energy needs by 75% (Inverse Square Law). There are easily a dozen solutions here, but they all have a rather high capital cost. Amortizing that cost is the key.

        Now, why do we have to build satellites on Earth? Why can't we build them in orbit? If we could do so, we wouldn't have to make them so sturdy (to survive liftoff), which means they would require significantly less material. Since you'd have a foundry in orbit, you probably have power generation in orbit. Why not share that power generation capacity through beaming (a proven, if unused, technology)? Now, all you need is the actual purpose of the satellite. A lot easier to work with.

        Furthermore, why do we have to have people in orbit to build these satellites? The cost of the ISS would drop by about 90% if it didn't need people on it. I'm not advocating a human-free space exploration program. In fact, I'm not advocating a space exploration program at all. I'm coming from the perspective of a space occupation program.

        Basically, you find that the marginal cost of a given product (such as a satellite) drops dramatically if the proper infrastructure is in place. Very much like Perl when it comes to programming. I know you've discussed how productive you are in Perl vs. other languages in the past. That's due to the infrastructure you have. That infrastructure cost over 1_000_000 man-years (counting CPAN), but has been amortized into saving that many man-years every year. That's all that's needed in space, too.


        My criteria for good software:
        1. Does it work?
        2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?

      You are, IMO, a very insightful and wise monk!

      You just made my day...

      And may have just given me a couple of new "Axioms of Systems Engineering" which I have been working on from my 30 years in the business...if you give me permission...and I promise to always recognize you as the author.

      On the other hand, maybe they're too much for the mere mortals that I work with to handle. ;-)

      ack Albuquerque, NM
Re: "Practices and Principles" to death
by zentara (Archbishop) on Feb 29, 2008 at 16:23 UTC
    Preventing 99% is hard. 99.9% is incredibly hard..... you have to stop somewhere.......decide where exactly to stop, you have to sit and think what the real cost of failure is

    Think cost of insurance!

    From my engineering school back in the '70s (things may be more precise now), 3 decimal places of accuracy was considered as good as you could get, because 4 decimal places involved estimating so many variables (especially temperature) that it would be fruitless to attempt. (Also, we were limited by slide rule accuracy. :-) )

    My point is that you only need to test to a level such that a court won't find you negligent if there is a failure. After all, money is what it is all about; being sued for negligence after a failure is what you need to avoid.

    So it seems that in this day and age of insurance for everything, you would test to a level that makes the cost of insurance still acceptable. If testing to 99.99% saves you 100,000 on insurance, but requires an extra year of a team of programmers working, is it worth it?


    I'm not really a human, but I play one on earth. Cogito ergo sum a bum

      That's true (Especially for contract workers), but I think that the advice is more general. It doesn't matter what specifically the costs of failure are. They may be lost sales to unhappy customers, they may be bad reviews/press, or even insurance or being sued.

      What matters is that you sit down, before trying to create a system, and do good old-fashioned cost/benefit analysis. What techniques get your team from point A to point B and leave you with the most resources at the end of the day? Maybe automated Test::*-based design is the way to go. Or maybe the cost of that method, both in training your team and in actually implementing it, outweighs the benefits. You know your people best, and therefore you are the only one equipped to make that choice.

      I think that BrowserUK and I disagree on the usefulness of Test::*. Maybe it is because he's a more experienced programmer than I and it just represents overhead for him. I don't know. I think we do, however, agree on the fact that you need to think about what you are doing and whether or not it makes sense to you, rather than burning a pig in a hut/cargo culting/whatever you want to call "pick whatever is popular and just do it."

        I grew up in Detroit, and the big problem back then was "how much money do you put into highway safety?" Did you know that with enough money, you can make a 99.99% safe highway? But would they spend it?....no... bad publicity be damned. What it ultimately came down to was the cost of a human life. They could compute all the losses due to insurance companies paying out death and accident costs, and it was cheaper than making the highways safer. Same with the Pinto gas tank. Remember them exploding? It was cheaper to pay off all the people suing rather than fix the problem.

        So nowadays it comes down to what the value of a human life is, or for that matter, the value of a corporation. It's quite common these days for a corporation that gets huge bad publicity to declare bankruptcy and re-form under another name.

        Since this touched on satellites and space, the issue is definitely on the table....... what is the value of an astronaut's life?


        I'm not really a human, but I play one on earth. Cogito ergo sum a bum
      There's more to failures than monetary costs - reputation loss, as ack wrote, and there's no insurance against that.

      I'm not into space stuff whatsoever, but recently I've been working for a manufacturer of satellite equipment. They produce amongst other things power amplifiers for satellite emitters - those beasts that produce 400W worth of transmission energy.

      They explained to me that whilst most if not all satellite components are redundant and connections can be routed inside the satellite to overcome an outage, their equipment must perform 100%. Not because of monetary failure costs - they have insurance anyway, I guess - but because of the reputation loss it would mean if a newspaper ran the line "Satellite outage due to power amplifier failure produced by company XY". They could then go looking for other things to produce...

      --shmem

      _($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                                    /\_¯/(q    /
      ----------------------------  \__(m.====·.(_("always off the crowd"))."·
      ");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}
Re: "Practices and Principles" to death
by Herkum (Parson) on Feb 29, 2008 at 16:57 UTC

    The thing I have found is that people tend towards extremes: they believe that you have to test for everything, or they test nothing.

    The people who test everything believe that they can prevent anything bad from happening by writing enough documentation, or producing X number of tests, or X amount of code coverage.

    The problem is that the testing and documentation become the objective. People who delve deeply into the details lock themselves into a particular process/solution. You cannot change your design because it would involve changing your documentation and your tests, and that requires a lot of effort.

    The other group believe they are infallible and that testing and documentation are extraneous and unimportant, so they don't bother with them.

    The problem is that people make mistakes, and those mistakes are found only when the product is sent out to the customer.

    It takes some thought and work to find a good middle ground, but most places don't really work to do that.

      The thing I have found is that Herkums tend towards extremes. They believe that those who rarely test believe themselves infallible. And they believe that those who test extensively believe that testing can prevent any problem. :)

      I don't think I've ever met anyone who thought their coding was infallible (certainly not for very long) nor that thought that their testing was infallible.

      Most of the coders I've met who did little testing (all coders test) had some awareness of the potential benefit of a more rigorous approach to testing. They just hadn't yet allocated the time to make enough progress on creating and adopting that more rigorous approach.

      Most of the coders I've met who did a lot of testing were very much aware of the fact that their testing was imperfect. They would, when practical, spend some of their time budget trying to improve the efficacy of their testing (certainly not always by increasing the number of tests) and reducing the costs of testing. With a limited time budget (which I find is the rule, even when it isn't imposed upon you by your boss), there are always trade-offs to be balanced.

      The horror stories in these threads don't sound to me like a problem with programmers believing too much in testing. They sound to me like organizations having developed too much / too rigid of a bureaucracy.

      I have been lucky to have only worked for very brief periods of time in companies that were so large and so old that they had huge, rigid bureaucracies. This is in part because I don't apply for jobs at such companies and also because I don't stay very long at jobs in such companies after the small company that I was working for was purchased by the huge, old company.

      My brief stays have shown me that appealing for change to the bureaucracy is daunting and very nearly pointlessly doomed. The rigid, structured hierarchy of rules is augmented by a rigid, structured hierarchy of employees with rigid, structured vested interests. The only hope (nearly) is to fish out some nuggets of worth (either code or people) and transplant them into different location(s) lacking the accumulated encrustment. (This is true even when this is attempted within the umbrella of that company.)

      And I also don't think that the most significant problem is that people don't "think". Certainly that is a fairly common problem, though, to a small extent for most people and to a larger extent for many people.

      The more significant problem (in my opinion, with regard to the problem scenarios touched on in these threads) is that most people don't believe that they can influence the existing power structure so they don't try (and a few people just go about it very badly and so their attempts fail).

      People buried under excessive testing requirements certainly do "think". Day in and day out they think "why do I have to do all of this nonsense?". They even identify particular parts of their job that seem the most pointless, that produce the least "bang for the buck". That is a very valuable piece of information to identify. Too bad they (mostly) never do anything useful with that information.

      This is very much like the people who show up at PerlMonks (both "regulars" and first-time callers) asking questions that sound like "is there a type of hammer that works better for hammering in dry-wall screws?". Eventually the denizens coax some explanation which is that "the boss/client heard that nails are inferior and won't let us use them / refuses to buy a power screwdriver" and therefore "there is nothing I can do" and "that was no help. thanks, anyway. i have to go pound all of these screws in now. bye."

      That is professional negligence.

      If you see something stupid / pointless being required, then it is your responsibility to effectively communicate this observation to the powers that be (if you feel any responsibility toward the success of the enterprise). And "effectively communicate" can be the rub. If the existing power structure is not annoyingly dysfunctional, then usually one or both of the following things will result:

      • The powers learn from your eloquent explanation and the problematic policy / decision is improved
      • You learn things you didn't know and that make you realize that things aren't as stupid and pointless as you thought

      Both of those results are good things that are likely beneficial to the enterprise.

      And if the existing power structure is annoyingly dysfunctional, then you now have a new observation that it is your responsibility to effectively communicate. I've already admitted that there are lost causes here where my advice would be "Get out!". But most organizations aren't lost causes and all organizations are imperfect and most participants will see places for improvements (sometimes erroneously).

      So I guess my view is that most organizational stupidity is due to a lack of effective communication not due to a lack of thinking.

      In volunteer organizations, this "effective communication" very often has a lot more to do with creating than with talking. Yammering away about some "better way" is very often significantly less effective in a volunteer organization than spending more of your time implementing a better way. Yammering away about "the current way sucks", is usually even less effective (though it can be a starting point, of course -- though starting at just that is usually insufficient of a starting point, despite how common it is).

      So I think that better questions than "How do we get people to think?" are "How do we get our workers to believe that they can influence the powers that be?" and "How do we help workers communicate their observations and ideas effectively?".

      And for most of you reading this, the question should be "Why don't I try to communicate effectively with the powers that be?" or perhaps "more often" or "more effectively", depending. Learning how to do that will do you a world of good and will likely do a world of good for those around you.

      (Updated for spelling thanks to ysth's prodding.)

      - tye        

      Wow! An even more succinct presentation of what I was trying to say. Thank you, Herkum.

      ack Albuquerque, NM
Re: "Practices and Principles" to death
by adamk (Chaplain) on Mar 01, 2008 at 09:31 UTC
    The biggest problem with policy development, as I see it, is that they are often created in a counter-productive environment.

    In the corporate example, policies are created to enforce certain principles of interest to, say, Human Resources, but the time involved in following those practices is not accounted against that department.

    Thus, there is no incentive for that department to institute EFFICIENT processes, creating a net negative for the company.

    The same thing applies in any environment where policies are developed without taking responsibility for the costs of those policies.

    Situations like the US, where laws are often created as "unfunded mandates", are the best example.

    Mandating that some task be undertaken without actually paying for it is just insane.

    There are ways around some of these problems, for example by making sure those processes dictate implementation thresholds: if a task/process/project is below the threshold, or the risks associated with failure are below that threshold, then the process doesn't need to be applied.
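
    A minimal sketch of such a threshold gate in Perl (the threshold values and project fields are invented purely for illustration):

      use strict;
      use warnings;

      # A policy only applies when the project's size and failure risk
      # both exceed agreed limits; the numbers here are assumptions.
      my %threshold = ( budget => 250_000, risk_score => 3 );

      sub policy_applies {
          my ($project) = @_;
          return $project->{budget}     >= $threshold{budget}
              && $project->{risk_score} >= $threshold{risk_score};
      }

      my %small_tool = ( budget => 40_000, risk_score => 1 );
      print policy_applies(\%small_tool)
          ? "Full process required\n"
          : "Below threshold - a lightweight process will do\n";
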
      Mandating that some task be undertaken without actually paying for it is just insane.
      Calculated move to pander to yer constituency, to pad your resume, and to increase power.
        s/insane/a massive conflict of interest/
Re: "Practices and Principles" to death
by perrin (Chancellor) on Mar 03, 2008 at 22:14 UTC
    Has anyone else found that growth of "common practices", "best practices", and "policies" to ridiculous levels in their jobs?

    Uh, no. In the jobs I've had over the years, I've found that most organizations never write a single automated test and engage in rampant cowboy coding. Even in the Java world, where heavy process is not usually seen as a problem, many projects seem to be run in a haphazard way with no real rules and no tests.

    I think you're extrapolating too much from one bad experience. Testing and other good practices are still rare in software development jobs (though more common in open source), and anyone trying to change that needs all the encouragement they can get.

Re: "Practices and Principles" to death
by sundialsvc4 (Abbot) on Mar 03, 2008 at 21:25 UTC

    I generally find that talk of “best practices” comes from the timid, the clueless, or the management. :-D

    As you observe, it very quickly devolves into a quest for affirmation by those who wish to pontificate for one reason or another. If you launch an appeal to “best practices,” presto! ... instant crowd. Nobody really understands what you are saying (nor do you), but at least they are all nodding in agreement because all those “experts” ... they published a book, didn't they? ... said it first.

    I think that there definitely are “practices” that consistently show themselves to be fruitful among experienced practitioners, and that there are other practices that clearly don't, but the truly experienced discuss them. Discussions and quarrels of that sort occur here every day. But the practitioners don't “list them and name them and sit around endlessly discussing the list.” The talk among experienced practitioners amounts to what an old-time engineer would call “kinks”: practical bits of immediately-useful knowledge, passed along around the cracker barrel.

    And since we seem to be baring our “pet peeves” here ...

    If I do not again hear the word “patterns,” I will be a happy boy. And yet, I sorely wish that I myself had written such an attractive but vacuous book of truisms and sophistry. If I had done so, I probably would not have any further need to write computer software. Instead, I would regularly walk up to podiums in hotel ballrooms while a roomful of upper-managers applauded. I would earn my bountiful living by saying nothing at all using just the right words. (I would not have given any of them anything of value with which to cope with their application-deployment problems; I would only have given them a lexicon by which to name them. But I would be wealthy nonetheless, because I knew they couldn't tell the difference.)

      Spoken as someone who doesn't understand metainformation. Bravo. You have added nothing to the conversation.

      It is only when something is named that it can be discussed. Period. It is in how the naming boundaries are made that true understanding can come about. Those patterns you deride allow me to discuss rather high-level concepts with other practitioners of the art in far fewer words. This allows me to communicate more clearly and, more importantly, to think more clearly. Abstraction is one of the few attributes of humanity that separates us from the animals, that allows us to survive when we are, frankly, one of the weakest animals in the kingdom.

      Or, to put it bluntly, discussing programming without the metaprogramming is like programming without subroutines.


      My criteria for good software:
      1. Does it work?
      2. Can someone else come in, make a change, and be reasonably certain no bugs were introduced?
        You have added nothing to the conversation.

        That is, to say it charmingly, not quite accurate. sundialsvc4 added enough to incite you to answer. Now, if that isn't adding something to the conversation, what is it?

        update: yes, I won't discuss whether sundialsvc4's post or your reply are substantial; but nothingness often begets enlightenment. I didn't add anything, either...

        --shmem

        _($_=" "x(1<<5)."?\n".q·/)Oo.  G°\        /
                                      /\_¯/(q    /
        ----------------------------  \__(m.====·.(_("always off the crowd"))."·
        ");sub _{s./.($e="'Itrs `mnsgdq Gdbj O`qkdq")=~y/"-y/#-z/;$e.e && print}
Re: "Practices and Principles" to death
by sundialsvc4 (Abbot) on Mar 17, 2008 at 14:19 UTC

    The biggest improvement that I've found over the years is Microsoft Project Server, with its Web interface. Bang, now you have something that everybody can get to with their (Microsoft...) web-browser, and they can start working out what this proposed new project is actually supposed to be before they try to say they're gonna start building it.

    (No, I'm not a Microsoft fan-boy, but probably every big company's got a copy of this software lying around somewhere, so you can implement a process-improvement without having to argue for a purchase.)

    Ninety percent of programming is just thinking. Taking long walks. Staring out the window. And... meeting. When you've designed and thrown away two or three models “on paper” (having not wasted any time at all trying to build them) and then, as if by magic, “the right thing” (or perhaps a rock-solid implementable piece of it) appears out of the gloom of all those possibilities, then you'll see what I mean. Why, it seems so obvious. You draw up the project plan and it “just works.” It clicks. “We can do this. (Here's why...) ...We know we can. Let's get started.”

    Maybe for the first time in your life your project is not in “Titanic mode.” The icebergs are right there on the chart and you just plot your course right around them. The programmers know what the target is, and the documentation people do too, and the folks who'll have to sell it (internally or externally) can see that “this is right.” The ship docks right on time, maybe a little bit early, with fuel to spare and with a team spirit like you've never seen. You “just did it...” just like you planned. Looking back, you wonder why you ever tried to do it any other way. After that experience it just seems insane that you ever thought you could possibly build a house without plans.

      I find myself unable to refrain from pointing out how much your argumentation in this (and related) threads resembles a train of thought caricatured in last Sunday's Dilbert. Perfect timing, I guess :-).


      Your philosophy is better suited for rock carving than web design. -- Scott Adams
