in reply to The future of software design
As others have said, your analogy is flawed.
I think it is perfectly valid to compare the software industry with either automobile manufacturing or civil engineering. The flaw comes from picking out two specific parts of the two industries for your comparison.
| Attributes | Automobiles | Software |
|---|---|---|
| General purpose, large volume, "for the masses" | General Motors, Ford, VW-Audi, Renault, Fiat, Nissan, Toyota, Honda | Microsoft, IBM, CA, Logica, Fujitsu |
| Specialist/luxury | Cadillac, BMW, Lexus | Apple, Sun Solaris |
| High performance | Ferrari | Cray |
| Enthusiast/homebuild | Lotus Seven | Linux |
Of course, there could be many more categories and sub-categories, and we could have endless debates about which category any given manufacturer/car/OS/package should be in. The point is, despite the marked differences in raw materials and replication costs, when you look at the broad spectrum of the two industries, there are valid analogies to be made. It is only when you get into the specific details that these tend to break down.
<Minor personal rant>
If the software development and manufacturing industry continues to get away with charging relatively high prices whilst hiding behind "no warranty, stated or implied" licensing agreements, and so legally avoiding culpability for the costs and damages incurred as a result of their failure to test and/or fix, then the mess of security and other problems (usually attributed solely to MS, but affecting many other manufacturers too) will continue.
Can you imagine buying a car if you were forced, up front, to sign an agreement waiving responsibility for its safety?
Without the laws that allowed class action suits to be taken against the auto industry, we would all still be driving cars with the unenviable safety records of those developed in the '60s and '70s.
Were there similar legal remedies for failures in software products, it's quite possible that the security scares of Melissa, Code Red and many others would have been trapped and dealt with during development.
Government-sponsored safety testing might also help.
As a footnote, in deference to the FSF guys: if you ask your neighbour to fix your car's clutch for a beer, you don't hold him to the same standards as, say, Firestone, who manufacture and sell mass-market products. Nor, if you buy spares for your '70s Mustang via eBay, do you expect a one-year warranty and full liability.
</Minor personal rant>
In terms of the future of the industry: in the same way that many of the mundane tasks of car manufacture--spot welding, crank grinding, panel forming etc.--are now done by robots and other automated machines, I see the software industry moving to automated testing, schema design, log processing and the like.
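Perl already ships much of the machinery for the testing part. Here is a minimal sketch using the core Test::More module (the add() function is just an invented example):

```perl
# A minimal automated test script using Perl's core Test::More module.
use strict;
use warnings;
use Test::More tests => 3;

# A trivial function under test (invented for this example).
sub add { my ($x, $y) = @_; return $x + $y }

is( add(2, 2),  4, 'add() handles small integers' );
is( add(-1, 1), 0, 'add() handles negative operands' );
ok( defined add(0, 0), 'add() always returns a defined value' );
```

Run it under prove or as a plain script and it emits pass/fail output that a build system can check automatically--exactly the kind of mundane work that should not need a human.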
Logging is a good example of a common software task that is desperately in need of standardisation. The basic requirements of logging are pretty standard regardless of the application doing the logging: who, what, when, why, how much. MS OSes have (IMO) made a (small) advance over Unix-like OSes in this regard with the EventLog mechanism: a standard interface that any application can use to log events in a centrally organised place. This greatly simplifies the detection of inter-application event timing and similar maladies. There is also a utility that allows these events to be viewed and queried (in a very limited way), and another API that allows further processing of the logged data.
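On the Unix-like side, the nearest equivalent is syslog. Here is a minimal sketch using Perl's core Sys::Syslog module to record the who/what/why of an event in a centrally organised place (the program name, facility and message are invented for illustration):

```perl
# Logging a structured event to the system-wide log via the core
# Sys::Syslog module -- the Unix counterpart to the EventLog idea.
use strict;
use warnings;
use Sys::Syslog qw(openlog syslog closelog);

# Who is logging (identity), how (attach the PID), where (facility).
openlog( 'myapp', 'pid', 'user' );

# What happened and why; syslog supplies the 'when' itself.
syslog( 'info', 'user %s completed batch run %d', 'bob', 42 );

closelog();
```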
Of course, this still stops way short of the ideal. Even with this, it's a common task to move the information from the proprietary data store to a centralised (usually relational) datastore so that cross-system correlations can be found and reported. Why does every company, big and small, need to home-brew this type of solution?
Personally, I think that every OS should have a single, central datastore for "system information". I think I would favour an OO-datastore solution, as this seems to have greater flexibility than relational DBMSes. If all system information existed in such a central place, the need for so many scripts that massage one set of data so that it can be utilised in conjunction with a disparate set of related data would go away. Additionally, once a single system's data is stored in a single, standardised place, it becomes a short step to make these system stores talk to each other via the network, and cross-systems correlation, control and reporting suddenly become much easier (there is a sketch of the idea below).
(Anyone remember Pick?)
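To give a flavour of what I mean, here is a sketch that uses DBM::Deep from CPAN purely as a stand-in for the hypothetical OS-level OO datastore (all the keys and values are invented for illustration):

```perl
# One hierarchical store for "system information". DBM::Deep (CPAN)
# stands in here for the hypothetical OS-level OO datastore; it
# persists nested Perl structures to a single file.
use strict;
use warnings;
use DBM::Deep;

my $store = DBM::Deep->new( 'system_info.db' );

# Any subsystem writes its state under its own key...
$store->{network}{eth0}   = { ip => '192.168.1.10', mtu => 1500 };
$store->{services}{httpd} = { state => 'running', pid => 1234 };

# ...and any other tool reads it back, no massaging scripts required.
print "httpd is $store->{services}{httpd}{state}\n";
```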
Note though, that automating these types of tasks does not mean the end of software development. It just shifts the emphasis away from the mundane and towards the interesting. Someone will still have to analyse, specify, code, test and maintain the software that performs these functions. Those will, I think, be the desirable (and well-paid) jobs in the future of software.
I also predict a move away from low-level languages like C in favour of higher-level, interpreted languages like Perl (and Java, Python, Rebol, Ruby etc.). I think the next VisiCalc (killer application) will as likely as not be an interpreted language with a compile-to-native-code-and-optimise option, supporting a flexible, extensible, object-oriented datastore. For maximum benefit, this language would have network awareness and data persistence, along with code maintenance (pretty printing), version control (CVS) and code production (editor) built in.
Why, given that Perl knows how to parse Perl, do I need to use 'third-party' products to re-parse the code in order to syntax-highlight it, pretty-print it or detect the differences between versions? These tasks are considerably easier to do at the byte-code (op-code) level than at the source level.
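Perl already exposes a little of this: the core B::Deparse module regenerates readable source from the compiled op-tree rather than re-parsing the text. A minimal demonstration (the deliberately messy subroutine is an invented example):

```perl
# Regenerating readable source from the compiled op-tree with the
# core B::Deparse module -- no re-parsing of the source text needed.
use strict;
use warnings;
use B::Deparse;

# A deliberately messy subroutine (invented for this example).
my $messy = sub { my($x,$y)=@_;if($x>$y){return $x}else{return $y} };

my $deparse = B::Deparse->new('-p');   # '-p' adds clarifying parentheses
print $deparse->coderef2text($messy), "\n";
```

The same machinery is available from the command line as perl -MO=Deparse yourscript.pl.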
This could, I think, be the future of language development. It could also be the future of Perl 6. Though from my limited perspective, it would require many more hands at the pumps to fulfil these 'requirements' in a timeframe that would allow it to be the leader. The amazing thing to me is that the obviously willing and capable resources of 'part-time firemen' available to man these pumps are not being exploited.
Of course, it would also require considerable organisation and management, and with that comes the spectre of cost, and inevitably charges to offset that cost. I think that this issue could be addressed without the need to compromise the basic tenets upon which Perl was based, given the goodwill of the community.
However, there may be other compromises that would be less palatable: the need to accept the real-world situation of which systems exist and in what numbers, and the possibility of moving away from the GNU licensing arrangement to something that allowed cost recuperation in some manner. The latter might be a lot harder for some people to accept, and could risk dividing the aforementioned goodwill, which would not be a step to take lightly.
Anyone up for helping me develop an OO-datastore?