Nothing is 100% reliable. Things can still be useful, though. When a baseball player goes up to bat, does he expect to hit a home run every time? Of course not.
Don't like that example? How about UDP? When you send a UDP packet across the network, it is not guaranteed to arrive. The protocol has still been used for many things for a long time.
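That fire-and-forget behaviour is easy to see with Python's standard socket module (a sketch; the loopback address and port 9999 are arbitrary, and nothing needs to be listening):

```python
import socket

# UDP is fire-and-forget: sendto() returns as soon as the datagram is
# handed to the local network stack; nothing acknowledges delivery.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(b"hello", ("127.0.0.1", 9999))  # may never arrive
sock.close()
print(sent)  # bytes handed off to the stack, not proof of delivery
```

Note that `sent` only tells you the datagram left your machine's send path; whether it ever reaches the other end is exactly what UDP does not promise.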
-Andrew.
| [reply] |
I would not want any of my database programs to have a data collection average of even a Hall of Fame baseball player. A Ted Williams program, for example, would get its data less than 40% of the time.
UDP is, in and of itself, unreliable. But programs using it are not necessarily so. Would you use an unreliable NFS? I doubt it, but NFS uses UDP. It also uses RPC, which has a request/reply protocol that allows for retransmissions should a reply not arrive within a specified window. Other UDP applications (e.g., video or audio streaming) may tolerate a certain level of loss because they know that some percentage of dropped packets will be undetectable by the human using the application. The key is that they have methods, within or layered above UDP (e.g., sequence numbers), that allow the receiver to monitor and detect the (un)reliability of the underlying protocol.
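The sequence-number idea can be sketched in a few lines of Python (`detect_gaps` is a hypothetical helper, not from any real stack; real protocols such as RTP add timestamps and retransmission or concealment on top):

```python
def detect_gaps(received_seqs):
    """Given the sequence numbers the receiver actually saw (in arrival
    order, already sorted here for simplicity), return the ones lost
    in transit."""
    expected = range(received_seqs[0], received_seqs[-1] + 1)
    seen = set(received_seqs)
    return [n for n in expected if n not in seen]

# The sender numbered its datagrams 1..6; datagrams 3 and 5 were dropped.
print(detect_gaps([1, 2, 4, 6]))  # -> [3, 5]
```

With that information the receiver can choose its policy: request retransmission (the RPC approach) or simply tolerate the loss (the streaming approach). Either way, the unreliability is detected, not silent.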
Reliability/speed tradeoffs are acceptable for services but the (un)reliability has to be detectable.
| [reply] |
All of this is true. I guess when I say reliability, I mean the extent to which a user or developer mistake or bug causes the system to fall over.
I'm talking specifically about the database design. Say someone makes a typo in a column name: the database would happily autovivify a new column for them, just as Perl creates a new hash entry when you make a typo in a hash key.
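The same tradeoff can be shown in Python, using `collections.defaultdict` as a stand-in for the autovivifying column (a sketch; the "surnmae" typo is the deliberate mistake):

```python
from collections import defaultdict

# A plain dict catches the typo immediately, at the point of use:
strict = {"surname": "Williams"}
try:
    strict["surnmae"]            # typo in the key
except KeyError:
    print("typo caught")

# A defaultdict autovivifies instead, like the Perl hash or the
# schemaless column: the mistake slips in silently.
loose = defaultdict(str)
loose["surname"] = "Williams"
_ = loose["surnmae"]             # typo silently creates a new entry
print(sorted(loose))             # -> ['surname', 'surnmae']
```

The strict version fails loudly at the site of the mistake; the loose version carries the bad key along until some later query quietly comes back empty.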
For some applications, such as prototypes or small systems, or simply systems that don't matter that much or have short lifetimes - this is an acceptable and reasonable tradeoff.
| [reply] |
True, all too true, but when I put something in a database, I expect to get it back. It is bad enough as it is that probably no complex piece of software is without bugs, but deliberately adding unreliability is too much if you ask me.
CountZero "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
| [reply] |
Of course I don't mean that the database isn't reliable in that sense.
When I speak about reliability, I mean the extent to which the system tolerates a developer or user error. When a mistake is made, a less reliable system falls over ungracefully; a reliable system catches the error as soon as possible.
One of the advantages of typing is that it catches garbage data earlier, at the moment you try to assign a value of the wrong type. A typeless system happily takes the data into the system, causing problems later.
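A minimal sketch of that difference in Python (the `set_age_*` helpers are hypothetical, made up for illustration):

```python
def set_age_typed(record, age):
    # Typed: reject garbage at the point of assignment.
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    record["age"] = age

def set_age_typeless(record, age):
    # Typeless: accept anything; the error surfaces much later.
    record["age"] = age

record = {}
set_age_typeless(record, "twelve")   # accepted silently
# ...much later, far from the mistake, something computes
# record["age"] + 1 and gets a TypeError with no hint of
# where "twelve" came from.

try:
    set_age_typed({}, "twelve")
except TypeError:
    print("caught at the boundary")
```

The typed version fails at the boundary where the bad value enters; the typeless version defers the failure to whichever distant piece of code first does arithmetic on the field.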
| [reply] |