in reply to Re (tilly) 1: Mechanisms for Fault-Tolerant Perl Scripting
in thread Mechanisms for Fault-Tolerant Perl Scripting

I think two things about my question were left implicit, so let me now make them explicit:
  • I like the database idea. The only catch is state preservation. In a language like Lisp, where data and functions are interchangeable and both have a unique printed representation, executable "codelets" can be written, and both the positions and the "codelets" stored in a tilly-esque transaction. In Perl, code blocks have no printed, re-executable representation unless you resort to string manipulation to build and eval your code, which is highly unstructured and error-prone (see the sketch after this list).
  • What we have here is what Douglas Hofstadter called GOD in "Gödel, Escher, Bach: An Eternal Golden Braid". GOD == God Over Djinn, so each god has a higher God, ad infinitum. Once you have the database serving as a god for the simple Perl code, you have to ask: who is the God for the database code? And then the God for that?

    And then we conclude that linear thinking leads to the unreal, impossible, and irrational concept known as infinity: scientific man's attempt to justify a day-to-day practical form of reasoning (linear) when cyclical reasoning is actually a bit more suited to reasoning about Universal truths.
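
By way of illustration, here is roughly what that string-manipulation route looks like in Perl. Everything in it (the %state hash, the "codelet") is invented for the example, not code from any real system: the data half can be dumped with Data::Dumper, but the code half has to be glued together as a string and revived with eval, which is exactly where the fragility creeps in.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Data::Dumper;

    # Checkpoint for a pending step: plain data plus a hand-built code string.
    my %state = ( file => 'input.dat', line => 42 );

    # The data half serializes cleanly...
    my $data_src = 'my '
        . Data::Dumper->new( [ \%state ], ['state'] )->Indent(1)->Dump;

    # ...but the code half is just a string we hope is still valid Perl.
    my $codelet_src = $data_src . q{
        sub { warn "resuming $state->{file} at line $state->{line}\n" }
    };

    # On restart, eval the string back into a coderef. Any quoting or
    # scoping mistake only shows up here, at run time.
    my $codelet = eval $codelet_src
        or die "could not revive codelet: $@";
    $codelet->();
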

    RE (tilly) 3: Mechanisms for Fault-Tolerant Perl Scripting
    by tilly (Archbishop) on Oct 09, 2000 at 21:26 UTC
      I would make state an enumerated type. There is no need to have the database know the details of how a step is to happen or what language it is implemented in. Conversely there are significant security issues in trusting such information from a database machine. Should your database be spoofed or compromised, it is better to just have your process fail rather than giving the attacker a chance to compromise your machine as well!
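
      To make that concrete (every name below is invented for the example), "state as an enumerated type" can be as simple as a dispatch table in the script, with the database storing nothing but a state name:

          #!/usr/bin/perl
          use strict;
          use warnings;

          # The database only ever stores a state *name*; the code that runs
          # for each state lives here, so nothing executable crosses the
          # trust boundary.
          my %handler = (
              FETCHED   => \&validate_input,
              VALIDATED => \&load_records,
              LOADED    => \&send_report,
          );

          my $state = 'FETCHED';    # in practice, read from the job table

          my $step = $handler{$state}
              or die "Unknown state '$state' from database -- refusing to guess\n";
          $step->();

          sub validate_input { print "validating...\n" }
          sub load_records   { print "loading...\n"    }
          sub send_report    { print "reporting...\n"  }

      A bogus or tampered state name then simply kills the process, which is exactly the failure mode you want.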

      The God for the database code is the fact that it is designed to log every transaction properly and to keep itself, at all times, in a consistent state on disk, even in the face of hardware failure. And if you have a hot backup, you may even be protected from that.

      Getting failover, etc. right is a hard problem. Databases already solve it, so leverage what they do rather than rolling your own.

      Note that this basic transaction-oriented strategy scales to very complex problems and gives you systems that are robust by default. (One I dealt with got nicknamed "accidentally robust" because of how many unexpected, never-considered error conditions came up and didn't cause serious problems.) Normally, when you rely on certain things having just worked and others being about to, then when something goes wrong (e.g. a done file doesn't get written) things careen off course into a nightmare. But with this strategy you automatically stop at a point where minimal damage has happened, other parts of the process can continue, and when the underlying problem is fixed, you are generally in an easily recoverable state...
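
      Here is a bare-bones sketch of that strategy; the job table, the states, and the step are all made up, and an in-memory SQLite database stands in for a real server so the example is self-contained:

          #!/usr/bin/perl
          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect( 'dbi:SQLite:dbname=:memory:', '', '',
              { RaiseError => 1, AutoCommit => 1 } );

          # A toy job table: one row per unit of work, plus its current state.
          $dbh->do('CREATE TABLE job (id INTEGER PRIMARY KEY, state TEXT)');
          $dbh->do( 'INSERT INTO job (id, state) VALUES (?, ?)',
              undef, 1, 'VALIDATED' );

          eval {
              $dbh->begin_work;

              do_one_step();    # whatever real work this step performs

              # The state only advances if the step above completed; if it
              # died, we never get here and the job stays where it was.
              $dbh->do( 'UPDATE job SET state = ? WHERE id = ?',
                  undef, 'LOADED', 1 );

              $dbh->commit;
              1;
          } or do {
              my $err = $@ || 'unknown failure';
              $dbh->rollback;
              # Minimal damage: the job sits in its previous state until the
              # underlying problem is fixed and the step is rerun.
              warn "step failed, state unchanged: $err";
          };

          print "job 1 is now: ",
              $dbh->selectrow_array('SELECT state FROM job WHERE id = 1'), "\n";

          sub do_one_step { print "doing the step...\n" }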