Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I need to write a program that will check the contents of an XML doc against rules taken from a database. I need some tips on how to make it as efficient as possible. The specs are:

the engine - a daemon that listens for requests on a certain port. When the daemon is started up, it hits the database and stores all the rules in memory.

this is how the process would work:

client -> feeds XML doc -> the engine

then

the engine processes the rules against the values in the XML doc and returns -> XML doc back to the client with whether each rule passed or not

a sample rule might be:
(##state## eq ca){ ##max_ticket_price## = ##face_value## + 3; }


The engine needs to be able to handle a lot of requests and be able to queue them somehow. I was thinking:

client request -> queue -> engine

engine -> queue -> process rules -> client
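
Roughly, I was picturing something like this (just a rough sketch, not working code; load_rules_from_db and process_rules are placeholders for the database load and the actual rule checking):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use IO::Socket::INET;

    my @rules = load_rules_from_db();     # pull all the rules into memory once

    my $server = IO::Socket::INET->new(
        LocalPort => 9000,                # whatever port the clients talk to
        Listen    => 10,                  # pending requests queue up in the backlog
        ReuseAddr => 1,
    ) or die "can't listen: $!";

    while (my $client = $server->accept) {
        # a real engine would fork or hand this off to a pool of workers here
        local $/;                         # slurp the whole request
        my $xml = <$client>;              # assumes the client shuts down its write side when done
        print {$client} process_rules($xml, \@rules);   # send the annotated XML back
        close $client;
    }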

any tips would be appreciated.

thanks

Replies are listed 'Best First'.
Re: XML Process Engine
by rjray (Chaplain) on Jul 02, 2002 at 06:23 UTC

    Well, this isn't a simple question, because you are presenting a design issue more than a coding issue.

    There are essentially two issues here:

    1. Parse/process the XML into an internal structure that can be evaluated against the rules
    2. Express the rules in a fashion that lends itself to a clean machine-evaluation implementation

    It's not clear from your explanation whether you are evaluating an entire XML document at once, against a set of rules, or whether you are evaluating a series of structures out of a document in sequence. That is, is the XML just a layer around a series of RDBMS records, or is it more structured than that?

    I'm inclined to suggest that you use a SAX-based approach to managing the XML itself. If you are testing multiple data-sets from the same file, it will be easy to associate that action with the end of the appropriate element. And if not, you can still build the data structure in whatever way suits you. XPath may be a useful alternative, but like I said it's difficult to say since I'm not clear on all the specific needs you have.
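
    For example, here is a minimal, untested sketch of the kind of SAX handler I mean. It assumes each field of interest is a simple leaf element, and just flattens the document into a field-name => value hash (the names FieldHandler and xml_to_fields are mine, not anything standard):

        use strict;
        use warnings;
        use XML::SAX::ParserFactory;

        package FieldHandler;
        use base qw(XML::SAX::Base);

        # Accumulate character data and, at each closing tag, file it away
        # under the element's name.
        sub start_element { my ($self) = @_; $self->{text} = '' }
        sub characters    { my ($self, $chars) = @_; $self->{text} = ($self->{text} || '') . $chars->{Data} }
        sub end_element   {
            my ($self, $el) = @_;
            $self->{fields}{ $el->{Name} } = $self->{text} if $self->{text} =~ /\S/;
            $self->{text} = '';
        }

        package main;

        sub xml_to_fields {
            my ($xml) = @_;                     # the request document, as a string
            my $handler = FieldHandler->new;
            XML::SAX::ParserFactory->parser(Handler => $handler)->parse_string($xml);
            return $handler->{fields};          # e.g. { state => 'ca', face_value => '25', ... }
        }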

    As to the part where you actually evaluate the rulesets against the data, that too depends a lot on what you are trying to accomplish and what information you have to work with when setting up the system. In your example, it looks like token replacement, probably based on the name of the tag used to mark up the data. That's why I wonder if this is just XML sugar-coating of RDBMS data, because it's overkill if that is the case. You may be doing a lot more work than you need to. The named-token syntax doesn't appear to have any allowance for the structure of the XML document itself: what if <state> occurs as a tag at different levels, with different significance at each?

    If you can reduce the rule-evaluation problem to a matter of the input data coming into the rule as a hash table or some other keyed structure, and then create a rule engine to evaluate against that type of input, you should be OK. Then it's just a matter of turning the XML input into an appropriate hash table.
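
    As a rough illustration of that (untested, and assuming the rules as stored are valid Perl once the tokens are expanded; your sample would need to be stored as something like if (##state## eq 'ca') { ... }), the engine could rewrite each ##name## token into a hash lookup and eval the result:

        # $fields is the hash ref built from the XML (see the SAX sketch above)
        sub evaluate_rule {
            my ($rule, $fields) = @_;

            # ##name## becomes $fields->{'name'}, so a rule can both test
            # values and assign new ones (like max_ticket_price above).
            (my $code = $rule) =~ s/##(\w+)##/\$fields->{'$1'}/g;

            # The string eval can see the lexical $fields.  A production
            # engine should at least wrap this in Safe.pm rather than
            # eval-ing database-supplied code directly.
            my $ok = eval $code;
            warn "rule error: $@" if $@;
            return $ok;
        }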

    Good luck. Sorry this wasn't more definitive, but the problem itself is a little loosely defined.

    --rjray
