Well, there are two, really. First, every XML file will have the same general structure but not the same information. For example, XML file A may have a social security number while B doesn't, or study C may contain EKG readings that don't show up in A or B. Using DOM, I would have to build in a layer of checks and redundancy to make sure the script doesn't crap out when certain information is missing. From what I've read, the event-driven model is more appropriate, since I'm getting data fed from another server.
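To make the point concrete, here's a minimal sketch of that idea. The handler class and its element names (`PatientHandler`, `name`, `ssn`) are hypothetical; normally `XML::SAX::ParserFactory` would drive the handler, but here the callback events are fired by hand so the sketch runs with core Perl only. The key property: an element that isn't in the document simply fires no events, so no defensive DOM traversal is needed.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical SAX-style handler: collects whatever fields appear.
# A record without <ssn>, or with extra elements like <ekg>, needs
# no special-casing -- absent elements just never trigger callbacks.
package PatientHandler;

sub new { bless { record => {}, current => undef }, shift }

sub start_element {
    my ($self, $el) = @_;
    $self->{current} = $el->{Name};   # remember which field we're inside
}

sub characters {
    my ($self, $chars) = @_;
    return unless defined $self->{current};
    $self->{record}{ $self->{current} } .= $chars->{Data};
}

sub end_element {
    my ($self, $el) = @_;
    $self->{current} = undef;
}

sub record { $_[0]{record} }

package main;

# With XML::SAX installed this would be driven like:
#   my $p = XML::SAX::ParserFactory->parser(Handler => $h);
#   $p->parse_string($xml);
# Here we simulate the events for one record that has no <ssn>.
my $h = PatientHandler->new;
$h->start_element({ Name => 'name' });
$h->characters({ Data => 'Jane Doe' });
$h->end_element({ Name => 'name' });

my $rec = $h->record;
print "name=$rec->{name}\n";
print "ssn is ", (exists $rec->{ssn} ? 'present' : 'absent'), "\n";
```

Because the parser streams events instead of building a whole tree, this same structure also addresses the memory concern with large or concurrent inputs.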
The other is speed. On my local workstation the script runs fine, but this module will be part of a (slightly) larger automated script for a hospital, in which multiple patients (XML files) will be input at the same time. Like most patient-care systems, I can't afford for this thing to crash because of memory hogging.
In reply to Re^2: Rewriting XML::DOM based module as XML::SAX
by Cappadonna3030
in thread Rewriting XML::DOM based module as XML::SAX
by Cappadonna3030