Your question could be more specific so we can give a good answer. What does the structure of your XML document look like, and how do you define a "record" in XML context? The complexity of the document plays a role. Furthermore, can you quantify "high-performance"? Is it 100MB per minute?!
I have parsed 1GB XML documents, and in my experience it takes time to do so. In a Perl context I have mainly used XML::Twig, mostly on large but structurally simple documents (see Putting XML::Twig to the test for an example). You can also save approximately 30% of the parsing time by optimizing your XML::Twig code. I am still working on improving my solution, i.e. doing a proof of concept with it.
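For record-by-record parsing the usual XML::Twig approach is to register a handler for the record element and purge the tree after each one, so memory stays flat no matter how big the file is. A minimal sketch, assuming your records are <record> elements in a file called big.xml (both names are placeholders, adjust to your document):

    use strict;
    use warnings;
    use XML::Twig;

    # Handler fires each time a complete <record> element has been parsed.
    my $twig = XML::Twig->new(
        twig_handlers => {
            record => sub {
                my ( $t, $record ) = @_;
                # process the record here, e.g.:
                # print $record->first_child_text('id'), "\n";
                $t->purge;    # discard everything parsed so far to keep memory flat
            },
        },
    );

    $twig->parsefile('big.xml');

Calling purge (or flush, if you also want to write the document back out) inside the handler is what keeps memory usage constant regardless of file size.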
I am not a fan of slurping large documents into memory. In my experience it doesn't speed up the parsing at all: parsing typically generates a huge number of method/function calls (the real bottleneck) whether the document resides in memory or not.
Let me know what solution you end up with. I have a special interest in parsing large XML documents myself.
Cheers
dHarry