There is no reason why XML parsing has to be a "pig" ... or, to use a better-defined term, a memory hog. It only is if you first parse the whole XML and build a huge data structure or a huge maze of objects. While at times that is what you have to do, or what's most convenient, it's not the only solution. And often it's not even the easiest one. It's quite possible, and often quite convenient, to process XML in chunks using something like XML::Twig or XML::Record, or to specify which parts of the XML you are actually interested in and which can be ignored, build a specialized data structure as you parse the data, and (if convenient) handle the chunks with XML::Rules.
Neither will continue eating up memory as the XML grows.
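For illustration, here is a minimal sketch of the chunk-by-chunk approach with XML::Twig (the <record> element, its <name> child and the file name are made up for the example): each record is handled as soon as it has been fully parsed and is then purged, so memory use stays roughly flat no matter how large the file gets.

    use strict;
    use warnings;
    use XML::Twig;

    # Handle each <record> element as soon as it's completely parsed,
    # then purge it so the tree never grows with the file.
    my $twig = XML::Twig->new(
        twig_handlers => {
            record => sub {
                my ($t, $record) = @_;
                print $record->field('name'), "\n";  # do whatever you need with this chunk
                $t->purge;                           # release everything parsed so far
            },
        },
    );

    $twig->parsefile('big.xml');

The same idea works with XML::Rules: you tell it which tags you care about and what to do with each one, and the rest never piles up in memory.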
In reply to Re^2: Memory Efficient XML Parser by Jenda, in thread Memory Efficient XML Parser by perlgoon