I still don't see why "large document" == "need incremental parser". I can see why you would need a stream parser (e.g. *::SAX, XML::Twig, XML::Rules). Non-incremental stream parsers handle large documents just fine without holding the entire document in memory.
Asked and answered: "...but also to be able to consume the input document in pieces, such as feeding data as it arrives over the wire." If the data is too large for the filesystem, you might want to process it as it arrives.
Too large in what way? I don't see what you gain by parsing just part of the XML at a time, unless you only need something near the start of the document - but in that case you could just as easily keep checking for the end tag of the section you want, cut out just that part, and feed it to your regular XML parser. Similarly, if your document is a long series of records, you could split out each record as it arrives and feed it to your regular XML parser (a rough sketch of that approach is below). There's no need to go looking for an incremental solution, imho.
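To make the "split out each record" idea concrete, here is a minimal sketch, not from the original post. It assumes the records are <record> elements, that the data arrives on STDIN in arbitrary-sized chunks (standing in for a socket), that "</record>" never appears inside CDATA or comments, and it uses XML::Twig as the "regular" parser - any of the modules mentioned above would do.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Twig;

# Buffer incoming data and, whenever a complete record element is in the
# buffer, cut it out and hand it to an ordinary (non-incremental) parser.
# Assumptions: records look like <record>...</record>, chunks arrive on
# STDIN, and no "</record>" appears inside CDATA or comments.

my $buffer = '';
while (read(STDIN, my $chunk, 4096)) {
    $buffer .= $chunk;

    # Extract every complete record currently sitting in the buffer.
    while ($buffer =~ s{^.*?(<record\b.*?</record>)}{}s) {
        my $record_xml = $1;
        my $twig = XML::Twig->new();
        $twig->parse($record_xml);      # a "regular" parse of one record
        # Do whatever per-record work you need; here we just print its text.
        print $twig->root->text, "\n";
    }
}
```

The split itself is trivial compared to a real incremental parser, and memory stays bounded by the size of a single record rather than the whole document.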