in reply to Re^7: Incremental XML parsing
in thread Incremental XML parsing

Asked and answered: "...but also to be able to consume the input document in pieces, such as feeding data as it arrives over the wire." If the data is too large for the filesystem, you might want to process it as it arrives.

Re^9: Incremental XML parsing
by TJPride (Pilgrim) on Feb 05, 2012 at 07:56 UTC
    Too large in what way? I don't see how you gain anything by parsing just part of the XML at a time, unless you only need something near the start of the document. In that case you could just as easily watch for the end tag of the section you want, extract just that part, and feed it to your regular XML parser. Similarly, if your document is a long series of records, you could split out each record as it arrives and feed it to your regular XML parser, as sketched below. There's no need to go looking for an incremental solution, imho.
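    A minimal sketch of that split-and-parse idea, assuming the stream is a flat series of non-nested <record> elements; the tag name, the $socket handle, and process_record() are placeholders, not anything from the thread:

        use strict;
        use warnings;
        use XML::LibXML;  # stand-in for "your regular XML parser"

        my $socket = \*STDIN;  # placeholder: any open filehandle, e.g. an IO::Socket

        sub process_record {   # placeholder per-record handler
            my ($elem) = @_;
            print $elem->nodeName, "\n";
        }

        my $buffer = '';
        while (read($socket, my $chunk, 4096)) {
            $buffer .= $chunk;
            # Peel off each complete record as soon as it has arrived.
            while ($buffer =~ s{(<record\b.*?</record>)}{}s) {
                my $doc = XML::LibXML->load_xml(string => $1);
                process_record($doc->documentElement);
            }
        }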
      Really, you need a definition of large? My example was a file that can't fit on the filesystem. Does that need clarification? And yet another person who didn't read anything else in the thread: I mentioned an existing incremental parser in my OP, XML::SAX::Expat::Incremental, but my problem was that it's too slow. Your proposal would be even slower.

        So it doesn't fit on the filesystem, fine, but where does it come from then? If it comes from a socket or another similar source, then just point XML::Rules or XML::Twig at the socket and the data will be parsed as it arrives; the handlers you define will be called as soon as a complete logical unit (read: a tag with all its children) has been parsed. What you keep in memory or on disk after processing that logical unit is up to you.
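        A minimal sketch of the XML::Twig version, assuming $socket is an already-open filehandle and the stream is a series of <record> elements, each with a name child; those names are placeholders, not from the thread:

            use strict;
            use warnings;
            use XML::Twig;

            my $socket = \*STDIN;  # placeholder: in practice an IO::Socket handle

            my $twig = XML::Twig->new(
                twig_handlers => {
                    # Called as soon as a <record> and all its children
                    # have been parsed, long before end of input.
                    record => sub {
                        my ($t, $record) = @_;
                        print $record->first_child_text('name'), "\n";
                        $t->purge;  # discard what has been parsed so far
                    },
                },
            );
            $twig->parse($socket);  # reads and parses incrementally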

        Stop looking for something containing the word "incremental" in its name!

        Jenda
        Enoch was right!
        Enjoy the last years of Rome.

        "Your proposal would be even slower."

        I doubt it.