in reply to Re: Memory Efficient XML Parser
in thread Memory Efficient XML Parser

There is no reason why XML parsing has to be a "pig" ... or, to use a better defined term, a memory hog. It only is if you first parse the whole XML document and build a huge data structure or a huge maze of objects. While at times this is what you have to do, or what is most convenient, it's not the only solution, and often it's not even the easiest one. It's quite possible, and often quite convenient, to process the XML in chunks using something like XML::Twig or XML::Record, or to specify which parts of the XML you are actually interested in and which can be ignored, build a specialized data structure as you parse the data, and (if convenient) handle the chunks with XML::Rules.

Neither will continue eating up memory as the XML grows.
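To illustrate the chunked approach, here is a minimal XML::Twig sketch (the <address> element layout and the file name are just assumed examples, not anything from the thread): a handler fires for each complete <address> element, and purging the twig afterwards frees everything parsed so far, so memory use stays flat no matter how large the file grows.

```perl
use strict;
use warnings;
use XML::Twig;

# Handle each <address> element as soon as it is fully parsed,
# then purge it so the tree never grows.
my $twig = XML::Twig->new(
    twig_handlers => {
        address => sub {
            my ($t, $elt) = @_;
            print $elt->first_child_text('name'),
                  " lives in ",
                  $elt->first_child_text('city'), "\n";
            $t->purge;    # discard the already-processed part of the tree
        },
    },
);
$twig->parsefile("/tmp/bla.xml");
```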

Replies are listed 'Best First'.
Re^3: Memory Efficient XML Parser
by eserte (Deacon) on Dec 13, 2007 at 21:23 UTC
    I *do* think that something is wrong with XML in terms of resources. Consider the XML and Storable files generated by this script (note that you should lower the number of records if you are short on RAM, as the following tests will take nearly 1 GB of memory or so):
    use constant RECS => 1000000;
    {
        open my $fh, ">/tmp/bla.xml" or die;
        select $fh;
        print "<addresses>\n";
        for (1..RECS) {
            print <<EOF;
    <address>
      <name>John Smith</name>
      <city>London</city>
    </address>
    EOF
        }
        print "</addresses>\n";
    }
    {
        require Storable;
        my @addresses;
        for (1..RECS) {
            push @addresses, { name => "John Smith", city => "London" };
        }
        Storable::nstore(\@addresses, "/tmp/bla.st");
    }
    Two mostly equivalent data sources. Now the two benchmarks (I am using tcsh's time command here, showing system, user, elapsed time and maximum memory):
    $ ( set time = ( 0 "%U+%S %E %MK" ) ; time perl -MStorable -e 'retrieve "/tmp/bla.st"' )
    1.980+0.384 0:02.41 193974K
    $ ( set time = ( 0 "%U+%S %E %MK" ) ; time perl -MXML::LibXML -e 'XML::LibXML->new->parse_file("/tmp/bla.xml")->documentElement' )
    6.037+1.876 0:08.15 643952K
    So naive parsing of XML is much worse in both memory allocation and CPU time than loading the same Storable file. I guess that most other fast serializers like YAML::Syck or JSON::XS will give similar results.

      The data structure built by XML::LibXML is much bigger because it contains much more data, most of it of no use whatsoever, but still present. It remembers whether the name or the city came first, how much whitespace there was around them, that "John Smith" and "London" were element content rather than attributes, etc. Please reread what I said ... you do NOT have to build such a structure. And even if you do build a structure first, you can build a specialized one, containing only what you need and in a convenient format. In this particular case, the following XML::Rules code would build a structure equivalent to the one created by Storable:

      use XML::Rules;
      my $parser = XML::Rules->new(
          stripspaces => 3,
          rules => [
              _default  => 'content',
              address   => 'no content array',
              addresses => 'pass no content',
          ]
      );
      my $data = $parser->parse($XML);
      use Data::Dumper;
      print Dumper($data->{address});
      As the transformations are done during the parsing you end up using just a little more memory than the Storable solution. Though it will of course be somewhat slower. Everything comes at a price, even generality. (The version of XML::Rules that supports stripspaces will be released this weekend, the currently released version would actually keep on eating memory because it would keep the whitespace between the <address> tags until the XML is fully processed. I'll update this node once it's released. Sorry.)

      Unlike Storable you can process the XML in chunks:

      my $parser = XML::Rules->new(
          stripspaces => 3,
          rules => [
              _default  => 'content',
              address   => sub { print "$_[1]->{name} for $_[1]->{city}\n"; return },
              addresses => '',
          ]
      );
      $parser->parse($XML);

      Of course, if you need to store something and then read it back completely, both from a Perl script, then Storable is a better solution, but if you need to exchange data with other systems, XML is most likely the way to go. Whether you waste memory and CPU while processing it is up to you.

        Unlike Storable you can process the XML in chunks
        This is just a limitation of Storable.pm as is, not a limitation of the Storable file format. It should be possible to write an event-handling Storable parser.