in reply to Memory Efficient XML Parser

XML parsing is always a bit of a pig. Even so, I really question this situation. You must have some seriously complex XML.

100M files? When you said big I thought you were going to say a gig or two. 100M? I've hit nearly 100M for my iTunes library XML file. Do you mind if I ask how much system memory you have? Are you sure this is an XML issue? Can you describe the issues you are experiencing in a bit more detail?

--
I used to drive a Heisenbergmobile, but every time I looked at the speedometer, I got lost.

Replies are listed 'Best First'.
Re^2: Memory Efficient XML Parser
by perlgoon (Initiate) on Dec 12, 2007 at 21:02 UTC
    Thank you for all your replies.
    Unfortunately I'm not 100% positive that it is an XML issue, however based on my benchmarks I'm pretty sure it is. The server has 1.25 GB of RAM. It experiences relatively large loads throughout the day (~5 requests per second).
    The main script on the server receives an XML packet from another server. The script takes the XML and parses it with XML::Records. The parsed records are used to build up a MySQL query. The query is then executed and the script exits. MySQL runs on another server entirely (obviously with its own set of dedicated resources).
    Here is a snippet of pretty much all the script is doing:
    for my $subpkg (@$pkg) {
        $sql .= "$delim($subpkg->{field1},$subpkg->{field2},$subpkg->{field3})";
        $delim = ",";
    }
    $query = $db->do($sql);
    I first thought it may be an issue with string concatenation when building up the query. However, my testing has shown that concatenating strings in Perl is just about as efficient as running a join on an array.
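    A minimal sketch of how such a comparison can be run with the core Benchmark module (the list size below is arbitrary, for illustration only):
    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    my @parts = ("(1,2,3)") x 10_000;   # stand-ins for the per-record strings

    # Compare building the VALUES list by concatenation vs. a single join.
    cmpthese(-2, {
        concat => sub {
            my ($sql, $delim) = ("", "");
            for my $p (@parts) { $sql .= "$delim$p"; $delim = "," }
            return length $sql;
        },
        join => sub {
            my $sql = join ",", @parts;
            return length $sql;
        },
    });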
    The only thing I notice consistently is that when I send a 1MB XML packet to the script, it uses relatively little memory (~4000K); however, when I send it a 30MB XML packet, it uses several times as much (sometimes as high as 20000K).
    I know that the server will most likely need a memory upgrade; however, I want to make sure the script is running as efficiently as possible. I'm currently looking into using XML::Twig.

      Wait a second, could you show us a bit more of the code? It looks as if you are first extracting all the data from the XML, building one huge string, and then trying to shove the whole string into the database. I'm not surprised the script and the database need a lot of memory and CPU time to cope with that!

      It would be much better to parse one row, insert it into the database using a prepare()d statement handle, forget its data, parse the next one, and so on. And if you want to optimize it and don't mind that it's a tiny little bit more complex, open the database connection with AutoCommit => 0 and commit only after every 1000 rows or so (you may need to do some benchmarking to find the right number here).
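      A minimal sketch of that approach, reusing the field1..field3 keys from the snippet above; the table name (packets), the connection details and the sample record are made up for illustration:
      use strict;
      use warnings;
      use DBI;

      my $pkg = [ { field1 => 1, field2 => 2, field3 => 3 } ];   # stands in for the parsed records

      # Connection details are placeholders.
      my $db = DBI->connect(
          'dbi:mysql:database=mydb;host=dbhost', 'user', 'password',
          { RaiseError => 1, AutoCommit => 0 },
      );

      my $sth = $db->prepare(
          'INSERT INTO packets (field1, field2, field3) VALUES (?, ?, ?)'
      );

      my $count = 0;
      for my $subpkg (@$pkg) {
          $sth->execute($subpkg->{field1}, $subpkg->{field2}, $subpkg->{field3});
          $db->commit unless ++$count % 1000;   # commit in batches of 1000
      }
      $db->commit;      # flush the final partial batch
      $db->disconnect;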

        You're actually exactly right; however, the database server is not having any issues executing this large query. In fact it is one of the fastest queries that runs on my server with the optimization that is done. I found that it is less efficient to loop through hundreds of queries one by one, as opposed to running one large query filled with joins. I'm now parsing the XML with a simple regexp, and it uses less than half the memory it did when using the XML::Records module. Thank you for everyone's help, but it appears going back to basics has solved my memory issues for the time being.
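        If the single large INSERT stays, one option worth noting: the same multi-row statement can be built with bound placeholders instead of interpolated values. A minimal sketch, reusing the field1..field3 keys from the earlier snippet; the table name (packets), the connection details and the sample record are made up:
        use strict;
        use warnings;
        use DBI;

        my $db = DBI->connect('dbi:mysql:database=mydb;host=dbhost', 'user', 'password',
                              { RaiseError => 1 });
        my $pkg = [ { field1 => 1, field2 => 2, field3 => 3 } ];   # parsed records

        # One multi-row INSERT, built with placeholders rather than string interpolation.
        my @rows         = map { [ @{$_}{qw(field1 field2 field3)} ] } @$pkg;
        my $placeholders = join ",", ("(?,?,?)") x @rows;
        my $sql = "INSERT INTO packets (field1, field2, field3) VALUES $placeholders";
        $db->do($sql, undef, map { @$_ } @rows);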
Re^2: Memory Efficient XML Parser
by Jenda (Abbot) on Dec 13, 2007 at 13:57 UTC

    There is no reason why XML parsing has to be a "pig" ... or, to use a better defined term, a memory hog. It only is if you first parse the whole XML and create a huge data structure or a huge maze of objects. While at times this is what you have to do or what's most convenient to do, it's not the only solution. And often it's not even the easiest solution. It's quite possible, and often quite convenient, to process the XML in chunks using something like XML::Twig or XML::Records, or to specify which parts of the XML you are actually interested in and which ones can be ignored, build a specialized data structure as you parse the data, and (if convenient) handle the chunks with XML::Rules.

    Neither will continue eating up memory as the XML grows.
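    For instance, a minimal XML::Twig sketch of the chunked approach (the <addresses>/<address> structure and the /tmp/bla.xml path match the test file generated further down in this thread):
    use strict;
    use warnings;
    use XML::Twig;

    # Handle each <address> as soon as it has been parsed, then throw it away,
    # so memory use stays flat no matter how big the file is.
    my $twig = XML::Twig->new(
        twig_handlers => {
            address => sub {
                my ($t, $elt) = @_;
                printf "%s lives in %s\n",
                    $elt->first_child_text('name'),
                    $elt->first_child_text('city');
                $t->purge;    # discard everything parsed so far
            },
        },
    );
    $twig->parsefile('/tmp/bla.xml');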

      I *do* think that something is wrong with XML in terms of resources. Consider the XML and Storable files generated by this script (note that you should lower the number of records if you are short on RAM, as the following tests will take nearly 1 GB of memory):
      use constant RECS => 1000000;
      {
          open my $fh, ">/tmp/bla.xml" or die;
          select $fh;
          print "<addresses>\n";
          for (1..RECS) {
              print <<EOF;
<address>
<name>John Smith</name>
<city>London</city>
</address>
EOF
          }
          print "</addresses>\n";
      }
      {
          require Storable;
          my @addresses;
          for (1..RECS) {
              push @addresses, { name => "John Smith", city => "London" };
          }
          Storable::nstore(\@addresses, "/tmp/bla.st");
      }
      Two mostly equivalent data sources. Now the two benchmarks (I am using tcsh's time command here, showing user and system CPU time, elapsed time, and maximum memory):
      $ ( set time = ( 0 "%U+%S %E %MK" ) ; time perl -MStorable -e 'retrieve "/tmp/bla.st"' )
      1.980+0.384 0:02.41 193974K
      $ ( set time = ( 0 "%U+%S %E %MK" ) ; time perl -MXML::LibXML -e 'XML::LibXML->new->parse_file("/tmp/bla.xml")->documentElement' )
      6.037+1.876 0:08.15 643952K
      So naive parsing of XML is much worse in both memory allocation and CPU time than loading the same Storable file. I guess that most other fast serializers like YAML::Syck or JSON::XS will give similar results.
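      For anyone who wants to check that guess, a minimal sketch of an equivalent JSON::XS test (the /tmp/bla.json path is made up; time the load the same way as above):
      use strict;
      use warnings;
      use JSON::XS;

      use constant RECS => 1000000;

      # Dump the same records as JSON, then time reading them back in.
      my @addresses;
      push @addresses, { name => "John Smith", city => "London" } for 1 .. RECS;
      open my $fh, ">", "/tmp/bla.json" or die $!;
      print {$fh} encode_json(\@addresses);
      close $fh;

      # e.g. timed like the runs above:
      #   perl -MJSON::XS -e 'local $/; open my $fh, "<", "/tmp/bla.json" or die; decode_json(<$fh>)'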

        The data structure built by XML::LibXML is much bigger because it contains much more data, most of it of no use whatsoever, but still present. It remembers whether the name or the city came first, how much whitespace there was around them, that "John Smith" and "London" were element content rather than attributes, etc. Please reread what I said ... you do NOT have to build such a structure. And even if you do build a structure first, you can build a specialized one (containing only what you need, in a convenient format). In this particular case, the following would build a structure equivalent to the one created by Storable, using XML::Rules:

        use XML::Rules;

        my $parser = XML::Rules->new(
            stripspaces => 3,
            rules => [
                _default  => 'content',
                address   => 'no content array',
                addresses => 'pass no content',
            ],
        );
        my $data = $parser->parse($XML);

        use Data::Dumper;
        print Dumper($data->{address});
        As the transformations are done during the parsing, you end up using just a little more memory than the Storable solution, though it will of course be somewhat slower. Everything comes at a price, even generality. (The version of XML::Rules that supports stripspaces will be released this weekend; the currently released version would actually keep on eating memory, because it would keep the whitespace between the <address> tags until the XML is fully processed. I'll update this node once it's released. Sorry.)

        Unlike with Storable, you can also process the XML in chunks:

        my $parser = XML::Rules->new(
            stripspaces => 3,
            rules => [
                _default  => 'content',
                address   => sub { print "$_[1]->{name} for $_[1]->{city}\n"; return },
                addresses => '',
            ],
        );
        $parser->parse($XML);

        Of course, if you need to store something and then read it back completely, both using Perl scripts, then Storable is a better solution; but if you need to exchange data with other systems, XML is most likely the way to go. Whether you waste memory and CPU while processing it or not is up to you.