in reply to Re^2: Memory Efficient XML Parser
in thread Memory Efficient XML Parser

Wait a second, could you show us a bit more of the code? It looks as if you first extract all the data from the XML, build one huge string, and then try to shove the whole string into the database. I'm not surprised the script and the database need a lot of memory and CPU time to cope with that!

It would be much better to parse one row, insert it into the database using a prepare()d statement handle, forget its data, parse the next one, and so on. If you want to optimize further and don't mind a tiny bit more complexity, open the database connection with AutoCommit => 0 and commit only after every 1000 rows or so (you may need to do some benchmarking to find the right batch size).
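
A minimal sketch of that pattern with DBI; the DSN, credentials, table and column names here are placeholders for illustration, not taken from the original post:

    use DBI;

    # Hypothetical connection and table; adjust DSN, credentials and columns.
    my $dbh = DBI->connect( 'dbi:mysql:mydb', 'user', 'pass',
        { RaiseError => 1, AutoCommit => 0 } );

    my $sth = $dbh->prepare(
        'INSERT INTO items (id, name, price) VALUES (?, ?, ?)' );

    my $count = 0;
    sub insert_row {
        my ($row) = @_;    # one parsed row as a hashref
        $sth->execute( $row->{id}, $row->{name}, $row->{price} );
        $dbh->commit unless ++$count % 1000;    # commit in 1000-row batches
    }

    # ... stream through the XML, calling insert_row() once per row ...

    $dbh->commit;          # flush the final partial batch
    $dbh->disconnect;

The point is that only one row's worth of data is ever held in the script at a time, and the server compiles the INSERT once.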

Re^4: Memory Efficient XML Parser
by perlgoon (Initiate) on Dec 18, 2007 at 16:53 UTC
    You're actually exactly right; however, the database server is not having any issues executing this large query. In fact it is one of the fastest queries that runs on my server with the optimization that is done. I found that it is less efficient to loop through hundreds of queries one by one, as opposed to running one large query filled with joins. I'm now parsing the XML with a simple regexp, and it uses less than half the memory it did with the XML::Records module. Thank you for everyone's help, but it appears going back to basics has solved my memory issues for the time being.

      Could you show us the code? I seriously doubt that shoving in one huge SQL string, which the server has to parse and compile in its entirety, is quicker than preparing a statement and then sending the server just the values for each row, especially if you commit only in reasonably sized batches.

      Parsing XML with regexps might look like it works now, but unless you have strict control over whatever produces the XML, you may run into serious problems.
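
      If memory is the concern, a stream-oriented parser gives you roughly constant memory without the fragility of regexps. A sketch with XML::Twig, assuming a hypothetical <record> element whose children map to columns (insert_row() stands in for whatever does the database insert):

          use XML::Twig;

          my $twig = XML::Twig->new(
              twig_handlers => {
                  record => sub {
                      my ( $t, $rec ) = @_;
                      # turn child elements into a column => value hash
                      my %row = map { $_->gi => $_->text } $rec->children;
                      insert_row( \%row );
                      $t->purge;    # free everything parsed so far
                  },
              },
          );
          $twig->parsefile('data.xml');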

      Show us the code!