in reply to XML parsing - huge file strategy?
I would think indexing the table is worth it. It's a one-time cost that will speed up all the rest of your processing. That's what databases are for.
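It's usually a one-liner, too. Here is a minimal sketch using DBI with SQLite; the DSN, the table name records, and the column id are all stand-ins for whatever you actually look records up by:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical names throughout: swap in your real DSN, table and column.
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=records.db', '', '',
                            { RaiseError => 1 } );

    # One-time cost: after this, lookups on records.id no longer scan the table.
    $dbh->do('CREATE INDEX IF NOT EXISTS idx_records_id ON records (id)');
    $dbh->disconnect;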
But since I am more of an XML guy... if you decide to go the XML route... oddly enough I'd recommend using XML::Twig for this ;--)
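The trick that makes XML::Twig usable on a huge file is to handle each record as it is parsed and then purge it, so the whole document never sits in memory at once. A minimal sketch, assuming the records are <record> elements (adjust the name to your data):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::Twig;

    my $twig = XML::Twig->new(
        twig_handlers => {
            record => sub {
                my ( $t, $record ) = @_;
                # ... do the real per-record work here ...
                print $record->text, "\n";
                $t->purge;    # free everything parsed so far, keeps memory flat
            },
        },
    );
    $twig->parsefile('big.xml');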
That said, if XML::Twig is too slow, you could also use the xml_split tool that comes with it to break the file down into smaller pieces. As long as you don't use a complex XPath expression to split the file (i.e. if you split only by level in the tree, by size, or by number of records), the tool will use XML::Parser directly and will probably be faster than XML::Twig. Plus it already does what you want, splitting the XML file, so why code it yourself!
You can then process each resulting file.
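From the command line it looks something like this (flag spellings from memory, so double-check them against xml_split --help on your install):

    # the default: one output file per level-1 element
    xml_split big.xml

    # or group 1000 records per piece, or cut at roughly 100Mb per piece
    xml_split -g 1000 big.xml
    xml_split -s 100Mb big.xml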
And if all else fails, and the records are all in elements of the same name, you can set $/ to the closing tag, read the file one record at a time, and spit the records out into several files.
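A rough sketch of that trick, again assuming <record> elements; it also assumes the records are flat (no nested <record>s and no CDATA containing the literal closing tag), and the element name, file names and records-per-file count are all placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;

    $/ = '</record>';           # read one record per <$in> call

    my $per_file = 10_000;      # records per output file (pick what suits you)
    my ( $count, $out ) = ( 0, undef );

    open my $in, '<', 'big.xml' or die "Cannot read big.xml: $!";
    while ( my $chunk = <$in> ) {
        $chunk =~ s{\A.*?(?=<record\b)}{}s;  # drop prologue / inter-record text
        next unless $chunk =~ /<record\b/;   # skip leftovers after the last record
        if ( $count % $per_file == 0 ) {     # time to start a new output file
            if ($out) { print {$out} "</records>\n"; close $out; }
            my $n = int( $count / $per_file );
            open $out, '>', "part-$n.xml" or die "Cannot write part-$n.xml: $!";
            print {$out} "<records>\n";      # wrap so each part is well-formed XML
        }
        print {$out} $chunk, "\n";
        $count++;
    }
    if ($out) { print {$out} "</records>\n"; close $out; }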
But really, it's data; it should live in a database.