I have an XML file of around 10 million lines, which I would like to import into a MySQL database.
I have installed XML::RDB and written a script, and it works great on a subset of around 100,000 lines of XML.
When I try to call $rdb->make_tables(...) on the main file, it fails: the memory footprint climbed to 23 GB (on a box with 24 GB of RAM), then the process went into state 'D' (uninterruptible sleep) and had to be killed.
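For reference, the version that works on the subset is essentially the documented two-pass flow. Here is a trimmed sketch (the file names are placeholders, and the config_file constructor argument is my assumption from the module's synopsis; make_tables and populate_tables are the calls I actually use):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::RDB;

    # The config file holds the MySQL DSN, user and password; the
    # config_file argument name is assumed from the module's synopsis.
    my $rdb = XML::RDB->new(config_file => 'rdb.config');

    # Pass 1: derive a relational schema from the XML structure and
    # write the CREATE TABLE statements out to schema.sql.
    $rdb->make_tables('subset.xml', 'schema.sql');

    # schema.sql is then loaded by hand (mysql mydb < schema.sql)
    # before pass 2 inserts the actual data:
    $rdb->populate_tables('subset.xml');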
I have a script that splits the main file into smaller files, but I can't get this approach to work either, because of the way XML::RDB seems to work.
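The split script itself is not the problem; for completeness, it does something along these lines (a hypothetical sketch using XML::Twig, where 'record' and the <records> wrapper stand in for the real element names in my file):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::Twig;

    my $batch_size = 100_000;   # records per chunk file
    my ($count, $file_num, $out) = (0, 0, undef);

    sub open_chunk {
        open $out, '>', sprintf('chunk_%03d.xml', ++$file_num)
            or die "open chunk: $!";
        print {$out} qq{<?xml version="1.0"?>\n<records>\n};
    }

    sub close_chunk {
        return unless $out;
        print {$out} "</records>\n";
        close $out;
        $out = undef;
    }

    my $twig = XML::Twig->new(
        twig_handlers => {
            # 'record' stands in for whatever the repeating element is
            record => sub {
                my ($t, $elt) = @_;
                open_chunk() if $count % $batch_size == 0;
                print {$out} $elt->sprint, "\n";
                close_chunk() if ++$count % $batch_size == 0;
                $t->purge;   # free already-written records to keep memory flat
            },
        },
    );

    $twig->parsefile('main.xml');
    close_chunk();   # flush the final partial chunk, if any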
Using the split approach, I would call $rdb->make_tables(...) on the first batch, import the resulting DB schema, and then call $rdb->populate_tables(...); on each subsequent batch I would call only $rdb->populate_tables(...), thinking that, with the schema already in place, the data would import into the existing tables.
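In code, the intended flow looks roughly like this (again a sketch; the chunk names match the split script above, and schema.sql gets loaded into MySQL from the shell between the two calls):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::RDB;

    my $rdb = XML::RDB->new(config_file => 'rdb.config');

    # Batch 1: build the schema from the first chunk only...
    $rdb->make_tables('chunk_001.xml', 'schema.sql');
    # ...load it by hand (mysql mydb < schema.sql), then populate:
    $rdb->populate_tables('chunk_001.xml');

    # Batches 2..N: the schema already exists, so only populate.
    # This is the step that produces the duplicate-key errors below.
    for my $chunk (glob 'chunk_*.xml') {
        next if $chunk eq 'chunk_001.xml';
        $rdb->populate_tables($chunk);
    }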
Instead, there are hundreds of messages like this one:
    DBD::mysql::st execute failed: Duplicate entry '3' for key 1 at /usr/local/share/perl/5.10.0/DBIx/Database.pm line 150

So I guess XML::RDB is not suited to splitting the import in this way, owing to the way it maps its tables onto the XML.
My questions are:
i) Can anyone suggest a way to get this working via XML::RDB?
ii) Is there a better way to get the data into a form where querying it yields the performance of a MySQL query against an indexed database?
Thanks for reading.