Can you try slurping in the XML content and using Perl to zap the encoding before feeding it to XML::Twig?
That's what I figured I would need to do (reading in 128 MiB chunks in this case; 73 GiB is a bit much to slurp in all at once ;) ). I just wanted to see if there was an "official" method before I broke out the full brute-force nuclear option.
(Of course it probably goes without saying that the ideal solution is to fix this fluster-cluck of an XML travesty...)
EDIT:
I just saw the SO link in your update after I posted my reply. That particular example will not work in this case because the data is not UTF-16, even though the encoding declaration says otherwise.
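The brute-force chunked rewrite might look something like this. This is only a sketch, not tested against the real file; the sub names, the 128 MiB chunk size from above, and the exact substitution pattern are my assumptions, not anything posted in this thread. Since the declaration can only live at the very start of the file, only the first chunk needs patching:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Zap the bogus encoding declaration; adjust the pattern to match
# your actual header (quoting style, case, etc.).
sub fix_first_chunk {
    my ($buf) = @_;
    $buf =~ s/encoding="UTF-16"/encoding="UTF-8"/i;
    return $buf;
}

# Stream from one handle to another in big chunks, patching only
# the first chunk (the XML declaration cannot appear anywhere else).
sub filter_stream {
    my ($in, $out, $chunk) = @_;
    $chunk //= 128 * 1024 * 1024;    # 128 MiB, as above
    my $first = 1;
    while (read($in, my $buf, $chunk)) {
        $buf = fix_first_chunk($buf) if $first;
        $first = 0;
        print {$out} $buf or die "write: $!";
    }
}

# Use as a filter:  perl fix_encoding.pl < broken.xml > fixed.xml
unless (-t STDIN) {
    binmode STDIN;
    binmode STDOUT;
    filter_stream(\*STDIN, \*STDOUT);
}
```

The fixed stream can then be piped straight into the XML::Twig run without ever materializing a second 73 GiB copy on disk.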
| [reply] |
If it is too big to slurp, you could "pipe-filter" it through.
I haven't tried this, but you should be able to set up a FIFO and pass a handle to LibXML to read from.
Then read your XML file a record at a time, filter out the encoding garbage, and feed it to the FIFO.
You would probably need two threads (or a forked child process) to make this work in a single program.
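A minimal sketch of that FIFO plumbing, using a forked child as the writer rather than threads. The filenames, the FIFO path, and the substitution pattern are assumptions for illustration, and I have not run this against real data:

```perl
use strict;
use warnings;
use POSIX qw(mkfifo);
use XML::LibXML;

# Hypothetical paths -- adjust for your data.
my $xml  = 'huge.xml';
my $fifo = '/tmp/fixed.xml.fifo';

unless (-p $fifo) {
    mkfifo($fifo, 0700) or die "mkfifo $fifo: $!";
}

my $pid = fork() // die "fork: $!";
if ($pid == 0) {
    # Child (writer side): strip the bogus declaration from the
    # first chunk, then shovel everything else through untouched.
    open my $in,  '<:raw', $xml  or die "open $xml: $!";
    open my $out, '>:raw', $fifo or die "open $fifo: $!";
    my $first = 1;
    while (read($in, my $buf, 128 * 1024 * 1024)) {
        if ($first) {
            $buf =~ s/encoding="UTF-16"/encoding="UTF-8"/i;
            $first = 0;
        }
        print {$out} $buf;
    }
    close $out;
    exit 0;
}

# Parent (reader side): the parser never sees the lying declaration.
open my $fh, '<:raw', $fifo or die "open $fifo: $!";
my $doc = XML::LibXML->load_xml(IO => $fh);
waitpid($pid, 0);
unlink $fifo;
```

One caveat: `load_xml` still builds the whole DOM in memory, so for a 73 GiB file you would want to point a pull parser such as XML::LibXML::Reader (or XML::Twig's purge-as-you-go mode) at the FIFO handle instead.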
All power corrupts, but we need electricity.
| [reply] |