I have done this myself some years ago for a proof of concept, i.e. checking whether it would be possible to parse large XML files. With a little program I generated several simple but big XML files. Next I fed them to an event-based parser. It worked fine and scaled well. Parsing the file should not be a problem, and you can try different parsers to see which suits you best.
I like pc88mxer's suggestion. As it is a dump from a relational database, the structure of the resulting XML files is probably very simple. By exploiting that regularity you'll get the best performance.
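To illustrate, here is a minimal event-based sketch using XML::Twig. The file name dump.xml and the <row>/<id> element names are my assumptions about the dump's layout, not something from the thread, so adjust them to the real structure:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::Twig;

    # One handler fires per <row>; purging after each row keeps
    # memory flat no matter how big the file gets.
    my $twig = XML::Twig->new(
        twig_handlers => {
            row => sub {
                my ($t, $row) = @_;
                print $row->first_child_text('id'), "\n";  # process the row
                $t->purge;  # release everything parsed so far
            },
        },
    );
    $twig->parsefile('dump.xml');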
Another option would be to chop the file/dump up into more manageable chunks (see the sketch below), but I don't think disk space or a few hundred MB of memory is a problem for you.
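Should you go that route anyway, the same event-based machinery can do the splitting. A sketch, again assuming <row> elements; the chunk size, file names and the <table> wrapper are arbitrary choices for illustration:

    use strict;
    use warnings;
    use XML::Twig;

    # Hypothetical splitter: writes every 100_000 <row> elements to a
    # numbered chunk file, each wrapped in its own <table> root.
    my ($count, $chunk, $out) = (0, 0, undef);

    sub open_chunk {
        open $out, '>', sprintf('chunk_%03d.xml', ++$chunk) or die $!;
        print {$out} "<table>\n";
    }

    sub close_chunk {
        print {$out} "</table>\n";
        close $out;
        undef $out;
    }

    XML::Twig->new(
        twig_handlers => {
            row => sub {
                my ($t, $row) = @_;
                open_chunk() unless $out;          # lazily start a new chunk
                print {$out} $row->sprint, "\n";   # copy the row verbatim
                close_chunk() if ++$count % 100_000 == 0;
                $t->purge;                         # keep memory flat
            },
        },
    )->parsefile('dump.xml');
    close_chunk() if $out;                         # flush the last chunk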
Saludos,
dHarry