in reply to Implementing a buffered read-and-insert algorithm
You mention that portability is a large concern, which is why you want to avoid DB-specific bulk loaders. But writing a script that generates the loader files for each target database wouldn't be that difficult, and it would probably pay off far more than a portable insert loop. I find many uses for Oracle's bulk-loading utilities (and few people seem to realize how much power they bring to the table). The performance gains are huge: not just 10-20%, but in many cases hundreds to thousands of times faster than normal inserts.
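To make the "generate the loader files" idea concrete, here is a minimal sketch in Python. It writes one portable delimited data file (which PostgreSQL's COPY or MySQL's LOAD DATA INFILE can consume directly) plus a bare-bones Oracle SQL*Loader control file for the same data. The table name, column list, and control-file options are illustrative assumptions, not tuned for any real schema.

```python
import csv
import io

def write_loader_files(rows, table, columns):
    """Produce a delimited data file plus a minimal SQL*Loader
    control file describing how to load it."""
    # Portable comma-delimited data; most bulk loaders accept this.
    data_buf = io.StringIO()
    csv.writer(data_buf).writerows(rows)

    # Minimal Oracle SQL*Loader control file.  Real usage would add
    # options like DIRECT=TRUE, character-set clauses, etc.
    ctl = (
        "LOAD DATA\n"
        f"INFILE '{table}.csv'\n"
        f"INTO TABLE {table}\n"
        "FIELDS TERMINATED BY ','\n"
        f"({', '.join(columns)})\n"
    )
    return data_buf.getvalue(), ctl

# Hypothetical table and rows, just to show the shape of the output.
data, ctl = write_loader_files(
    [(1, "alice"), (2, "bob")], "users", ["id", "name"]
)
```

The point is that the portable part of your program only ever writes a flat file; the per-database knowledge is isolated in small templates like the control file above, one per target DB.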
Also, if you are going to stay with the standard insertion method:
You mention threads, but if there are any constraints on the table, you'll lose the performance gain on the inserts: the database has to check constraints across two concurrent streams of insertions instead of a single stream.
The 500 rows between commits is a number worth tweaking (either larger or smaller); the sweet spot depends on the performance and memory of your particular database, so measure a few values rather than assuming 500 is right.
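A buffered insert loop with a tunable commit interval might look like the sketch below. It uses Python's sqlite3 as a stand-in for your database, and the table name `t` and batch size are assumptions; the structure (buffer rows, flush with `executemany`, commit per batch) is what carries over.

```python
import sqlite3

def buffered_insert(conn, rows, batch_size=500):
    """Insert rows in batches, committing every batch_size rows.
    batch_size is the knob to benchmark: larger batches mean fewer
    commits but more memory held per transaction."""
    cur = conn.cursor()
    buf = []
    for row in rows:
        buf.append(row)
        if len(buf) >= batch_size:
            cur.executemany("INSERT INTO t (a, b) VALUES (?, ?)", buf)
            conn.commit()
            buf.clear()
    if buf:  # flush the final partial batch
        cur.executemany("INSERT INTO t (a, b) VALUES (?, ?)", buf)
        conn.commit()

# Demo against an in-memory database with a hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")
buffered_insert(conn, [(i, str(i)) for i in range(1234)], batch_size=500)
```

With 1234 rows and a batch size of 500, this commits three times (500, 500, then the 234-row remainder); timing runs like this at several batch sizes is the easiest way to find your DB's sweet spot.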