in reply to Threading - getting better use of my MP box
If you're simply reading records and then inserting, you can split the file into chunks and process each in a separate forked process (which, on Unix, is probably faster than Perl ithreads), each with its own DB connection.
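A minimal sketch of that fork approach, assuming a line-oriented input file; the DSN, table, and column names (`dbi:mysql:test`, `records`, `col`) are invented for illustration:

```perl
#!/usr/bin/perl
# Split the input file into $workers line-ranges and fork one child
# per range, each with its own DB connection. DSN, table, and column
# names are made up for this example.
use strict;
use warnings;
use DBI;
use POSIX qw(ceil);

my $file    = shift or die "usage: $0 datafile\n";
my $workers = 4;

# First pass: count lines so each child knows its slice.
open my $fh, '<', $file or die "open $file: $!";
my $total = 0;
$total++ while <$fh>;
close $fh;
my $chunk = ceil($total / $workers);

for my $w (0 .. $workers - 1) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    next if $pid;    # parent keeps forking

    # Child: connect AFTER the fork -- a DB handle can't be shared
    # across processes.
    my $dbh = DBI->connect('dbi:mysql:test', 'user', 'pass',
                           { RaiseError => 1 });
    my $sth = $dbh->prepare('INSERT INTO records (col) VALUES (?)');

    my ($first, $last) = ($w * $chunk + 1, ($w + 1) * $chunk);
    open my $in, '<', $file or die "open $file: $!";
    while (<$in>) {
        next if $. < $first;
        last if $. > $last;
        chomp;
        $sth->execute($_);
    }
    $dbh->disconnect;
    exit 0;
}
wait() for 1 .. $workers;    # reap all the children
```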
If you're doing intensive processing, perhaps you can split that into two (or more) phases: implement each phase as a separate Perl program and connect them with pipes, with only the last phase holding the DB connection. That still uses only one DB connection; I'm not sure whether MySQL needs multiple connections to use multiple backend processes/threads, but you could also run several of these pipelines in parallel.
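Roughly, the pipeline looks like this: two tiny programs connected by a shell pipe. Everything here (file names, the `uc` stand-in transform, the DSN and table) is invented for illustration:

```perl
# Run as:  perl phase1.pl < input.dat | perl phase2_load.pl

# ----- phase1.pl: the CPU-heavy phase, no DB connection -----
use strict;
use warnings;
while (my $line = <STDIN>) {
    chomp $line;
    my $cooked = uc $line;    # stand-in for your intensive processing
    print "$cooked\n";        # hand the result down the pipe
}

# ----- phase2_load.pl: the only phase that talks to MySQL -----
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:mysql:test', 'user', 'pass',
                       { RaiseError => 1 });
my $sth = $dbh->prepare('INSERT INTO records (col) VALUES (?)');
while (my $line = <STDIN>) {
    chomp $line;
    $sth->execute($line);
}
$dbh->disconnect;
```

As a bonus, the kernel schedules each program onto its own CPU, so the phases genuinely overlap on a multiprocessor box.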
Another, completely unrelated option you can consider is to do all the processing into a flat file, copy that over to the DB machine, and use LOAD DATA LOCAL INFILE to bulk-load the data. That will save you a huge amount of database overhead on that sort of volume of data.
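A hedged sketch of the bulk-load route, assuming DBD::mysql (which needs `mysql_local_infile` enabled in the DSN for LOCAL to work) and made-up table/column names:

```perl
# Dump processed rows to a tab-separated file, then load it in one
# statement. DSN, table, columns, and sample data are all invented.
use strict;
use warnings;
use DBI;

my @rows = ( { id => 1, value => 'alpha' },    # stand-in data
             { id => 2, value => 'beta'  } );

my $flat = '/tmp/records.tsv';
open my $out, '>', $flat or die "open $flat: $!";
print {$out} join("\t", $_->{id}, $_->{value}), "\n" for @rows;
close $out;

my $dbh = DBI->connect('dbi:mysql:test;mysql_local_infile=1',
                       'user', 'pass', { RaiseError => 1 });
$dbh->do("LOAD DATA LOCAL INFILE '$flat'
          INTO TABLE records
          FIELDS TERMINATED BY '\\t'
          (id, value)");
$dbh->disconnect;
```

One round trip replaces millions of individual INSERT statements, which is where the overhead saving comes from.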