in reply to Multithreading and DBI

Do you find that the execution times of individual queries are related in a consistent way to the amount of data returned? If so, how are you actually fetching the data through DBI? (Surely you could show us a few lines of code that actually do the fetching...) Some DBI fetching methods involve more overhead than others, and the impact could be felt in proportion to the amount of data being fetched.
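
For what it's worth, here's a rough sketch of three common DBI fetching styles and where their overhead lies (the DSN, credentials, and table are made up; substitute your own):

    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection details and table -- adjust to your setup.
    my $dbh = DBI->connect( 'dbi:Oracle:mydb', 'user', 'pass',
                            { RaiseError => 1 } );
    my $sth = $dbh->prepare('SELECT id, name FROM some_table');

    # (1) One method call per row: simple, but the per-row call
    #     overhead adds up over millions of rows.
    $sth->execute;
    while ( my @row = $sth->fetchrow_array ) {
        # ... process @row ...
    }

    # (2) Bound columns: usually the fastest row-at-a-time style,
    #     because DBI writes straight into the bound scalars.
    $sth->execute;
    $sth->bind_columns( \my ( $id, $name ) );
    while ( $sth->fetch ) {
        # ... use $id and $name ...
    }

    # (3) Slurp everything at once: fewest method calls, but the
    #     whole result set sits in Perl memory at the same time.
    $sth->execute;
    my $all = $sth->fetchall_arrayref;

    $dbh->disconnect;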

And of course, if you're dealing with really massive amounts of data being returned from the database, well... that's going to take time no matter what -- not only for the actual transfer from Oracle to your Perl script, but also potentially for memory swapping of large Perl data structures.

When you say the longest execution time for one query is 10 minutes, what steps does that time span actually include? Just the "$sth->execute()" step? The execute plus fetch? Other stuff?
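
If you're not sure, something like this would break it down per phase with Time::HiRes (a bare-bones sketch; the connection details and query are placeholders):

    use strict;
    use warnings;
    use DBI;
    use Time::HiRes qw(gettimeofday tv_interval);

    # Placeholders -- use your real DSN, credentials, and query.
    my $dbh = DBI->connect( 'dbi:Oracle:mydb', 'user', 'pass',
                            { RaiseError => 1 } );
    my $sql = 'SELECT 1 FROM dual';    # your big query here

    my $t0  = [gettimeofday];
    my $sth = $dbh->prepare($sql);
    printf "prepare: %8.3f s\n", tv_interval($t0);

    $t0 = [gettimeofday];
    $sth->execute;
    printf "execute: %8.3f s\n", tv_interval($t0);

    $t0 = [gettimeofday];
    my $rows = $sth->fetchall_arrayref;
    printf "fetch:   %8.3f s (%d rows)\n", tv_interval($t0), scalar @$rows;

If nearly all of the 10 minutes lands in the execute step, the bottleneck is the server working out and running the query plan, not DBI or anything threads could help with.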

You make it sound as though virtually all the execution time in the Perl script is taken up by the queries, suggesting that it's all just a matter of waiting for the server to satisfy each query (whatever "satisfy" includes in your timing measurements).

So I think ikegami's initial reply is probably the best thing to start with: isolate the query activity from everything else, and compare timing results between running six query-only jobs in succession vs. running those jobs simultaneously.

Make it really simple: read the SQL file, connect to Oracle, prepare/execute/fetch, and store the query results to a disk file so you can verify you got them right. If, as most of us expect, the server performs worse when asked to handle several queries at once, then threading or forking is obviously not the way to go, and you'll need to look elsewhere to optimize (e.g., how bad would it be to take the longest/hardest query and cache its results on disk somewhere, so you don't repeat that query on every run of your "tooo long" script?)
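
For each query-only job, something along these lines would do (a minimal sketch, assuming the SQL file and output file come in as arguments; the DSN and credentials are placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my ( $sql_file, $out_file ) = @ARGV;

    # Slurp the query from the SQL file.
    open my $in, '<', $sql_file or die "can't read $sql_file: $!";
    my $sql = do { local $/; <$in> };
    close $in;

    # Placeholder connection details.
    my $dbh = DBI->connect( 'dbi:Oracle:mydb', 'user', 'pass',
                            { RaiseError => 1 } );

    my $sth = $dbh->prepare($sql);
    $sth->execute;

    # Dump the rows as tab-separated lines so you can sanity-check them.
    open my $out, '>', $out_file or die "can't write $out_file: $!";
    while ( my $row = $sth->fetchrow_arrayref ) {
        print {$out} join( "\t", map { defined $_ ? $_ : '' } @$row ), "\n";
    }
    close $out;
    $dbh->disconnect;

Then run six of those back to back under time(1), run the same six simultaneously in the background, and compare the total wall-clock times.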