in reply to Re^4: Fetchrow question
in thread Fetchrow question

alainb:

It looks like you should be able to reduce the number of database calls. The details will differ from database to database, but you should be able to collapse each set of inserts into far fewer operations, possibly a single statement or batch. Examples:

MySQL

MySQL allows you to insert multiple records with a single statement, like so:

insert into FooBar (ID, col1, col2, col3)
values ((select m+1 from (select max(ID) m from FooBar) t), 'a', 1, 'flort'),
       ((select m+2 from (select max(ID) m from FooBar) t), 'b', 2, 'zap'),
       ((select m+3 from (select max(ID) m from FooBar) t), 'c', 3, 'kazoo');

(MySQL won't let a subquery read the statement's own target table directly, hence the derived table t.)

This would let you use a single communication for each table type.
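
From DBI you can build that statement at run time. A minimal sketch, with an assumed DSN and credentials, and assuming ID is filled in by the database (e.g. AUTO_INCREMENT) rather than by the max(ID) subqueries:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Assumed connection details -- substitute your own.
my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
    { RaiseError => 1 });

my @rows = ( [ 'a', 1, 'flort' ], [ 'b', 2, 'zap' ], [ 'c', 3, 'kazoo' ] );

# One (?, ?, ?) group per row, joined into a single INSERT.
my $values = join ', ', ('(?, ?, ?)') x @rows;
my $sth = $dbh->prepare(
    "insert into FooBar (col1, col2, col3) values $values");

# Flatten the rows into one bind list: a single round trip to the server.
$sth->execute(map { @$_ } @rows);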

MS SQL Server / Sybase / Oracle / etc.

These support insert ... select, so you can union the rows together and insert them with one statement (on Oracle, append "from dual" to each branch of the union):

insert into FooBar (ID, col1, col2, col3)
select ID, col1, col2, col3 from (
    select (select max(ID)+1 from FooBar) ID, 'a' col1, 1 col2, 'flort' col3
    union all
    select (select max(ID)+2 from FooBar), 'b', 2, 'zap'
    union all
    select (select max(ID)+3 from FooBar), 'c', 3, 'kazoo'
) t

Oracle

Some databases, such as Oracle, let you send a whole block of SQL to execute, so you can build up blocks and do all your inserts with one swell foop:

declare
    v_ID number;
begin
    -- First table
    select max(ID) into v_ID from FooBar;
    insert into FooBar (ID, col1, col2, col3) values (v_ID+1, 'a', 1, 'flort');
    insert into FooBar (ID, col1, col2, col3) values (v_ID+2, 'b', 2, 'zap');
    insert into FooBar (ID, col1, col2, col3) values (v_ID+3, 'c', 3, 'kazoo');

    -- Second table
    select max(ID) into v_ID from BarBaz;
    insert into BarBaz (ID, col1) values (v_ID+1, 'etcetera');
end;
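
From DBD::Oracle the whole block can go over in a single call; a minimal sketch, assuming the connected handle $dbh from the earlier sketch:

# The anonymous PL/SQL block runs entirely server-side: one round trip.
$dbh->do(q{
    declare
        v_ID number;
    begin
        select max(ID) into v_ID from FooBar;
        insert into FooBar (ID, col1, col2, col3) values (v_ID+1, 'a', 1, 'flort');
        insert into FooBar (ID, col1, col2, col3) values (v_ID+2, 'b', 2, 'zap');
        insert into FooBar (ID, col1, col2, col3) values (v_ID+3, 'c', 3, 'kazoo');
    end;
});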

Using a work table

Another option, that I frequently find useful, is to create some temporary work tables and bulk load the data into them. The bulk loader should be about the fastest way to get data into the database. Then create a stored procedure that integrates the data in the work tables into the real tables. For adding large sets of data to a database, this is pretty hard to beat.
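
A rough sketch of the shape of this approach, reusing $dbh and @rows from above (the work-table name and the merge statement are illustrative, and a real bulk loader such as LOAD DATA INFILE, bcp, or SQL*Loader would replace the row-by-row load):

# Work table mirrors the payload columns of the real table.
$dbh->do(q{create temporary table FooBar_work
           (col1 char(1), col2 int, col3 varchar(20))});

# Load the data; the database's own bulk loader is faster still.
my $load = $dbh->prepare(
    'insert into FooBar_work (col1, col2, col3) values (?, ?, ?)');
$load->execute(@$_) for @rows;

# One set-based statement (or a stored procedure) folds the work
# table into the real table.
$dbh->do(q{insert into FooBar (col1, col2, col3)
           select col1, col2, col3 from FooBar_work});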

...roboticus

When your only tool is a hammer, all problems look like your thumb.

Re^6: Fetchrow question
by mje (Curate) on Mar 22, 2012 at 09:26 UTC

    In addition to the excellent advice from roboticus, both DBD::Oracle and now DBD::ODBC (from 1.34) support DBI's execute_for_fetch. You might wonder why this helps with multiple inserts (given the name), but it is because DBI's execute_array uses execute_for_fetch. With execute_array you can prepare an insert and provide as many rows of data for the parameters as you like, and the DBD will send them to the server in batches (thus vastly reducing network traffic). It is usually a lot faster.
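
    A minimal sketch of execute_array, assuming a connected $dbh and the FooBar table from above:

    my $sth = $dbh->prepare(
        'insert into FooBar (col1, col2, col3) values (?, ?, ?)');

    # Parameters are bound column-wise: one array ref per placeholder.
    my @tuple_status;
    my $rows_done = $sth->execute_array(
        { ArrayTupleStatus => \@tuple_status },
        [ 'a', 'b', 'c' ],            # col1, one entry per row
        [ 1, 2, 3 ],                  # col2
        [ 'flort', 'zap', 'kazoo' ],  # col3
    );

    # With DBD::Oracle, or DBD::ODBC 1.34 and later, the driver sends
    # these rows in batches instead of one insert per row.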