Good thought. A bulk insert for Oracle should use SQL*Loader, as a direct-path load bypasses the rollback segments (less overhead).
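As a rough illustration, a minimal SQL*Loader control file might look like the following; the file, table, and column names here are made-up placeholders, not anything from your schema:

```
-- bulk.ctl (hypothetical): direct-path load of a comma-delimited file
LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE blobtest
FIELDS TERMINATED BY ','
( id, fname )
```

You would then run something like sqlldr USERNAME/PASS control=bulk.ctl direct=true; the direct=true flag is what requests the direct path and skips most of the conventional-insert overhead.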
Use of bind variables in SQL statements can also be a big help, since Oracle then parses each statement only once. See the DBD::Oracle docs for the use of bind variables; some of my example code is below. The example loads binary objects into a DB, but it does use bind variables. There's probably a more efficient way to write this, but it worked for me.
# load modules
use strict;
use warnings;
use DBI;
use DBD::Oracle qw(:ora_types);

# connect to the db
my $dbh = DBI->connect("dbi:Oracle:tlin", "USERNAME", "PASS")
    or die "Can't connect to db! $DBI::errstr\n";

my $LONG_RAW_TYPE = ORA_LONGRAW; # oracle type id for long raw/blob (could also be 24)
my $bufsize = 400000;            # putting this here, since we use it everywhere
$dbh->{LongReadLen} = $bufsize;  # don't set this too high; performance suffers
$dbh->{LongTruncOk} = 0;

# find all the files in a dir and load them to the db
while ( defined(my $filename = glob("c:\\play\\tmp\\*")) ) {
    # read() never returns more than we ask for, so check the
    # file size itself before reading
    if ( -s $filename > $bufsize ) {
        print STDERR "File $filename too large to load. Skipping.\n";
        next;
    }
    open(BLOB, "<", $filename) or die "Can't open $filename: $!\n";
    binmode(BLOB);
    my $buf;
    read(BLOB, $buf, $bufsize);
    close(BLOB);
    # fix the file name; keep only the last component of the path
    my $storname = (split /\\/, $filename)[-1];
    my $sth = $dbh->prepare("
        insert into blobtest ( id, fname, fobj )
        values ( blobid.nextval, :fname, :blob )
    ") or die "Can't prep: $DBI::errstr\n";
    $sth->bind_param(":fname", $storname);
    $sth->bind_param(":blob", $buf, { ora_type => $LONG_RAW_TYPE });
    $sth->execute or die "Can't exec: $DBI::errstr\n";
    print "File: $filename loaded\n";
}
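To get the full benefit of bind variables, prepare the statement once outside the loop and just rebind and re-execute per row; that way Oracle parses the statement a single time and reuses the cursor. A minimal sketch of that pattern (same made-up connect string, table, and sequence names as above; this needs a live Oracle instance to actually run):

use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:Oracle:tlin", "USERNAME", "PASS",
                       { RaiseError => 1, AutoCommit => 0 });

# prepared once: one parse, many executes
my $sth = $dbh->prepare(
    "insert into blobtest ( id, fname )
     values ( blobid.nextval, :fname )");

for my $filename ( glob("c:\\play\\tmp\\*") ) {
    $sth->bind_param(":fname", $filename);
    $sth->execute;
}
$dbh->commit;
$dbh->disconnect;

Committing once at the end (rather than per row) also cuts overhead noticeably on large loads.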
More generally, take a look at overall Oracle parameters / tuning. The defaults at DB creation are usually insanely low and need to be adjusted much higher, but not so high that swapping / paging ensues.