in reply to performance problem with oracle dbd

Recently I had the need to insert 3.16 million rows into SQL Server (bear with me).

I first tried using DBI, and quickly concluded that this was the WWTDI (Wrong Way To Do It).

The right way turned out to be bcp.
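For the curious, the winning pattern was: dump the rows to a flat file from Perl, then let bcp do the actual load. A minimal sketch (the server, database, and table names are invented):

    use strict;
    use warnings;

    # Stand-in data; one arrayref per record.
    my @rows = ( [ 1, 'foo' ], [ 2, 'bar' ] );

    # Dump the rows to a tab-delimited flat file first...
    open my $fh, '>', 'rows.dat' or die "Can't write rows.dat: $!\n";
    print $fh join("\t", @$_), "\n" for @rows;
    close $fh;

    # ...then shell out to bcp. -c selects character mode and -t sets
    # the field terminator; adjust server and login to taste.
    system('bcp', 'mydb.dbo.mytable', 'in', 'rows.dat',
           '-c', "-t\t",
           '-S', 'myserver', '-U', 'user', '-P', 'pass') == 0
        or die "bcp failed: $?\n";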

Apparently, the Oracle equivalent is SQL*Loader.

Re: Re: performance problem with oracle dbd
by Anonymous Monk on Apr 16, 2004 at 17:56 UTC
    Good thought. A bulk insert for Oracle should be SQL*Loader, as a direct path load bypasses the rollback segments (less overhead); a quick sketch of that route follows.
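    A minimal sketch, with invented table and column names: write a control file from Perl, then invoke sqlldr. The direct=true option asks for a direct path load, which is where the big win over row-by-row inserts comes from.

        use strict;
        use warnings;

        # Write a minimal SQL*Loader control file; it assumes the
        # comma-separated data is already sitting in rows.dat.
        open my $ctl, '>', 'load.ctl' or die "Can't write load.ctl: $!\n";
        print $ctl "LOAD DATA\n",
                   "INFILE 'rows.dat'\n",
                   "APPEND INTO TABLE mytable\n",
                   "FIELDS TERMINATED BY ','\n",
                   "(id, name)\n";
        close $ctl;

        # direct=true requests a direct path load.
        system('sqlldr', 'userid=user/pass', 'control=load.ctl', 'direct=true') == 0
            or die "sqlldr failed: $?\n";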

    Use of bind variables in SQL statements can also be a big help, since Oracle then parses each statement only once. See the DBD::Oracle docs for the use of bind variables; some example code of mine is below. It loads binary objects into a DB, but it does use bind variables. Also, there's probably a more efficient way to write this, but it worked for me.

    # load modules
    use DBI;
    use DBD::Oracle qw(:ora_types);

    # connect to the db
    my $dbh = DBI->connect("dbi:Oracle:tlin", "USERNAME", "PASS")
        or die "Can't connect to db! $DBI::errstr\n";

    my $LONG_RAW_TYPE = ORA_LONGRAW; # oracle type id for long raw/blob (could also be 24)
    my $bufsize = 400000;            # putting this here, since we use it everywhere
    $dbh->{LongReadLen} = $bufsize;  # don't set this too high; performance suffers
    $dbh->{LongTruncOk} = 0;

    # prepare once, outside the loop, so Oracle parses the statement a single time
    my $sth = $dbh->prepare("
        insert into blobtest ( id, fname, fobj )
        values ( blobid.nextval, :fname, :blob )
    ") or die "Can't prep: $DBI::errstr\n";

    # find all the files in a dir and load them into the db
    while (defined(my $filename = glob("c:\\play\\tmp\\*"))) {
        # read() never returns more than $bufsize, so test the real file size
        if (-s $filename > $bufsize) {
            print STDERR "File $filename too large to load. Skipping.\n";
            next;
        }
        open(BLOB, "<$filename") or die "Can't open $filename: $!\n";
        binmode(BLOB);
        my $buf;
        read(BLOB, $buf, $bufsize);
        close(BLOB);

        # fix the file name; one undef for each part of the path which is not a file name
        my (undef, undef, undef, $storname) = split /\\/, $filename;

        $sth->bind_param(":fname", $storname);
        $sth->bind_param(":blob",  $buf, { ora_type => $LONG_RAW_TYPE });
        $sth->execute or die "Can't exec: $DBI::errstr\n";
        print "File: $filename loaded\n";
    }

    More generally, take a look at overall Oracle parameters and tuning. The defaults at DB creation are usually insanely low and need to be adjusted much higher, but not so high that swapping/paging ensues.
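    (A quick way to see what you're working with, straight from DBI; the parameter names vary by Oracle version, so treat these as examples:)

        use strict;
        use warnings;
        use DBI;

        # Connect string and credentials are placeholders.
        my $dbh = DBI->connect('dbi:Oracle:tlin', 'USERNAME', 'PASS',
                               { RaiseError => 1 });

        # Peek at a few memory-related settings.
        my $sth = $dbh->prepare(q{
            select name, value from v$parameter
            where name in ('shared_pool_size', 'sga_max_size', 'pga_aggregate_target')
        });
        $sth->execute;
        while (my ($name, $value) = $sth->fetchrow_array) {
            print "$name = $value\n";
        }
        $dbh->disconnect;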

Re: Re: performance problem with oracle dbd
by tito (Novice) on Apr 15, 2004 at 18:16 UTC
    Thanks for the suggestion, but the real app actually has a mix of different SQL statements in it, and after porting it from MySQL (using DBI/DBD::mysql) the performance dropped to an unacceptable level, so I wrote this simple test code to see where the problem is. The selects are fine, but commits for inserts/updates are REALLY bad.
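    (For what it's worth, the standard DBI mitigation for commit overhead is to switch off AutoCommit and commit in batches rather than per row; a minimal sketch against an invented table:)

        use strict;
        use warnings;
        use DBI;

        # Connect string, credentials, and table are placeholders.
        my $dbh = DBI->connect('dbi:Oracle:tlin', 'USERNAME', 'PASS',
                               { AutoCommit => 0, RaiseError => 1 });

        # Prepare once, execute many: bind variables keep Oracle from
        # re-parsing the statement on every row.
        my $sth = $dbh->prepare('insert into mytable (id, name) values (?, ?)');

        my @rows  = ( [ 1, 'foo' ], [ 2, 'bar' ] );  # stand-in data
        my $count = 0;
        for my $row (@rows) {
            $sth->execute(@$row);
            $dbh->commit unless ++$count % 1000;  # commit every 1000 rows, not each one
        }
        $dbh->commit;                             # pick up the stragglers
        $dbh->disconnect;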