ctaustin has asked for the wisdom of the Perl Monks concerning the following question:

I have written a CGI script that lets users FTP a file over to the server, loads the contents into a table, and then uses those values to update another table.

The FTP and the data load work quickly and reliably. However, when I issue the update statement, the program just hangs and eventually gives me a "server cannot be found" error.

If I comment out the update statement then the program works again without any issues.

My test only tries to update ten records, and the same statement executed in regular SQL*Plus performs normally (i.e. quickly).

I build the update statement through an if/elsif chain and then issue:
    $sth=$dbh->do("$sql")
        || &error("Unable to perform update statement:<br /><br />$sql<br /><br />$DBI::errstr\n");
    $updated=$sth->rows;
    $dbh->commit;
With the update statement commented out, I printed $sql to the screen. The syntax is correct; it runs as-is in SQL*Plus.

Any ideas or suggestions?

Thanks. Tony

Replies are listed 'Best First'.
Re: dbi update statement is very slow
by metadatum (Scribe) on Aug 14, 2002 at 15:43 UTC
    Do you perform a commit after doing the data load? Which database are you using? Can you post your script?
Re: dbi update statement is very slow
by dws (Chancellor) on Aug 14, 2002 at 16:51 UTC
    Any ideas or suggestions?

    Back up and look at where you're creating that statement handle ($sth, in the $i<$x loop). You're possibly creating several statement handles, you're using them without any error checking, and you're only invoking $sth->finish() on the last one created. Then you're trying to reuse the finished statement handle for the failing do(). I recall Oracle being quite sensitive about finishing statement handles.

    So, rewire your logic a bit so that you finish() each handle you acquire, and then acquire a fresh statement handle to do the update.
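
    A minimal sketch of that rewiring, reusing the names from the script (the placeholder form of the insert is an assumption on my part, not the original code):

        # Prepare one insert handle with placeholders, reuse it for every
        # line, and finish it before anything else touches the database.
        my $ins = $dbh->prepare('insert into convert.sql_edits values (?, ?)')
            or &error("prepare failed: $DBI::errstr\n");
        for my $line (@edits) {
            my ($id, $value) = split /\|/, $line;   # same pipe-delimited format
            $ins->execute($id, $value) or &error("insert failed: $DBI::errstr\n");
        }
        $ins->finish;   # this handle is done before the update is issued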

Re: dbi update statement is very slow
by tommyw (Hermit) on Aug 14, 2002 at 15:50 UTC

    Have you actually checked the connect statement for errors? It seems quite likely that the program is telling the truth, and the server cannot be found. Do you have any other DBI using scripts running from this server?

    --
    Tommy
    Too stupid to live.
    Too stubborn to die.


      Database: Oracle (8.1.7.3)

      I am not performing a commit after the initial data load because I am inserting into a transaction level temporary table so a commit will delete the data.
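
      (A transaction-level temporary table is one declared ON COMMIT DELETE ROWS, which is why a commit would wipe the freshly loaded rows. A hypothetical definition matching the columns the script uses, shown as it might be created through DBI, would be:)

          # Hypothetical DDL -- the column types are assumptions, not from the post.
          $dbh->do(q{
              create global temporary table convert.sql_edits (
                  id          varchar2(30),
                  edit_value  varchar2(4000)
              ) on commit delete rows
          });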

      We have several DBI scripts that make calls to various Oracle databases, this specific instance included, so I know that everything works.

      Here is the subroutine that is doing the work:

      sub load_data {
          $dbh=DBI->connect("dbi:Oracle:$db","$luser","$pass",
                  { PrintError=>1, AutoCommit=>0 })
              or &error("cannot connect: $DBI::errstr\n");

          #open(INFILE,">$dir")||&error("Cant open $dir: $!");
          open (FILE,$dir)||&error("Can't open $dir: $!");
          while ($line=<FILE>) {
              chomp($line);
              ($id, $value)=split/\|/, $line;
              push (@edits,$line);
          }
          close FILE;

          $x=@edits;
          $i=0;
          while ($i<$x) {
              ($id,$value)=split/\|/, $edits[$i];
              $q=qq{insert into convert.sql_edits values (\'$id\',\'$value\')};
              $sth=$dbh->prepare($q);
              $sth->execute;
              push(@total,$sth->rows);
              $i++;
          }
          $xtotal=@total;
          $sth->finish;

          if ($type eq "other") {
              $table="convert.b";
          }
          elsif ($type eq "input"){
              $table="convert.i";
          }
          elsif ($type eq "output"){
              $table="convert.o";
          }
          elsif ($type eq "standard"){
              $table="convert.";
          }

          if ($field eq "p1h_secondary_dx") {
              $sql="update $table"."dx$proc a
                    set $field=(select trim(edit_value) from convert.sql_edits b where a.counter=b.id)
                    where exists (select 1 from convert.sql_edits b where a.counter=b.id)";
          }
          elsif ($field eq "p1h_admit_date"||$field eq "p1h_admit_dx"||$field eq "p1h_admit_hour"
                 ||$field eq "p1h_admit_phys"||$field eq "p1h_attending_phys"||$field eq "p1h_birth_date"
                 ||$field eq "p1h_consulting_phys1"||$field eq "p1h_consulting_phys2"||$field eq "p1h_consulting_phys3"
                 ||$field eq "p1h_drg_hosp"||$field eq "p1h_hosp_admit_source"||$field eq "p1h_hosp_admit_type"
                 ||$field eq "p1h_hosp_admit_type"||$field eq "p1h_hosp_disch_disposition"||$field eq "p1h_hosp_pat_type"
                 ||$field eq "p1h_hosp_patient_type"||$field eq "p1h_hosp_payor_code"||$field eq "p1h_hosp_reimb_serv_code"
                 ||$field eq "p1h_hosp_service_code"||$field eq "p1h_mdc_hosp"||$field eq "p1h_prev_disch_date"
                 ||$field eq "p1h_prin_dx"||$field eq "p1h_race"||$field eq "p1h_readmit_flag"
                 ||$field eq "P1H_REFERRING_PHYS"||$field eq "p1h_reimb_amt"||$field eq "p1h_sex_code"
                 ||$field eq "p1h_site_of_reimb_calc"||$field eq "p1h_tissue_result") {
              $sql="update $table"."hdr$proc a
                    set $field=(select trim(edit_value) from convert.sql_edits b where a.header_counter=b.id)
                    where exists (select 1 from convert.sql_edits b where a.header_counter=b.id)";
          }
          elsif ($field eq "p1h_procedure_code"||$field eq "p1h_px_date"||$field eq "p1h_surgeon"){
              $sql="update $table"."epi$proc a
                    set $field=(select trim(edit_value) from convert.sql_edits b where a.counter=b.id)
                    where exists (select 1 from convert.sql_edits b where a.counter=b.id)";
          }
          elsif ($field eq "p2h_chg_amt"||$field eq "p2h_cpt_code"||$field eq "p2h_ordering_phys_code"
                 ||$field eq "p2h_serv_unit_count"||$field eq "p2h_service_date") {
              $sql="update $table"."det$proc a
                    set $field=(select trim(edit_value) from convert.sql_edits b where a.trailer_counter=b.id)
                    where exists (select 1 from convert.sql_edits b where a.trailer_counter=b.id)";
          }

          $sth=$dbh->do("$sql")
              || &error("Unable to perform update statement:<br /><br />$sql<br /><br />$DBI::errstr\n");
          $updated=$sth->rows;
          $dbh->commit()
              || &error("Unable to commit changes made by the update statement: <br /><br />$sql<br /><br />$DBI::errstr\n");
          $sth->finish;

          print $co->header();
          print "<html><head><title>Complete</title>
          </head>
          <body>
          <h3>Summary</h3>
          <blockquote>
            <h5>Upload</h5>
            <blockquote>
              The file <b>$the_file</b> was successfully uploaded.<br /><br />
              The destination file <b>$dir</b> was created.<br /><br />
            </blockquote>
          </blockquote>
          <blockquote>
            <h5>File</h5>
            <blockquote>
              $xtotal records were loaded into the convert.sql_edits table<br /><br />
            </blockquote>
          </blockquote>
          <blockquote>
            <h5>Updates</h5>
            <blockquote>
              The update statement run was:<br /><br />
              $sql<br /><br />
              A Total of $updated records were updated<br /><br />
            </blockquote>
          </blockquote>";

          system ("rm $dir");
          $dbh->disconnect or warn "disconnect failed: $DBI::errstr\n";

          print "You have been disconnected from the database. All data in
          convert.sql_edits for your current session has been automatically
          deleted.<br /><br />
          </body>
          </html>";
          exit;
      } #end load_data

      Thanks.

        Found it!

        First, clean up all the preamble: the while loop at the top can be rewritten as:

        open (FILE,$dir) || &error("Can't open $dir: $!");
        $sth=$dbh->prepare("insert into convert.sql_edits values (?, ?)");
        while (<FILE>) {
            chomp;
            ($id, $value)=split/\|/;
            $sth->execute($id, $value);
            push @total, $sth->rows;
        }
        $sth->finish;
        close FILE;
        since you're not using any of the values read (other than via the database).

        The subsequent SQL statement building can be simplified, since you're using $type and $field to determine the table name, and the rest of the SQL statement doesn't vary.
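
        One hypothetical way to do that (the hash layout is mine, not from the original, and only a few of the field mappings are shown):

            # Hypothetical rewrite: derive the target table and join column
            # from lookup hashes instead of an elsif chain.
            my %prefix = ( other => 'b', input => 'i', output => 'o', standard => '' );
            my %target = (
                p1h_secondary_dx   => [ 'dx',  'counter' ],
                p1h_admit_date     => [ 'hdr', 'header_counter' ],
                # ...the remaining p1h_* header fields all map to ['hdr', 'header_counter']
                p1h_procedure_code => [ 'epi', 'counter' ],
                p2h_chg_amt        => [ 'det', 'trailer_counter' ],
                # ...the remaining p2h_* detail fields all map to ['det', 'trailer_counter']
            );
            my ($suffix, $key) = @{ $target{$field} };
            my $table = "convert." . $prefix{$type} . $suffix . $proc;
            my $sql   = "update $table a
                         set $field=(select trim(edit_value) from convert.sql_edits b where a.$key=b.id)
                         where exists (select 1 from convert.sql_edits b where a.$key=b.id)";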

        And having cleaned that sheer volume of setup out of the way, and thus determined that it can't possibly be the cause of the problem, the only part left is:

        $sth=$dbh->do($sql)
            || &error("Unable to perform update statement:<br /><br />$sql<br /><br />$DBI::errstr\n");
        $updated=$sth->rows;
        $dbh->commit()
            || &error("Unable to commit changes made by the update statement: <br /><br />$sql<br /><br />$DBI::errstr\n");
        $sth->finish;

        But the documentation says that $dbh->do returns a row count, not a statement handle. So there you go.
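
        A corrected sketch of those lines, keeping the original error-handling style:

            # do() returns the number of rows affected (or undef on failure),
            # so there is no statement handle to call rows() or finish() on.
            $updated = $dbh->do($sql)
                || &error("Unable to perform update statement:<br /><br />$sql<br /><br />$DBI::errstr\n");
            $dbh->commit()
                || &error("Unable to commit changes made by the update statement: <br /><br />$sql<br /><br />$DBI::errstr\n");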

        Finally, why bother with the convert.sql_edits table anyway? For every record in the table being updated, get the counter value, look it up in @edits, and set the appropriate field. That'd probably save a lot of grief: you'd have one database call per record updated rather than one per convert line (which may or may not be better), but the rows wouldn't all have to be deleted afterward, either.
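
        A rough sketch of that alternative (shown for the dx/counter case; the table and column names are taken from the original statements):

            # Hypothetical rewrite that skips convert.sql_edits entirely:
            # hash the uploaded lines, walk the target table's counters, and
            # update each record whose counter appears in the hash.
            my %edit = map { split /\|/, $_, 2 } @edits;
            my $sel  = $dbh->prepare("select counter from ${table}dx$proc");
            my $upd  = $dbh->prepare("update ${table}dx$proc set $field=? where counter=?");
            $sel->execute;
            while ( my ($counter) = $sel->fetchrow_array ) {
                $upd->execute( $edit{$counter}, $counter ) if exists $edit{$counter};
            }
            $upd->finish;
            $dbh->commit;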

        --
        Tommy
        Too stupid to live.
        Too stubborn to die.

Re: dbi update statement is very slow
by ctaustin (Sexton) on Aug 15, 2002 at 14:52 UTC
    Thanks for the suggestions. I have made some of the modifications suggested, specifically changing the insert-statement loop and cleaning up my statement handles.

    I still wanted/needed to use the intermediate table so that I could take advantage of an index. The target table to update can contain 40K+ unindexed records. An update statement like update owner.table set field=new_value where xyz=123 is extremely slow and would mean a full table scan of these very large tables for each update.

    By using the temporary table I can take advantage of an index on the counter value and scan the table only once. This proves to be much more efficient.
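
    (For reference, a hypothetical version of that index, created once through DBI; the index name is an assumption:)

        # Hypothetical -- the index name is not from the post.
        $dbh->do(q{create index convert.sql_edits_id_idx on convert.sql_edits (id)});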

    Things seem to be working much better. I ran an update of 15,000 records in a little over a minute (this includes the FTP, load, and update). An update of 41,000 records hung once, but I haven't been able to reproduce that.

    I appreciate all the help; I think I am good to go.

    Regards, Tony