in reply to Preventing database handles from going stale

Beware - that might not be a timeout. That's very similar to the error you see from MySQL when you try to use the same connection in two processes (usually as a result of forking). If your process does any forking I suggest you take a close look at how you're managing connections - using DBI's trace functionality can help here since it will show you which handle is being used by each process.
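
For instance, a minimal sketch of turning on tracing; the trace level and log path here are just illustrative. Trace lines are tagged with each handle's address, which makes it easy to spot a forked child reusing the parent's handle:

```perl
use DBI;

# Global trace at level 1: logs every DBI method call on every
# handle to STDERR, tagged with the handle involved.
DBI->trace(1);

# Or trace a single handle to a file instead:
# $dbh->trace(2, '/tmp/dbi-trace.log');
```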

-sam


Re^2: Preventing database handles from going stale
by dsheroh (Monsignor) on Feb 05, 2007 at 18:47 UTC
    Oooooh... Thanks for the tip! I am forking, but the child process should just be regenerating a graphic using in-memory data, then exiting without touching the db. I'll double-check to be sure that it's not trying to use the connection while doing any of its work.

    Oh. Wait. I just remembered something.

    I'm using a homegrown DBI-helper module that exports a $dbh and centralizes the db connect code, so my other modules don't have to worry about that stuff. It also has an

    END { $dbh->disconnect; }

    block. The forked children aren't using the db for anything, but if they execute that END block when they exit...

      Have the child set $dbh->{InactiveDestroy} = 1; and then undef $dbh. That keeps the child from closing the parent's connection when it exits. After that, you'll just have to fix the END block so it doesn't try to call disconnect on an undefined value:

      END { $dbh->disconnect if $dbh; }

      I wrote about this and more in DBI, fork, and clone.
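
      A sketch of the whole pattern (the connect arguments are elided, and the child's work is a placeholder); the child disarms its copy of the handle before exiting:

```perl
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect(...);   # connect args elided

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: mark our copy of the handle so that destroying it (or
    # running the END block) won't tear down the parent's connection.
    $dbh->{InactiveDestroy} = 1;
    undef $dbh;

    # ... do the child's work, no db access ...
    exit 0;
}

# Parent keeps using $dbh as before.
waitpid($pid, 0);

END { $dbh->disconnect if $dbh; }
```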

        Excellent suggestion and some good info in your linked writeup, too. Thanks!

      but if they execute that END block when they exit...

      They do. Even without the END block, the problem would still exist, since the child's copy of $dbh would be destroyed when it exits, and the destructor calls disconnect (after issuing a warning).
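
      You can see this without DBI at all. A minimal self-contained demo: the child exits immediately without doing anything, yet the END block still fires on its way out and appends a line to a shared log file:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my ($fh, $log) = tempfile();
close $fh;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    exit 0;    # child does nothing, but END still runs on its way out
}
waitpid($pid, 0);

# By now the child's END has already fired once.
open my $in, '<', $log or die $!;
my @ran = <$in>;
close $in;
print "END fired ", scalar(@ran), " time(s) so far (child only)\n";

END {
    if (defined $log) {
        open my $out, '>>', $log or die $!;
        print {$out} "END ran in pid $$\n";
        close $out;
    }
}
```

      The same inheritance applies to object destructors, which is why a $dbh copied into a child gets disconnected even when the child never touches the database.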

      Sorry, I don't have any good solution short of connecting after the fork. Maybe you can create a pool of reusable children early on? Maybe you can exec within the child, even if only to relaunch Perl? Maybe you can add the work to a job list that's monitored by a cron job instead of forking?