in reply to Preventing database handles from going stale

You could use ping() and connect_cached(), wrapped in a subroutine like so:
    sub db_keepalive {
        # if connection is dead
        if ( !$dbh->ping ) {
            # reopen it
            $dbh = DBI->connect_cached( $data_source, $username, $password, \%attr )
                or die $DBI::errstr;
        }
    }
Then call that before every query. Note that this is untested code.

__________
The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.
- Terry Pratchett

Replies are listed 'Best First'.
Re^2: Preventing database handles from going stale
by dsheroh (Monsignor) on Feb 05, 2007 at 17:31 UTC
    Aside from using connect_cached, that's basically the same as what my current ping || reinit_db is doing and has the same problem: $dbh can go stale after the ping but before the query. Calling it before each query reduces the odds of that happening by reducing the time between the two events, but can't prevent it entirely.
      Hmmm...I assume you're doing the queries through some kind of abstraction layer, rather than directly calling the DBI functions, which is causing the delay? If this is the case, you may need to modify said abstraction layer to do the "ping or reconnect" logic.

      Example code of one of these queries might help.
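      One way to picture the "push it into the abstraction layer" idea is a small handle-holder object that pings lazily and reconnects on demand, so callers never touch a stale handle directly. This is only a sketch: the `MyDB` class, its `handle` method, and the connector-callback design are illustrative assumptions, not code from the original post; a real version would wrap `DBI->connect_cached`.

      ```perl
      use strict;
      use warnings;

      # Illustrative only: MyDB and its method names are assumptions.
      # The $connector callback stands in for DBI->connect_cached(...).
      package MyDB;

      sub new {
          my ( $class, $connector ) = @_;
          my $self = {
              connector => $connector,        # returns a fresh handle
              dbh       => $connector->(),
          };
          return bless $self, $class;
      }

      sub handle {
          my ($self) = @_;
          # "Ping or reconnect": replace the cached handle whenever it
          # stops answering, so every caller gets a live one.
          $self->{dbh} = $self->{connector}->()
              unless $self->{dbh} && $self->{dbh}->ping;
          return $self->{dbh};
      }

      package main;
      ```

      Callers would then write `$db->handle->prepare(...)` instead of touching a global `$dbh`, which keeps the ping as close to the query as the abstraction allows.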


        Nope, I am using DBI directly, so the delay is very small. In practical terms, running the ping immediately before each query would probably be something over 99% effective, but it could never be 100% effective without implementing some kind of OS-level transactional capability to guarantee that Postgres doesn't get any processing time between the ping and the query. (And that's without even getting into cases where the db is running on a separate server with an independent CPU...)
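          Since the ping-then-query window can never be closed completely, one common pattern is to treat the query failure itself as the signal: attempt the query, and if it dies, reconnect and retry once. This is a hedged sketch, not code from the thread; the subroutine name and the callback-based shape are assumptions for illustration (in real code `$reconnect` would call `DBI->connect_cached` and `$do_query` would run the actual statement).

          ```perl
          use strict;
          use warnings;

          # Illustrative names: query_with_retry, $reconnect, $do_query
          # are all assumptions, not part of the original post.
          sub query_with_retry {
              my ( $dbh_ref, $reconnect, $do_query ) = @_;
              my $result = eval { $do_query->($$dbh_ref) };
              if ($@) {
                  # The handle went stale between ping and query;
                  # reopen it and try exactly once more.
                  $$dbh_ref = $reconnect->();
                  $result   = $do_query->($$dbh_ref);
              }
              return $result;
          }
          ```

          Unlike ping-before-query, this closes the race entirely for a single lost connection, at the cost of needing the query to be safe to re-run (or wrapped in a transaction that can be restarted).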