dsheroh has asked for the wisdom of the Perl Monks concerning the following question:

I'm working on a daemon which does intermittent bursts of work and commits some of it to a PostgreSQL database. These bursts can sometimes be 10-15 minutes apart and the current bane of my existence is
DBD::Pg::db do failed: server closed the connection unexpectedly
	This probably means the server terminated abnormally before or while processing the request.
I assume this is a timeout issue, but all the searching I've done (here, Google, etc.) has only turned up information on using signals to set DBI/query timeouts, not on how to prevent them. I have also attempted in vain to find any way of changing or disabling connection idle timeouts via DBI, DBD::Pg, or the Postgres configuration.

I am also frequently seeing

message type 0x43 arrived from server while idle
message type 0x5a arrived from server while idle

The best I've been able to achieve so far is to do a $dbh->ping || initialize_database_connection(); right before going into my main processing. That mostly works but, race conditions being what they are, I still get the DBD::Pg::db error on occasion when $dbh goes stale between the ping and the next query. I had actually been hoping that whatever test $dbh->ping does would itself be enough to keep the connection alive, but no such luck.

Any suggestions on how to keep this db connection from going bad on me?


Update: After a suggestion from samtregar that this might not actually be a timeout issue, it turned out that I wasn't being careful enough when mixing fork and DBI. Thanks to all who offered ideas along the way.

Replies are listed 'Best First'.
Re: Preventing database handles from going stale
by EvanK (Chaplain) on Feb 05, 2007 at 16:52 UTC
    You could use ping() and connect_cached(), wrapped in a subroutine like so:
    sub db_keepalive {
        # if connection is dead
        if (!$dbh->ping) {
            # reopen it
            $dbh = DBI->connect_cached($data_source, $username, $password, \%attr)
                or die $DBI::errstr;
        }
    }
    Then call that before every query...note that this is untested code.

    __________
    The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.
    - Terry Pratchett

      Aside from using connect_cached, that's basically the same as what my current ping || reinit_db is doing and has the same problem: $dbh can go stale after the ping but before the query. Calling it before each query reduces the odds of that happening by reducing the time between the two events, but can't prevent it entirely.
        Hmmm...I assume you're doing the queries through some kind of abstraction layer rather than calling the DBI functions directly, and that's what's causing the delay? If so, you may need to modify said abstraction layer to do the "ping or reconnect" logic.

        Example code of one of these queries might help.
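
        A minimal sketch of what that choke-point might look like inside such an abstraction layer (untested; run_query, $data_source, $username, $password, and %attr are hypothetical names):

```perl
# hypothetical wrapper: every query goes through one choke point
# that pings (and reconnects if needed) immediately before executing
sub run_query {
    my ($sql, @bind) = @_;
    unless ($dbh && $dbh->ping) {
        $dbh = DBI->connect_cached($data_source, $username, $password, \%attr)
            or die $DBI::errstr;
    }
    return $dbh->selectall_arrayref($sql, undef, @bind);
}
```

        This narrows the ping-to-query window as far as it can go without retry logic, though it still can't close it entirely.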

        __________
        The trouble with having an open mind, of course, is that people will insist on coming along and trying to put things in it.
        - Terry Pratchett

Re: Preventing database handles from going stale
by samtregar (Abbot) on Feb 05, 2007 at 18:37 UTC
    Beware - that might not be a timeout. That's very similar to the error you see from MySQL when you try to use the same connection in two processes (usually as a result of forking). If your process does any forking I suggest you take a close look at how you're managing connections - using DBI's trace functionality can help here since it will show you which handle is being used by each process.

    -sam

      Oooooh... Thanks for the tip! I am forking, but the child process should just be regenerating a graphic using in-memory data, then exiting without touching the db. I'll double-check to be sure that it's not trying to use the connection while doing any of its work.

      Oh. Wait. I just remembered something.

      I'm using a homegrown DBI-helper module that exports a $dbh and has centralized db connect code and so forth so all my other modules don't have to worry so much about that stuff. It also has an

      END { $dbh->disconnect; }
      block. The forked children aren't using the db for anything, but if they execute that END block when they exit...
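
      One way to sidestep that entirely, as a hedged sketch: have the child bail out with POSIX::_exit, which terminates the process without running END blocks or destructors (do_child_work is a stand-in for the graphic-regeneration code):

```perl
use POSIX ();

my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    do_child_work();   # regenerate the graphic from in-memory data
    POSIX::_exit(0);   # immediate exit: END blocks and destructors are skipped
}
```

      Note that POSIX::_exit also skips stdio buffer flushing, so the child should not rely on buffered output.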

        Have the child set $dbh->{InactiveDestroy} = 1; and then undef the $dbh. That should keep it from disconnecting the parent's connection. After that, you'll just have to fix the END block so it doesn't try to call disconnect on an undefined value.

        END { $dbh->disconnect if $dbh; }

        I wrote about this and more in DBI, fork, and clone.
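
        Put together, the fork might look like this (a sketch; do_child_work stands in for the child's real job):

```perl
my $pid = fork();
die "fork failed: $!" unless defined $pid;
if ($pid == 0) {
    # child: make sure handle destruction won't close the parent's socket
    $dbh->{InactiveDestroy} = 1;
    undef $dbh;
    do_child_work();
    exit 0;
}
# parent continues to use $dbh normally
```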

        but if they execute that END block when they exit...

        They do. Even without the END block, the problem would still exist, since the child would call $dbh's destructor, which calls disconnect (after issuing a warning).

        Sorry, I don't have any good solution short of connecting after the fork. Maybe you can create a pool of reusable children early on? Maybe you can exec within the child, even if only to relaunch Perl? Maybe you can add to a job list which is monitored by a cron job, instead of forking?

Re: Preventing database handles from going stale
by dirving (Friar) on Feb 05, 2007 at 17:29 UTC

    You could wrap the code where you are actually using the database in an eval block to trap the error, and then when an exception occurs you can just re-connect and try again. (Assuming you are using the RaiseError option, anyway -- otherwise just check the return value from your DBI calls.)
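
    A sketch of that pattern, assuming RaiseError => 1 and reusing the OP's initialize_database_connection() (assumed here to refresh the global $dbh):

```perl
# run a statement, reconnecting and retrying once if the handle went stale
my $rows = eval { $dbh->do($sql) };
if ($@) {
    warn "query failed ($@); reconnecting and retrying once";
    initialize_database_connection();   # assumed to reassign $dbh
    $rows = $dbh->do($sql);             # a second failure propagates normally
}
```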

    -- David Irving
      That would definitely work for handling the timeouts, but I'd prefer to prevent them if possible. (Postgres allows 100 concurrent connections by default and I don't think I've ever used more than 3 of them at a time, so having one just hang out and rarely get used is not an issue.)
Re: Preventing database handles from going stale
by sgt (Deacon) on Feb 05, 2007 at 17:02 UTC

    Weird. Two things come to mind:

  • You could reconnect every x seconds.
  • Maybe you could also "poll" the db in some event loop with a simple query ("select sysdate from dual;" on Oracle; "SELECT 1;" would be the Postgres equivalent). It is possible that you would need to fork to do that cleanly (with some synchronization between parent/child). Really, the DBD ought to provide something for this.

    hth --stephan
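
      The polling idea could slot into the OP's existing scheduler as something like this (untested; the interval and sub name are arbitrary):

```perl
# cheap keepalive to schedule every minute or so during idle stretches;
# "SELECT 1" is a trivial no-op on Postgres
sub db_keepalive_tick {
    eval { $dbh->do('SELECT 1') };
    warn "keepalive failed: $@" if $@;
}
```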
      I had initially assumed that ping was already doing something like your second suggestion, but apparently not. I've already got a decent scheduling system in place, so setting it up to do either of those every n seconds would be relatively trivial.

      I agree that it seems odd for DBI not to have a generic method (implemented by each DBD) for dealing with this sort of thing.

Re: Preventing database handles from going stale
by Khen1950fx (Canon) on Feb 05, 2007 at 19:40 UTC
    Since you're working on a daemon, you could ensure the connection by borrowing the ensure_connected method from DBIx::Class::Storage::DBI:

    sub ensure_connected {
        my ($self) = @_;
        unless ($self->connected) {
            $self->_populate_dbh;
        }
    }