marcos has asked for the wisdom of the Perl Monks concerning the following question:

I'm writing a UNIX daemon which is used by clients to retrieve some data from an Oracle database. The daemon uses the Net::Server module and, of course, uses DBI to connect to Oracle.
My question is about the connection to Oracle. I want to connect to Oracle only once (of course), so in the initialization phase I create a database handler.
When a client makes a request to the daemon, a new child is forked (OK, I can use Net::Server::PreFork to have a pre-forked pool of children). The child actually executes the query and saves the results for the client somewhere in the file system. This architecture implies that the database handler is shared among all forked children. Is this a problem?
I've noticed that if a child dies (for example, because it encounters a problem while saving data for the client), the daemon loses the connection to Oracle. I've read the DBI man page and found that DBI handles have an attribute called InactiveDestroy which "is specifically designed for use in UNIX applications which 'fork' child processes". Does anyone have any experience with problems of this kind?
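
For reference, here is a minimal sketch of how I understand InactiveDestroy would be set in the child right after the fork (the DSN and credentials are just placeholders, and I haven't verified this against Oracle):

    use strict;
    use warnings;
    use DBI;

    # Parent connects once (DSN and credentials are placeholders).
    my $dbh = DBI->connect('dbi:Oracle:MYDB', 'scott', 'tiger',
                           { RaiseError => 1, AutoCommit => 0 })
        or die DBI->errstr;

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: mark the inherited handle so that when this process
        # exits (or dies), its DESTROY does not close the parent's
        # Oracle connection.
        $dbh->{InactiveDestroy} = 1;

        # ... do the child's work here, without using $dbh while the
        #     parent might also be using it ...
        exit 0;
    }

    # Parent keeps using $dbh as before.
    waitpid($pid, 0);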

Thank you very much.
marcos

Replies are listed 'Best First'.
Re: Database handler shared among forked processes
by mpeppler (Vicar) on Jun 07, 2002 at 18:17 UTC
    I'm not an Oracle specialist, but from what I've read on the dbi-users mailing list it seems that what you are trying to do is not supported.

    You might get away with setting InactiveDestroy in the children, but you're almost certainly better off with one connection in each child.

    This will also avoid problems if two child processes try to use the same handle at the same time.
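
    As a rough sketch, the per-child connection could be opened in Net::Server::PreFork's child_init_hook (the class name, DSN, and credentials below are only placeholders):

        package MyDaemon;
        use strict;
        use warnings;
        use base 'Net::Server::PreFork';
        use DBI;

        # Runs in each child just after it is forked: open a private
        # connection so no handle is ever shared between processes.
        sub child_init_hook {
            my $self = shift;
            $self->{dbh} = DBI->connect('dbi:Oracle:MYDB', 'scott', 'tiger',
                                        { RaiseError => 1, AutoCommit => 0 })
                or die DBI->errstr;
        }

        # Runs when the child is about to exit: close its own connection.
        sub child_finish_hook {
            my $self = shift;
            $self->{dbh}->disconnect if $self->{dbh};
        }

        sub process_request {
            my $self = shift;
            my $dbh  = $self->{dbh};
            # ... run the query with $dbh and write the results to the
            #     file system for the client ...
        }

        MyDaemon->run(port => 9000);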

    Note that I've been able to use shared handles among child processes with Sybase::CTlib, but then Sybase specifically states that connection handles can be shared across forks.

    Michael

      Thank you for your reply. I'm afraid that you are right: I'll have to implement one connection to Oracle in each child. Anyway, I'm not very pleased with this solution: connecting to the database is expensive both in terms of resources and time ... and if my daemon starts getting lots of requests I will end up with a lot of database connections ...
      Perhaps this is an area where Perl is sort of weak? I don't know, I'm a great fan of Perl :)

      Thank you very much,
      marcos
        You can write your system so that your child processes are persistent.

        You then communicate between the master (which gets the requests) and each of the children through some form of pipe (maybe Unix sockets), and the child processes can then avoid re-connecting for each request.

        The code obviously gets more complicated, but it's certainly feasible.
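
        A rough, untested sketch of such a persistent worker, assuming the master forwards one request per connection over a Unix-domain socket (the socket path, SQL, and reply protocol are purely illustrative):

            #!/usr/bin/perl
            # Persistent worker: connects to Oracle once and then serves
            # requests forwarded by the master over a Unix-domain socket.
            use strict;
            use warnings;
            use Socket qw(SOCK_STREAM);
            use IO::Socket::UNIX;
            use DBI;

            my $dbh = DBI->connect('dbi:Oracle:MYDB', 'scott', 'tiger',
                                   { RaiseError => 1, AutoCommit => 0 })
                or die DBI->errstr;

            my $sock_path = '/tmp/mydaemon-worker.sock';
            unlink $sock_path;                # remove stale socket, if any

            my $listener = IO::Socket::UNIX->new(
                Type   => SOCK_STREAM,
                Local  => $sock_path,
                Listen => 5,
            ) or die "cannot listen on $sock_path: $!";

            while (my $conn = $listener->accept) {
                chomp(my $client_id = <$conn>);   # one request per connection
                my $rows = $dbh->selectall_arrayref(
                    'SELECT data FROM client_data WHERE client_id = ?',
                    undef, $client_id);
                # ... write $rows to the file system for the client ...
                print $conn "done\n";
                close $conn;
            }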

        Michael