vancetech has asked for the wisdom of the Perl Monks concerning the following question:

I am programming a server daemon that pre-forks "worker" children to do time-consuming work, and pre-forks a "reporting" child that handles sending the workers' reports back to the database.

I set a 60-second alarm inside an eval{} block that encloses the MySQL database work. No single database operation should take longer than a few seconds, yet after a day or two of uptime the "reporting" child hangs for one to four hours, during which nothing happens, and then dies because of the alarm signal. The eval{} block is unable to trap the die(). Here is my code:

    my $dbh;

    RECONNECT:
    eval {
        my $dsn = "DBI:$driver:database=$database;host=$hostname;mysql_connect_timeout=10";
        $dbh = DBI->connect($dsn, $username, $password,
                            { RaiseError => 1, PrintError => 0, AutoCommit => 1 });
    };
    if( $@ ) {
        goto RECONNECT;
    }

    eval {
        # Set up ALRM signal handler
        local $SIG{ALRM} = sub { die "timeout\n" };
        alarm( 60 );
        $dbh->do("INSERT INTO table_name VALUES ( data, ... )");
        alarm( 0 );
    };
    if( $@ ) {
        if( $@ =~ /timeout/ ) {
            print "Timed out\n";
        }
        else {
            alarm( 0 );
        }
    }
How would you go about setting up a foolproof daemon with a constant mysql server connection?
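One plausible explanation for an alarm that fires but whose die() escapes the eval{} is Perl's "safe signals" (5.8+): a handler installed via %SIG is deferred until the interpreter reaches a safe point, so it cannot interrupt a call blocked inside C code, as DBD::mysql's calls are. A common workaround is to install the handler with POSIX::sigaction so it fires immediately. Below is a minimal, hedged sketch of that pattern; `sleep 10` merely stands in for the blocking database call, and the function and variable names are illustrative, not from the original post:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(SIGALRM);

# Perl 5.8+ defers %SIG handlers until a "safe" point, so an ALRM set
# with $SIG{ALRM} may never interrupt a call that blocks inside C code
# (as DBD::mysql does).  POSIX::sigaction installs an immediate
# ("unsafe") handler that can break out of such a call.
sub with_timeout {
    my ($seconds, $code) = @_;
    my $result = eval {
        my $action = POSIX::SigAction->new(
            sub { die "timeout\n" },        # fires immediately, not deferred
            POSIX::SigSet->new(SIGALRM),
        );
        my $old = POSIX::SigAction->new;     # container for the old handler
        POSIX::sigaction(SIGALRM, $action, $old)
            or die "cannot install SIGALRM handler: $!\n";
        alarm($seconds);
        my $r   = eval { $code->() };        # inner eval: alarm always reset
        my $err = $@;
        alarm(0);
        POSIX::sigaction(SIGALRM, $old);     # restore the previous handler
        die $err if $err;
        $r;
    };
    return ($result, $@);
}

# Demonstration: a 10-second "query" (stood in for by sleep) against a
# 2-second timeout.  In the real daemon the coderef would wrap the
# $dbh->do(...) call.
my ($res, $err) = with_timeout(2, sub { sleep 10; "finished" });
print $err eq "timeout\n" ? "Timed out\n" : "Completed: $res\n";
```

Whether this cures the multi-hour hang depends on where the child is actually stuck; it only guarantees the handler runs when the alarm fires, instead of waiting for the blocking call to return on its own.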

Replies are listed 'Best First'.
Re: Perl daemon with persistent mysql connection problems
by jesuashok (Curate) on Apr 03, 2006 at 09:30 UTC

    First, you should know that you cannot assume a DB operation will always be quick. Consider the case where a lock is held on the table: an INSERT issued during that time can block for a substantial amount of time.

    Also, while the DB operation is in progress, that work is happening separately from your Perl script. If the parent raises the alarm signal in the meantime, the forked process can be left orphaned, and that is where the big mess-up happens.

    To handle this situation, maintain a flag and keep track of every operation your script performs.
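    The "maintain a flag" idea above could be sketched as a small state file the reporting child updates before each operation, so a supervisor can spot a child stuck in one state for too long. This is a rough illustration only; the file name, states, and threshold are invented for the example, not taken from the original post:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Rough sketch of the "keep a flag" idea: before each DB operation the
# reporting child records what it is about to do and when; a supervisor
# (or the parent) can then detect a child stuck in one state too long.
# The file name and states are illustrative.
my $state_file = "/tmp/reporter.state";

# Child side: record the current phase with a timestamp.
sub set_state {
    my ($state) = @_;
    open my $fh, '>', $state_file or die "cannot write $state_file: $!";
    print {$fh} time(), " $state\n";
    close $fh;
}

# Supervisor side: how long has the child been in its current state?
sub state_age {
    open my $fh, '<', $state_file or return;
    my ($ts, $state) = split ' ', scalar <$fh>, 2;
    close $fh;
    chomp $state;
    return (time() - $ts, $state);
}

# Child marks each phase of its work.
set_state('inserting');
# ... the $dbh->do(...) call would run here ...
set_state('idle');

# Supervisor checks the flag and could restart a child stuck too long.
my ($age, $state) = state_age();
print "reporter has been '$state' for ${age}s\n";
```

    With this in place, the parent can kill and re-fork a reporting child whose state has not changed for, say, several minutes, instead of waiting hours for the alarm to fire.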

    "Keep pouring your ideas"