Pocadotty has asked for the wisdom of the Perl Monks concerning the following question:

I am new to writing multi-threaded Perl applications, but I am not a total n00b to perl... well maybe just a kinda n00b ;)

In the past when I wanted to implement a timeout event, I used alarms. But I read in the docs that there can only be one alarm pending per process, so that will not do for my multi-threaded program with several worker threads all needing their own alarm concurrently.

The task my worker threads perform is running bash test scripts that they receive from an action queue, and reporting the status of each test to a finished queue. The scripts' run times range from minutes to several hours, but each script's stdout should change relatively often. If the stdout for a given action does not change for a significant period of time, the bash script has frozen, so I need to record the failure and run the next script.

Here is the general idea of the approach I would take in a single process program.

eval {
    local $SIG{ALRM} = sub { die "timeout\n" };   # without a handler, SIGALRM would just kill the process
    open( HANDLE, "$cmdstring |" ) or die "open failed: $!";
    alarm $timeout;
    while (<HANDLE>) {
        # process and record info from command's stdout
        alarm $timeout;    # reset the alarm on each line of output
    }
    alarm 0;
};
if ($@) {
    # timeout occurred
    # record the fail
} else {
    # do normal stuff
}

I have thought of writing system times to a shared variable for each worker and then running another thread as a watchdog over the other workers, killing threads as they hang, but this would be very messy: I would have to create new threads to take their place, which could lead to memory nightmares and other issues...
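Roughly, what I have in mind for that watchdog would be something like the sketch below (just a sketch; %last_seen and $TIMEOUT are made-up names):

    use strict;
    use warnings;
    use threads;
    use threads::shared;

    my %last_seen :shared;    # thread id => epoch seconds of last stdout activity
    my $TIMEOUT = 300;        # seconds of silence before we call a worker hung

    # Each worker would update its entry every time it reads a line of output:
    #   { lock %last_seen; $last_seen{ threads->tid } = time; }

    # Watchdog thread: scan the table and flag workers that have gone quiet.
    my $watchdog = threads->create( sub {
        while (1) {
            sleep 10;
            lock %last_seen;
            for my $tid ( keys %last_seen ) {
                if ( time - $last_seen{$tid} > $TIMEOUT ) {
                    warn "worker $tid appears to be hung\n";
                    # killing the thread here is the messy part I want to avoid;
                    # killing the child *process* it is reading from seems cleaner
                }
            }
        }
    } );
    $watchdog->detach;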

Replies are listed 'Best First'.
Re: Thread Safe alarms?
by Corion (Patriarch) on Apr 14, 2011 at 07:09 UTC

    As alarm is usually implemented using signals (except on Windows, where signals don't exist), mixing threads and alarm is a bad idea, because mixing threads and signals is a bad idea.

    If you are on Windows, alarm is implemented by sending a Windows window timer message to the main thread. This won't interrupt long running operations in the main thread but will otherwise be threadsafe.

    I would look some more at the idea of a monitor thread which watches the execution (time) of the worker threads. Note that killing a thread isn't really a nice thing to do either, and can leave resources allocated until the main Perl process exits.

    If you are on Unix, have you looked at Parallel::ForkManager and/or runN? On Windows, I would avoid fork (because it is, again, implemented using threads) in favour of system(1,@cmd), which launches a process separate from your main program.
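    A rough sketch of the fork-per-test idea with Parallel::ForkManager (the concurrency limit, $timeout and @scripts below are placeholders, not anything from this thread): because each test runs in its own forked process, every child gets its own alarm, so the one-alarm-per-process limit stops mattering.

        use strict;
        use warnings;
        use Parallel::ForkManager;

        my $pm      = Parallel::ForkManager->new(4);   # up to 4 concurrent test scripts
        my $timeout = 600;                             # placeholder: seconds of stdout silence allowed
        my @scripts = @ARGV;                           # placeholder: the bash test scripts to run

        for my $cmd (@scripts) {
            $pm->start and next;        # parent: schedule the next script
            # child: it is its own process, hence it has its own alarm
            my $status = eval {
                local $SIG{ALRM} = sub { die "timeout\n" };
                alarm $timeout;
                open my $out, '-|', $cmd or die "open: $!";
                while (<$out>) {
                    alarm $timeout;     # reset on every line of output
                    # process / record the line here
                }
                close $out;
                alarm 0;
                $? >> 8;                # exit status of the script
            };
            # a real version would also kill the hung script's process (group) on timeout
            $pm->finish( defined $status ? $status : 255 );
        }
        $pm->wait_all_children;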

      As far as Unix vs. Windows goes, ideally both. Ultimately there are four target OSes for this program to run on: SUSE 11, RHEL 6, Win Server 2008, and Win Server 2008 R2.

      I am developing on a client box running OpenSUSE 11.4, but the final code will live on servers.

      I can of course make code changes to port between Unix and Windows, but I would prefer to keep this to a minimum, since I will also have a significant task porting the test scripts to run on Windows.

      I suppose one approach would be to capture the process ID returned from openN or runN and pass it to a watchdog thread along with the seconds since the epoch. The watchdog thread would watch the time on each process ID, and kill the system process rather than the perl thread. Once the process is killed, the handle will hit EOF, causing the while loop to exit and the thread to continue past the hang. Somewhat like:

      use IPC::Open2;

      my $pid = open2( $chldout, $chldin, $cmdstring );
      while (<$chldout>) {
          # process script output strings
          my $mytime = time;
          $watchdogQ->enqueue("$pid $mytime");   # "pet the watchdog"
      }
      waitpid $pid, 0;
      # use $? to determine whether this was the timeout case or not
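      The watchdog end of that queue might look roughly like this (just a sketch; $watchdogQ, %last_pet and $TIMEOUT are names I am making up here):

          use strict;
          use warnings;
          use threads;
          use Thread::Queue;

          my $watchdogQ = Thread::Queue->new;   # shared with the worker threads
          my $TIMEOUT   = 300;                  # seconds of silence before a pid is considered hung

          my $watchdog = threads->create( sub {
              my %last_pet;                     # pid => last time a worker "petted" us for it
              while (1) {
                  # drain any pending "$pid $time" messages without blocking
                  while ( defined( my $msg = $watchdogQ->dequeue_nb ) ) {
                      my ( $pid, $when ) = split ' ', $msg;
                      $last_pet{$pid} = $when;
                  }
                  for my $pid ( keys %last_pet ) {
                      if ( time - $last_pet{$pid} > $TIMEOUT ) {
                          kill 'TERM', $pid;    # kill the hung system process, not the perl thread
                          delete $last_pet{$pid};
                      }
                  }
                  # (a worker would also want to tell the watchdog when a pid finished cleanly)
                  sleep 5;
              }
          } );
          $watchdog->detach;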

      Is openN / runN implemented by launching a separate process from my main program, or is it implemented with something thread-like? And is this approach thread safe?

        This is a somewhat simpler approach, and it is thread-safe:

        #! perl -sw
        use strict;
        use threads;
        use threads::shared;

        our $T //= 5;            # seconds to wait before signalling the child (set with -T=n)

        my $pid :shared;

        my( $t ) = threads->create( sub{
            my @input;
            # child process prints 1..5, one line per second
            $pid = open my $in, '-|', q[ perl -wle"$|++; print() and sleep 1 for 1.. 5" ] or die $!;
            push @input, $_ while <$in>;
            return @input;
        } );

        sleep $T;
        kill 21, $pid;           # SIGBREAK: terminate the child process, not the thread

        my @input = $t->join;
        print for @input;

        __END__
        c:\test>junk82 -T=4
        Terminating on signal SIGBREAK(21)
        1
        2
        3
        4

        c:\test>junk82 -T=5
        Terminating on signal SIGBREAK(21)
        1
        2
        3
        4
        5

        c:\test>junk82 -T=6
        1
        2
        3
        4
        5

        It does have the caveat that you may lose some output buffered by the child process, but it is otherwise quite reliable and should work wherever threads do.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re: Thread Safe alarms?
by cdarke (Prior) on Apr 14, 2011 at 08:51 UTC
    You might be able to use condition variables and cond_timedwait. See threads::shared.
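    A minimal sketch of cond_timedwait, just to illustrate, assuming a shared flag that the worker sets when it finishes ($done and $timeout are made-up names):

        use strict;
        use warnings;
        use threads;
        use threads::shared;

        my $done :shared = 0;
        my $timeout = 10;                       # seconds to wait for the worker

        my $worker = threads->create( sub {
            sleep 3;                            # stand-in for running the test script
            lock $done;
            $done = 1;
            cond_signal $done;                  # wake whoever is waiting on $done
        } );

        {
            lock $done;
            my $deadline = time() + $timeout;   # cond_timedwait takes an *absolute* time
            until ($done) {
                # returns false once the deadline has passed without a signal
                last unless cond_timedwait( $done, $deadline );
            }
            print $done ? "worker finished\n" : "timed out waiting for worker\n";
        }
        $worker->join;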

      That would require you to be in a wait state on the condition variable while simultaneously waiting for IO that will never complete, which isn't currently possible.

      This is a recurrent problem for which Perl has no good solution. What is required is an asyncRead() function.

      This is possible at the OS level on all modern OSes: aio_read() on *nix, ReadFile() with an OVERLAPPED struct on Windows, etc. But AFAIK these have never been made available from Perl or CPAN.

      There are several modules on CPAN with AIO in their titles, but to the best of my knowledge they all mock up true asynchronous IO using event loops and other user-space dispatchers, which makes them flaky and non-portable and (again, to the best of my knowledge) unsuitable for use with threads.

