Hi @all
I'm trying to parallelize some Perl code that serves as glue for some numerically intensive Fortran executables. I've gone through the threads module's documentation on how to start threads and how to wait until they end, but my problem doesn't seem to be covered there, even though it's a problem that (I presume) many others have run into. I want to use a boss/worker model in which 4 workers should be running all the time; each worker runs another Perl script that in turn calls the Fortran executables. My problem is that I don't see how the boss can check whether ANY of the 4 workers has finished. I have tried setting a flag to zero when a thread starts and to 1 when it has finished its task, but that seems like a rather dirty (and possibly thread-unsafe) method. Then I thought about doing something like @array = threads->list and reading out $#array once a second, but somehow that did not work. The last method I tried is in the code at the end of this post (using a counter variable), and that did not work either. Is there nothing better out there (possibly "out of the box")?
Thanks
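To make the threads->list idea above concrete before I show my actual attempt, here is a minimal sketch of what I mean. It assumes a threads.pm recent enough to support threads->list(threads::joinable), and @jobs and the body of worker() are placeholders, not my real code: the workers are left non-detached, and the boss joins whichever ones have already finished before deciding whether it may start another.

use strict;
use warnings;
use threads;

my $maxworkers = 4;
my @jobs = ();    # placeholder: in the real script these would be the .txt files

sub worker {
    my ($job) = @_;
    # ... placeholder: run the script that calls the Fortran executables ...
}

# Join (and thereby clean up) every worker that has already ended.
sub reap_finished {
    $_->join for threads->list(threads::joinable);
}

for my $job (@jobs) {
    reap_finished();
    # threads->list() in numeric context is the number of still-live threads.
    while (threads->list() >= $maxworkers) {
        sleep 1;                     # still polling, unfortunately
        reap_finished();
    }
    threads->create(\&worker, $job);
}
$_->join for threads->list();        # wait for the last workers to finish

This still polls with sleep, which is part of what I'd like to avoid. My actual (non-working) attempt with a counter is below.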
use File::Find;
use File::Copy;
use File::Path;
use threads;
use Thread::Queue;

my $FilenameQueue         = Thread::Queue->new;
my $FilenamewithpathQueue = Thread::Queue->new;
my $DirectorynameQueue    = Thread::Queue->new;
my $ThreadnumberQueue     = Thread::Queue->new;
my $threadnumber       = 0;
my $thread;
my $maxnumberofworkers = 3;

if ($#ARGV != 0) { die "Usage: Perl runall.pl [storagedirectoryname]\n"; }
my $storagedirectory  = $ARGV[0];
my $workbasedirectory = $storagebasedirectory . "/" . $storagedirectory;
mkdir($workbasedirectory);

print "Starting recursive directory search!\n";
find(\&docalc, $basedirectory);

sub docalc {
    /\.txt$/ or return;
    while ($threadnumber >= $maxnumberofworkers) { sleep(1); }
    $FilenameQueue->enqueue($_);
    $FilenamewithpathQueue->enqueue($File::Find::name);
    $DirectorynameQueue->enqueue($File::Find::dir);
    $thread = threads->new(\&worker);
    $threadnumber++;
    $thread->detach;
    print "New thread created!\n";
}

sub worker {
    my @filename = split(/\./, $FilenameQueue->dequeue);
    print "Processing $filename[0]!\n";
    my $workdirectory = $workbasedirectory
        . substr($DirectorynameQueue->dequeue, length($basedirectory))
        . "/" . $filename[0];
    $workdirectory =~ s/\n//gi;
    print "Creating directory" . $workdirectory . "\n";
    mkpath($workdirectory);
    copyfiles($workdirectory, $FilenamewithpathQueue->dequeue, $filename[0]);
    print "Processing Calculations...\n";
    open(RUNSCRIPT, ">runscript.bash");
    print RUNSCRIPT "cd $workdirectory\nperl scriptthatincludesfortran.pl >& $workbasedirectory/$filename[0].log";
    close(RUNSCRIPT);
    if (system("bash runscript.bash") != 0) { die "failed!\n"; }
    else                                    { print "done!\n"; }
    print "bash $workdirectory/runscript.bash\n";
    print "\n";
    $threadnumber--;
}
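For completeness, here is the kind of less dirty alternative I've been sketching: a second Thread::Queue used purely as a completion channel, so that the boss can do a blocking dequeue and wake up as soon as ANY worker is done, instead of sleep-polling a counter that isn't even shared between threads. Again, @files and the body of worker() are placeholders, not my actual script.

use strict;
use warnings;
use threads;
use Thread::Queue;

my $done       = Thread::Queue->new;   # completion notifications only
my $maxworkers = 4;
my $running    = 0;
my @files      = ();                   # placeholder: the .txt files to process

sub worker {
    my ($file) = @_;
    # ... placeholder: run the script that calls the Fortran executables ...
    $done->enqueue(threads->tid);      # tell the boss this worker is finished
}

for my $file (@files) {
    if ($running >= $maxworkers) {
        my $tid = $done->dequeue;      # blocks until ANY worker reports back
        threads->object($tid)->join;
        $running--;
    }
    threads->create(\&worker, $file);
    $running++;
}
# wait for the workers that are still running
while ($running-- > 0) {
    threads->object($done->dequeue)->join;
}

Is something along these lines the intended way to do it, or is there a readier-made solution I'm overlooking?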
