Jambo Hamon has asked for the wisdom of the Perl Monks concerning the following question:

#!/usr/bin/perl
=begin

I interpret the calls of the sub spin below as ``promises''. The spin sub
calls "enter" which guarantees that the block will run, but inside that
block now we call spin again, which has to guarantee that another block
will run... Shades of indifference might wash over your face, but take a
look! It's cool.

El Capitano,
Jambo Hamon

P.S. In the context of the last post this is the first ``bit''.
P.P.S Si

=cut

use strict;
use warnings;
use v5.20;    # oh yeah

use Time::HiRes qw(usleep);
use Data::Dumper;

use constant ITER       => 200;
use constant RAND_SLEEP => 200;

use Mutex;

my $red    = Mutex->new;
my $yellow = Mutex->new;
my $green  = Mutex->new;

$red->lock;
$yellow->lock;
$green->lock;

sub spin {    # Mutex, subroutine reference, ...Jazz
    return shift->enter( shift, @_ );
}

my $pid = fork;
my %t;
$t{starttime} = time;    # stats data hash symbol lookup .. what scope is this in once we fork?

die "$!: who knows me?" if not defined $pid;

if ( $pid == 0 ) {    # child: fork1
    say "fork1 here: $$";

    my $pid = fork;
    die "$!: who knows me?" if not defined $pid;

    my $cnt;    # what scope does $cnt get thrown into... if there only were..

    if ( $pid == 0 ) {    # child: fork2
        say "fork2 here: $$";
        # sleep 1;
        while ( $cnt++ < ITER ) {
            # usleep int(rand(10));
            spin( $red, sub {
                # say "fork2\\ waiting yellow";
                spin( $yellow, sub {
                    say "fork2: red yellow";
                    $t{fork2}->{r}++;
                    $t{fork2}->{ry}++;
                    $t{fork2}->{y}++;
                    usleep int( rand(RAND_SLEEP) );
                } );
            } );
            spin( $red, sub {
                spin( $green, sub {
                    say "fork2: red green";
                    $t{fork2}->{r}++;
                    $t{fork2}->{rg}++;
                    $t{fork2}->{g}++;
                    usleep int( rand(RAND_SLEEP) );
                } );
            } );
        }
    }
    else {    # fork1
        # sleep 2;
        while ( $cnt++ < ITER ) {
            # usleep int(rand(5));
            spin( $red, sub {
                # say "fork1\\ waiting green";
                spin( $green, sub {
                    say "fork1: red green";
                    $t{fork1}->{r}++;
                    $t{fork1}->{rg}++;
                    $t{fork1}->{g}++;
                    usleep int( rand(RAND_SLEEP) );
                } );
            } );
            spin( $red, sub {
                spin( $yellow, sub {
                    say "fork1: red yellow";
                    $t{fork1}->{r}++;
                    $t{fork1}->{ry}++;
                    $t{fork1}->{y}++;
                    usleep int( rand(RAND_SLEEP) );    # not necessary but thoughtful
                } );
            } );
        }
        wait;
    }
}
else {    # mothersbaugh ship
    my $cnt;

    # sleep works wonders
    # sleep 2;
    while ( $cnt++ < 1 ) {
        # No one can get red, cos it's locked until I unlock it. PID-based switch.
        # Only true on the first iteration.
        spin( $red, sub {
            spin( $green, sub {
                spin( $yellow, sub {
                    say "mothersbaugh: red green yellow";
                    sleep 1;
                    $t{mothersbaugh}->{r}++;
                    $t{mothersbaugh}->{rg}++;
                    $t{mothersbaugh}->{g}++;
                    $t{mothersbaugh}->{gy}++;
                    $t{mothersbaugh}->{rgy}++;         # aw shucks
                    usleep int( rand(RAND_SLEEP) );    # not necessary but thoughtful
                } );
            } );
        } );

        # You can have similar loops here:
        # now race for red, then wait for yellow
        # spin( $red, sub { # say "mother\\ waiting yellow";
        #     spin( $yellow, sub { say "mother: red yellow"; usleep int(rand(10)); } ) } );
        # now race for red, then wait for green
        # spin( $red, sub { # say "mother\\ waiting green";
        #     spin( $green, sub { say "mother: red green"; usleep int(rand(10)); } ) } );
    }
    wait;
}

END {    # Let the GLOBBLING BEGIN
    print Dumper( \%t );
}

Re: Why won't this Deadlock?
by marioroy (Prior) on Jul 11, 2017 at 06:07 UTC

    Hi Jambo Hamon,

    Regarding Mutex (standalone module) and MCE::Mutex, multiple calls to mutex lock or enter by the same process or thread are safe from within dynamically nested scopes. The behavior is similar to locking using threads and threads::shared.
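    The same-process nesting can be illustrated with core Perl alone. The sketch below uses flock on two temporary files as stand-in mutexes (an analogy only, not how Mutex or MCE::Mutex is implemented): the outer lock is still held while the inner one is taken, mirroring the nested spin()/enter() calls in the original post, and a single process never blocks on itself so long as the locks are distinct.

```perl
use strict;
use warnings;
use Fcntl qw(:flock);
use File::Temp qw(tempfile);

# Two flock-based stand-in mutexes (illustrative analogy only).
my ( $outer_fh, $outer_file ) = tempfile();
my ( $inner_fh, $inner_file ) = tempfile();

flock( $outer_fh, LOCK_EX ) or die "outer lock: $!";
{
    # Nested acquisition while the outer lock is still held,
    # as in spin( $red, sub { spin( $yellow, ... ) } ).
    flock( $inner_fh, LOCK_EX ) or die "inner lock: $!";
    print "holding outer and inner locks\n";
    flock( $inner_fh, LOCK_UN );
}
flock( $outer_fh, LOCK_UN );
```

    Deadlock only enters the picture when two processes take the same pair of locks in opposite orders; nesting by a single holder is just ordinary dynamic scoping.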

    The hash may be shared. Below is the diff output.

    $ diff orig.pl script.pl
    33a34
    > use MCE::Shared;
    51c52
    < my %t;
    ---
    > tie my %t, 'MCE::Shared';
    217c218
    < print Dumper( \%t );
    ---
    > print Dumper( tied(%t)->export( { unbless => 1 } ) );

    The shared hash resides under the shared-manager process. Deep-sharing a non-blessed structure recursively is handled automatically via the TIE-shared interface. To see the key-value pairs, call the export method. The unbless option unblesses any shared array, hash, and scalar object to a non-blessed array, hash, and scalar respectively.

    Regards, Mario

      Hi Mario,

      I'm going on vacation soon, so I'll print out the manuals for MCE::* and try to comment on how this research compares.

      DESCRIPTION A Hobo is a migratory worker inside the machine that carries the asynchronous gene. Hobos are equipped with "threads"-like capability for running code asynchronously. Unlike threads, each hobo is a unique process to the underlying OS. The IPC is managed by "MCE::Shared", which runs on all the major platforms including Cygwin.
      It seems you are way ahead of me in thinking about this. That's a cool venture (MCE::*); I see tonnes of applications for it.

      I appreciate the diff. Thank you.

      Do you have terminology for the scope of data shared between processes?

      So the pie-in-the-sky question is...

      How would I have data shared for the lifetime of a block, and ensure that block runs first amongst all the processes that share it?

      {
          use MCE::Shared;
          my $cnt;
          # tie my $cnt, 'MCE::Shared', { module => 'MCE::Shared::Scalar' }, 0;
          tie my $val, 'MCE::Shared', { module => 'MCE::Shared::Scalar' },
              sub { return $cnt++ };    # just for fun let's make a closure
          # How can I be sure that ($val->() == 0)?
          say( $val->() ? 0 : 1 );      # says 1 if first caller
      }
      My brain is saying that setting a mutex to protect the call to $val would be a recursive solution with no end? E.g. no atomicity, i.e. deadlock. Yeah, if it were synchronous it would work, but it's not.
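      For the narrower "am I the first caller?" part of the question, here is a core-Perl sketch that sidesteps the mutex-recursion worry entirely (not MCE-based; the flag-file path and the i_am_first helper are made up for illustration). It lets the filesystem's atomic O_CREAT|O_EXCL create decide who was first, so no lock has to protect the check:

```perl
use strict;
use warnings;
use Fcntl qw(O_CREAT O_EXCL O_WRONLY);

# Returns 1 for the first process (or call) to claim the flag file,
# 0 for everyone else.  O_CREAT|O_EXCL is atomic at the filesystem
# level, so two callers can never both believe they were first.
sub i_am_first {
    my ($flagfile) = @_;
    return sysopen( my $fh, $flagfile, O_CREAT | O_EXCL | O_WRONLY ) ? 1 : 0;
}

my $flag = "/tmp/first-caller-demo-$$.flag";    # made-up path
print i_am_first($flag) ? "first\n" : "not first\n";    # prints "first"
print i_am_first($flag) ? "first\n" : "not first\n";    # prints "not first"
unlink $flag;
```

      The same trick works across forked processes, since all of them race on one shared filesystem entry rather than on per-process state.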

      I said deadlock is on the horizon because the block could be wrapped in another block with a different mutex controlling the lower one. Got to run to an appointment. I'm going to be late. All the best, Jambo

        Hi Jambo Hamon,

        Enjoy your vacation. When you come back, I'll likely have MCE 1.830 (bug fixes) and MCE::Shared 1.827 released on CPAN. The OO interface for shared objects makes it possible not to worry about mutexes at the application level.

        use MCE::Shared;

        my $val = MCE::Shared->scalar(0);
        $val->incr();
        $val->incrby(20);
        $val->set(40);

        my @pairs = ('aa'..'zz');
        my $oh    = MCE::Shared->ordhash();
        my $ha    = MCE::Shared->hash();
        $oh->assign( @pairs );
        $ha->assign( @pairs );

        $ha->set( counter => 0 );
        $ha->incr('counter');
        $ha->incrby('counter', 2);
        my $counter = $ha->get('counter');

        # Scoping works similarly to lexical scoping in Perl.
        # The shared object is destroyed upon leaving the scope.
        {
            my $ar = MCE::Shared->array(1..9);
            $ar->assign('aa'..'zz');
        }

        Passing a code reference via IPC isn't supported at this time. Perhaps I can add filter support for STORE (set) and FETCH (get) at a later time. I'm currently at a code freeze with MCE 1.830 and MCE::Shared 1.827.

        use MCE::Shared;

        my $obj = MCE::Shared->scalar(0);
        $obj->filter_store_value( \&code );
        $obj->filter_fetch_value( \&code );

        Regards, Mario

Re: Why won't this Deadlock?
by Marshall (Canon) on Jul 11, 2017 at 05:48 UTC
    Your code looks pretty bizarre to me.
    Can you explain in more detail what you want to accomplish?
      Hi Marshall,

      Being short of time, I'm thinking of a shade of mod_perl on Steroids.

      This example is a synchronization primitive, called a promise.
      Or at least it looks like that.

      I've got some more posts coming...

      Were you able to run the example?
      Did it deadlock for you?

      Cheers,
      Jambo

        Update: Changed max_age from 3 seconds to 1 hour. Was missed before posting.

        Hi Jambo Hamon,

        The following is an FCGI::ProcManager + MCE::Shared demonstration. One can make use of MCE::Shared and have a fast cache. Perhaps a nosql-like object is handy as well for session data during multi-page HTTP requests.

        The MCE::Shared::Cache module is a hybrid LRU-plain implementation.

        The MCE::Shared::Minidb nosql-like module is Redis-like for the API.
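        To illustrate just the LRU half of that hybrid, here is a toy pure-Perl cache. It is an illustration of the eviction behavior only; ToyLRU is a made-up package name, and MCE::Shared::Cache's actual implementation uses far cheaper bookkeeping than a grep over the recency list.

```perl
use strict;
use warnings;

package ToyLRU;

# Toy LRU cache: a hash for storage plus an array for recency order,
# with the least recently used key at the front of the array.
sub new {
    my ( $class, $max ) = @_;
    return bless { max => $max, data => {}, order => [] }, $class;
}

sub _touch {    # move $key to the most-recently-used end
    my ( $self, $key ) = @_;
    @{ $self->{order} } = grep { $_ ne $key } @{ $self->{order} };
    push @{ $self->{order} }, $key;
}

sub set {
    my ( $self, $key, $val ) = @_;
    $self->{data}{$key} = $val;
    $self->_touch($key);
    if ( @{ $self->{order} } > $self->{max} ) {
        my $evict = shift @{ $self->{order} };    # least recently used
        delete $self->{data}{$evict};
    }
    return $val;
}

sub get {
    my ( $self, $key ) = @_;
    return undef unless exists $self->{data}{$key};
    $self->_touch($key);    # a read also refreshes recency
    return $self->{data}{$key};
}

package main;

my $c = ToyLRU->new(2);
$c->set( a => 1 );
$c->set( b => 2 );
$c->get('a');          # 'a' is now most recently used
$c->set( c => 3 );     # evicts 'b', the least recently used
print defined $c->get('b') ? "b alive\n" : "b evicted\n";
```

        The point of the sketch is only the policy: reads refresh recency, and inserting past max_keys evicts from the cold end.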

        #!/usr/bin/perl

        # http://127.0.0.1/cgi-bin/test_shared.fcgi
        # http://127.0.0.1/cgi-bin/test_shared.fcgi?param1=foo&param2=baz

        use strict;
        use warnings;

        use MCE::Shared;
        use CGI::Fast;
        use FCGI::ProcManager;

        # construct any shared objects and mutexes (optional)
        my $cache = MCE::Shared->cache( max_keys => 500, max_age => 3600 );
        my $count = MCE::Shared->scalar( 0 );
        my $nosql = MCE::Shared->minidb();

        # launch the FCGI manager next
        my $proc_manager = FCGI::ProcManager->new( { n_processes => 4 } );
        $proc_manager->pm_manage();

        # calling init enables parallel IPC for data flow;
        # each worker is assigned 1 of 12 channels
        MCE::Shared->init();

        # at last, the loop
        while ( my $query = CGI::Fast->new() ) {
            $proc_manager->pm_pre_dispatch();

            print "Content-type: text/html\r\n\r\n";
            print "<hr>\n";
            print "$_ = $ENV{$_}<br>\n" foreach sort keys %ENV;
            print "<hr>\n";

            my %params;
            foreach ( sort $query->param() ) {
                $params{$_} = $query->param($_);
                print $_, " = ", $params{$_}, "<br>\n";
            }
            print "<hr>\n";

            print "$$: ", $count->incr(), "<br>\n";

            my $val = $cache->get('foo') // do {
                $cache->set( foo => 'bar' . $count->get() );
            };
            print "$$: ", $val, "<br>\n";

            $proc_manager->pm_post_dispatch();
        }

        For maximum performance, ensure Perl has Sereal::Encoder/Decoder 3.015+ installed. IO::FDPass 1.2+ is beneficial if you want to construct a shared queue.

        MCE::Shared::Cache and MCE::Shared::Minidb were both written to run with low memory consumption and maximum performance. For example, MCE::Shared::Cache makes use of dualvar to hold the expiration time alongside the key internally. Only enable what you want: for maximum performance, do not enable max_age if it is not needed.
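        The dualvar trick mentioned above can be seen with core Scalar::Util: one scalar that is a number in numeric context and a string in string context. The key/expiration pairing below is a sketch of the idea only, not MCE::Shared::Cache's actual internal layout.

```perl
use strict;
use warnings;
use Scalar::Util qw(dualvar);

# One scalar, two faces: numeric expiration time and string key.
my $expires = time() + 3600;
my $entry   = dualvar( $expires, 'foo' );

print "key: $entry\n";                  # string context  -> "foo"
print "expires: ", $entry + 0, "\n";    # numeric context -> epoch seconds
```

        Storing both values in one scalar halves the per-entry bookkeeping compared with keeping a parallel expiration hash.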

        The OO interface for shared objects saves you from having to handle mutexes at the application level, unless of course you want to wrap a mutex (enter) around multiple shared actions.

        Regards, Mario