Is this too complicated for you?
#! perl -slw
use strict;
use threads;
use threads::shared::Scalar;

my $shared = threads::shared::Scalar->new;

for ( 1 .. 10 ) {
    async {
        sleep 1 + rand 3;
        $shared->value .= ":$_";
    };
}
$_->join for threads->list;

print $shared->value;

__END__
C:\test>579015
:1:2:3:5:6:9:4:7:8:10

C:\test>579015
:1:2:6:9:3:5:7:10:4:8

C:\test>579015
:4:5:6:7:8:9:3:1:2:10
Update: A slightly simpler version:
#! perl -slw
use strict;
use threads::shared::Scalar;

our $T ||= 10;
our $N ||= 1000;

my $shared = threads::shared::Scalar->new;

for ( 1 .. $T ) {
    async {
        for ( 1 .. $N ) {
            $shared->value++;
        }
    };
}
waitall;

print $shared->value;

__END__
C:\test>sharedScalar.plt -T=1000 -N=1000
1000000

C:\test>sharedScalar.plt -T=500 -N=1000
500000

C:\test>sharedScalar.plt -T=500 -N=12345
6172500

C:\test>sharedScalar.plt -T=123 -N=10
1230
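For anyone who doesn't have threads::shared::Scalar handy (it appears to be a wrapper module, not part of the core distribution), here is a sketch of the same counting test built only from the stock threads and threads::shared modules; the explicit lock() is what the wrapper's ->value accessor is standing in for:

```perl
#! perl -slw
# Same increment test using only core modules: each of $T threads
# bumps a shared counter $N times; with the lock, no increments are
# lost and the final value is exactly $T * $N.
use strict;
use threads;
use threads::shared;

our $T ||= 10;      # number of threads   (-T=nn on the command line)
our $N ||= 1000;    # increments per thread (-N=nn)

my $count : shared = 0;

for ( 1 .. $T ) {
    async {
        for ( 1 .. $N ) {
            lock $count;    # without this, ++ is a read-modify-write race
            ++$count;
        }
    };
}
$_->join for threads->list;

print $count;
```

Drop the lock() line and rerun it a few times if you want to see lost updates in action; the printed total will usually fall short of $T * $N.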
In reply to Re^5: Could there be a ThreadedMapReduce (instead of DistributedMapReduce)?
by BrowserUk
in thread Could there be ThreadedMapReduce (and/or ForkedMapReduce) instead of DistributedMapReduce?
by tphyahoo