The trouble with your lockstep game is that each step is dependent upon a single previous step--but most operations in computing have two operands.
With your chain of events (a->b->c->d->e->f), there is no opportunity for parallelisation, but with every two-operand operation, there is such an opportunity.
The thing to remember is that we are talking about Perl 6 and Parrot. At the Parrot level, every Perl-level operation becomes a method call. That means that for every non-simple expression (i.e. any expression that involves anything other than directly referenceable simple scalar operands), there is the opportunity to overlap the execution of the derivation of those operands.
I'll try to clarify that. In these expressions:
    my $scalar = $untied_var1 <op> $untied_var2;
    my $scalar = func1() <op> $untied_var;
    my $scalar = $untied_var <op> func1();
There is no opportunity for parallelisation, because there is nothing to overlap.
However, in these expressions:
    my $scalar = func1() <op> func2();
    my $scalar = $tied_var1 <op> $tied_var2;
    my $scalar = $tied_var <op> func1();
    my $scalar = func1() <op> $tied_var;
    my $scalar = $obj1->method() <op> $obj2->method();
    # and all the other combinations
There is an opportunity for parallelisation.
The processing involved in calculating the operands to <op> could involve a DB access, network IO, file IO, or just a long mathematical calculation that would benefit from using a second CPU, or a floating-point calculation that could be executing within the FPU whilst the main CPU does something else.
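To make the overlap concrete, here is a minimal sketch in Python (used purely for illustration, since the Perl 6/Parrot mechanism discussed here is hypothetical). The names `func1` and `func2` are stand-ins for any two slow, independent operand producers; the sleeps simulate blocking IO:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for two slow operand producers: each might be
# a DB access, network IO, file IO, or a long calculation.
def func1():
    time.sleep(0.2)          # simulate blocking IO
    return 6

def func2():
    time.sleep(0.2)          # simulate blocking IO
    return 7

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(func1)  # both operand calculations start at once...
    f2 = pool.submit(func2)
    result = f1.result() * f2.result()  # ...resync before applying <op>
elapsed = time.monotonic() - start
```

Because the two 0.2s waits overlap, the whole expression takes roughly 0.2s rather than the 0.4s a strict left-to-right evaluation would cost.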
Some of this type of parallelism is already performed by pipelining and hyperthreading architectures, but it is very limited--by locality--to sequential steps within a given routine.
To exploit more of these opportunities, the programmer traditionally has to manually perform the steps required to asynchronously invoke the overlappable routines and then perform the resyncing required to pull the results back together. This is tedious and error-prone. If these steps can be automated, then the potential of the soon-to-be-ubiquitous availability of multi-core, multi-processing architectures can be more easily and reliably utilised.
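The manual steps being described can be sketched like this (Python for illustration; `func1`/`func2` are placeholder operand producers). Note how much ceremony surrounds what is conceptually just `func1() + func2()`:

```python
import threading

def func1():
    return 6

def func2():
    return 7

# The manual async-invoke-and-resync dance: shared result slots,
# explicit thread objects, explicit joins.
results = [None, None]

def run(slot, fn):
    results[slot] = fn()

t1 = threading.Thread(target=run, args=(0, func1))
t2 = threading.Thread(target=run, args=(1, func2))
t1.start(); t2.start()       # asynchronously invoke both operands
t1.join();  t2.join()        # resync: wait for both to complete
total = results[0] + results[1]
```

Every one of those bookkeeping lines is a place for the programmer to get something wrong, which is exactly why automating them is attractive.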
Now imagine that there were some mechanism for indicating that a subroutine or method may take some time to calculate or obtain its result, and could benefit from being overlapped. For now, ignore what that mechanism would be, or under what circumstances it could be used.
The compiler now knows that when it encounters two such subroutines or methods as the sources of operands to a single, non-serialising operation, it can make those calls asynchronously and await their completion before using the results they produce as arguments to the operation in question. The compiler writers can get the mechanism right once, and it will be used whenever and wherever the opportunity arises. That is transparency.
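What the compiler would emit for such an expression can be sketched as a generic helper (again in Python, with a hypothetical name `par_binop`): given an operator and two operand thunks, it runs the thunks concurrently and joins before applying the operator.

```python
import operator
from concurrent.futures import ThreadPoolExecutor

def par_binop(op, lhs, rhs):
    """Evaluate the two operand thunks concurrently, then apply op.

    This is the transform a compiler could apply automatically whenever
    both operands are marked as overlappable.
    """
    with ThreadPoolExecutor(max_workers=2) as pool:
        f_lhs = pool.submit(lhs)   # start both operand calculations...
        f_rhs = pool.submit(rhs)
        return op(f_lhs.result(), f_rhs.result())  # ...resync, then combine

# func1() <op> func2() becomes par_binop(<op>, func1, func2):
value = par_binop(operator.add, lambda: 2 + 3, lambda: 4 * 5)
```

The programmer still writes the plain binary expression; only the compiler sees the desugared form.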
But the compiler can only do this if the programmer can clearly indicate that there are no dependencies--and that requires that EO (execution order) be defined.
In reply to Re^34: Why is the execution order of subexpressions undefined? (magic ruts)
by BrowserUk
in thread Why is the execution order of subexpressions undefined?
by BrowserUk