But this is one of the hard problems because of Turing-Completeness/Halting-Problem. What we'd really like to do is outlaw those expressions in the first place. But if the compiler was smart enough to recognize all of those instances, then there are also programs that would cause the compiler to never finish executing :-(
But again, there is no need for the compiler to try to recognise all those situations. If the EO is defined, the programmer can make the decision, because he will know what the EO is, and therefore know what the effect will be. No guessing. No halting problem need arise.
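For instance (a minimal sketch; modifying a variable twice in one expression like this is exactly the kind of thing Perl currently leaves unspecified):

    use strict;
    use warnings;

    my $i = 0;

    # With a defined left-to-right EO, this is unambiguously 0 * 2 = 0.
    # With undefined EO, a compiler evaluating right-to-left would give
    # 1 * 1 = 1. Define the EO, and the programmer knows which he gets.
    my $x = $i++ * ++$i;

    print "$x\n";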
It's not a complicated or extravagant request:
The alternative is:
But this is one of the hard problems because of Turing-Completeness/Halting-Problem. What we'd really like to do is outlaw those expressions in the first place. But if the compiler was smart enough to recognize all of those instances, then there are also programs that would cause the compiler to never finish executing :-(
Not so. For more information, take a scan at Parallelization of simple loops and Concurrent Programming and Erlang.
I have looked (quite hard) at Haskell, and as Autrijus has shown with Pugs, in the right hands and for the right type of project, it is an amazing language.
However, there are aspects of it--particularly relating to efficiency, and the way it attempts to model the world with mathematical symbolism, pasting over the cracks with (what I consider to be) smoke and mirrors with fancy names when the real world doesn't fit its rather restrictive view--that disturb me and leave me cold.
Erlang is also a functional language, but it embraces the non-conformity of the world to an idealistic viewpoint, and fairly revels in providing simple, well-thought-out ways of dealing with messy concepts like IO and concurrency.
Not only does it not need to idealise the world, preferring to take a very pragmatic approach to the messiness of reality--doesn't that remind you of the LW quote about the messiness of the problem space, and the backronym meaning of the P in Perl?--it does so in a way that is both concise in construction and efficient in use.
There has just been the announcement that AMD has beaten Intel to the punch on shipping multi-core, 64-bit processors. To make efficient use of these beasts, it becomes necessary to embrace parallelisation within the process space--ie. threading. Making good use of threads will be paramount in making effective use of multi-core processors.
To do this requires not only that the language give access to threads, but that it is written with their use in mind. The mono-directional IPC available with fork and COW will not be sufficient to realise the full benefits of multiple processors for anything beyond the most simplistic client-server arrangements. Equally, the duplicate-everything-shared approach represented by iThreads is simply unworkable for anything other than the most trivial of applications.
As was seen from the pthreads experiment, users are not very good at thinking non-deterministically, and generally make a hash of trying to use semaphores to control access to shared resources.
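To make that concrete, here is a hypothetical sketch (using Perl's ithreads; not anyone's real code) of the classic way such attempts go wrong--two threads taking the same pair of locks in opposite orders:

    use threads;
    use threads::shared;

    my $lockA :shared;
    my $lockB :shared;

    # Thread 1 takes A then B; thread 2 takes B then A. If each
    # acquires its first lock before the other's second, both block
    # forever. Running this will hang: a deadlock that no compiler
    # warns about and no test reliably catches.
    my $t1 = threads->create( sub { lock $lockA; sleep 1; lock $lockB; } );
    my $t2 = threads->create( sub { lock $lockB; sleep 1; lock $lockA; } );
    $_->join for $t1, $t2;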
What is needed is a way to allow separate parts of a program to run in parallel until they reach a point of dependency, when they must synchronise. That is most easily indicated by referring to the results of two threads of execution within the same expression. If the execution order is defined, then there is no ambiguity about what can be run in parallel, and when the threads must synchronise.
In the following crude example:
    for my $i ( 1 .. $n ) {
        $results[ $i ] = $db->getLocalData() * $lwp->getRemoteData();
    }
The $db object and the $lwp object can both run in parallel. The execution-order dependency is on the results, not on the method calls.
That is too simple an example to really demonstrate the need for a defined EO, but it illustrates that using compound expressions to control and synchronise parallelised operations is both workable and easy. If you have to utilise separate statements in order to ensure EO, then you lose the simplicity of the compound statement as a means of indicating which operations can be overlapped.
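To make the intended overlap concrete, here is roughly what a runtime would be free to do behind the scenes (a sketch using Perl's ithreads with the hypothetical $db/$lwp objects from above, and ignoring the fact that real iThreads clone objects rather than share them):

    use threads;

    for my $i ( 1 .. $n ) {
        # The two independent calls are started in parallel ...
        my $local  = threads->create( sub { $db->getLocalData()   } );
        my $remote = threads->create( sub { $lwp->getRemoteData() } );

        # ... and the multiplication is the synchronisation point:
        # execution cannot proceed until both results are in.
        $results[ $i ] = $local->join() * $remote->join();
    }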
The alternative, in Haskell terms, would I suppose be to make the results of the db and lwp calls lazily evaluated, and only derive them when you attempt to combine those results. The problem with that is that you can no longer arrange for the first set of results to be available immediately. I'll try and clarify that.
If the db and lwp objects are initialised with the appropriate information when they are instantiated, then they can both go out to their respective resources and (start to) pull the first set of results in preparation for their being required. By the time the main thread reaches the point where those results are required, they may already be available locally. Once those results have been returned to the caller, the objects can immediately issue their requests for the next batch to their respective external sources, whilst the main thread gets on with using the first set. That way, when the next set of results is required, they are potentially already available locally.
Effectively, the two objects become concurrently runnable coroutines of the main thread. This type of transparent parallelism is easy to code and easy to understand, because the simple act of placing the requests for the results within the same statement is all that is required to indicate that parallelism is allowable--IF the EO is defined.
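Here is a sketch of that prefetching arrangement using a background thread and Thread::Queue (make_prefetcher and its fetch-callback argument are inventions for illustration; the callback returning undef signals end-of-data, and limit/end need Thread::Queue 3.01 or later):

    use threads;
    use Thread::Queue;

    # Run a worker that keeps one batch fetched ahead of the consumer;
    # the bounded queue provides all of the synchronisation.
    sub make_prefetcher {
        my( $fetch_batch ) = @_;
        my $q = Thread::Queue->new;
        $q->limit = 1;    # at most one batch prefetched ahead
        threads->create( sub {
            while( defined( my $batch = $fetch_batch->() ) ) {
                $q->enqueue( $batch );   # blocks while the queue is full
            }
            $q->end;
        } )->detach;
        return $q;
    }

    my $dbResults = make_prefetcher( sub { $db->getLocalData() } );

    # By the time the main thread asks, the next batch is often already
    # local; dequeue blocks only if it is not yet available.
    while( defined( my $batch = $dbResults->dequeue ) ) {
        # ... use $batch whilst the worker fetches the next one ...
    }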