> True ... but Hadoop scales linearly, meaning what used to take multiple hours or days to run now only takes a few hours, maybe even a few minutes.
So does the server/clients scheme. The difference is in the level of control.
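For a sense of what "scales linearly" buys you, here is a back-of-envelope sketch. The 48-hour baseline and the node counts are made up for illustration, and it assumes perfect linear speedup with no coordination overhead, which no real cluster achieves:

```perl
#!/usr/bin/perl
# Ideal linear scaling: N nodes cut the wall-clock time to 1/N.
# The 48-hour single-node baseline is an assumption, purely for illustration.
use strict;
use warnings;

my $single_node_hours = 48;
for my $nodes ( 1, 10, 100, 1000 ) {
    my $minutes = $single_node_hours * 60 / $nodes;
    printf "%5d nodes: %8.1f minutes\n", $nodes, $minutes;
}
```

On those assumptions a two-day run drops to roughly half an hour on 100 nodes and a few minutes on 1000, which is where the "maybe even a few minutes" comes from.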
> Such termination becomes trivial.
For some types of processing. For other types, the cost of throwing away the results of a job when it is 99% complete and starting over can be very high.
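To put a number on that cost, a sketch with entirely hypothetical figures; it also assumes the work could be checkpointed at all, which is exactly the kind of control at issue above:

```perl
#!/usr/bin/perl
# How much work gets discarded when a long job is terminated at 99% complete?
# All figures here are hypothetical, for illustration only.
use strict;
use warnings;
use POSIX 'fmod';

my $job_hours        = 24;      # assumed total runtime of the job
my $killed_at        = 0.99;    # fraction complete when it is terminated
my $checkpoint_hours = 1;       # assumed interval between checkpoints

my $done = $job_hours * $killed_at;    # hours of work already finished

printf "start over from scratch:     %5.2f hours discarded\n", $done;
printf "resume from last checkpoint: %5.2f hours discarded\n",
    fmod( $done, $checkpoint_hours );
```

Losing 23.76 hours of a 24-hour job because the scheduler reclaimed the node is a very different proposition from losing the three-quarters of an hour since the last checkpoint.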
> I do not know how familiar you are with Hadoop/cloud computing.
Not so much. But it isn't so different from the stuff I was doing 15 years ago on a server farm.