in reply to distributed processing

In the easy case where data distribution is not a problem for you, because all machines access the data on the same NFS or NAS share, simply running the program on the different machines via runN and ssh is probably the easiest solution. It won't give you a fancy job-status overview, throughput charts, automatic load balancing, or job restarts, but on the other hand it's just a script, plus the one-time effort of setting up passwordless SSH keys to the other machines.
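Just to sketch what I mean, here is a minimal fan-out loop (host names, the `/shared/data` mount point and the `process_one` command are placeholders for your own setup; DRY_RUN=1 just prints what would run):

```shell
#!/bin/sh
# Fan a list of input chunks out across several machines over ssh.
# Assumes every host sees the same data on a shared NFS/NAS mount
# and that passwordless keys are already set up.
HOSTS="node1 node2 node3"      # placeholder host names
DRY_RUN=${DRY_RUN:-1}

i=0
for f in chunk_a chunk_b chunk_c chunk_d; do
    # round-robin the chunks over the hosts
    set -- $HOSTS
    shift $(( i % $# ))
    host=$1
    i=$(( i + 1 ))
    cmd="process_one /shared/data/$f"   # placeholder worker command
    if [ "$DRY_RUN" = 1 ]; then
        echo "$host: $cmd"
    else
        ssh "$host" "$cmd" &            # dispatch in the background
    fi
done
wait   # block until all remote jobs have finished
```

No restarts, no balancing, but for embarrassingly parallel jobs on a shared filesystem it's often all you need.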

Alternatively, you could look at what bashreduce does and consider how to adapt it to your case, or look into the Perl modules for GRID or SSH, or even a job queue such as Gearman or TheSchwartz.