in reply to Re: Perl code to a multi-processor slave card: Total mPower 2
in thread Perl code to a multi-processor slave card: Total mPower 2

The software that I'd like to run spits out data manipulations on a single very large data set. (Well, alright, in the scale of things 70M isn't that large, but having to keep multiple instances of a 70M structure?) Right now it runs as a single process on a single processor, but I'm hoping to speed the whole mess up by splitting the component parts across processes (and thus processors, by loading individual process binaries onto each processor). For that to work, though, they'd all need to act on the same data set in memory.
Having multiple copies would mean somehow keeping them consistent, and would take a hell of a lot more memory than that card has on it.
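For what it's worth, one way to keep a single copy in memory would be System V shared memory, which Perl exposes through the core IPC::SysV module. This is only a sketch under a big assumption: that the card's OS supports SysV IPC at all (it may well not), and the segment size and packed-data format here are placeholders.

```perl
use strict;
use warnings;
use IPC::SysV qw(IPC_PRIVATE IPC_CREAT);

# Ask the kernel for one shared segment big enough for the
# ~70M structure; $id is what other processes would use to
# attach to the same memory.
my $size = 70 * 1024 * 1024;
my $id   = shmget(IPC_PRIVATE, $size, IPC_CREAT | 0600)
    or die "shmget failed: $!";

# Any process holding $id reads and writes the same bytes:
shmwrite($id, "some packed data", 0, 16) or die "shmwrite: $!";
my $buf;
shmread($id, $buf, 0, 16) or die "shmread: $!";
```

The catch is that you'd have to pack/unpack the structure into a flat byte region yourself, and arbitrate access (semaphores, also in IPC::SysV) between the processes.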

Communicating between processes using TCP/UDP would certainly work, and is a good idea. I will investigate this further.
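Just to make that concrete, here's roughly what the TCP side might look like with the core IO::Socket::INET module; the port number and the line-per-request protocol are placeholders I've made up, not anything the card requires.

```perl
use strict;
use warnings;
use IO::Socket::INET;

# One process owns the data set and listens; the workers on the
# other processors connect and ask it for pieces of work.
my $server = IO::Socket::INET->new(
    LocalPort => 9000,    # assumed port
    Listen    => 5,
    Reuse     => 1,
) or die "listen failed: $!";

while ( my $client = $server->accept ) {
    my $request = <$client>;          # one request per line
    print $client "result for: $request";
    close $client;
}
```

A worker would then connect with `IO::Socket::INET->new(PeerAddr => $host, PeerPort => 9000)` and read/write the handle like any filehandle. Whether the per-request round trips eat the gains from parallelising is the thing to measure.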
I'm not sure what you mean by 'We used stream'. Details?

As for speed: keep in mind, this is a personal project, something I'm doing for "fun". I'm not so fussed about raw blazing speed; I'd just like to get it working on a platform somewhat faster than the one it currently runs on (a 200MHz Indy).

-- Alexander Widdlemouse undid his bellybutton and his bum dropped off --