in reply to Clustered Perl Applications?

I'm working on a Perl application that should run on multiple small hosts, some for storage and some for number crunching. ... Now I'm searching for a clean and fast solution to interconnect these hosts.

"It depends". What do you mean by "interconnect"? Do you need to coordinate number crunching (say, by handing out subtasks from some central server)?

I've worked on a load-balanced Perl application server that shared heavyweight data via the database tier (Oracle, in this case), with lightweight "event" propagation via sockets between the servers. The lightweight part would correspond to "some sort of home brew binary/ASCII protocol" on your list. It worked fine for us.
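To give a feel for the "lightweight" half: each server just pushed short notification messages to its peers over a plain TCP socket, something like this (a minimal sketch - the host, port and message format here are made up, not our production code):

    # Minimal sketch of the "lightweight event" side: push a one-line
    # notification to a peer app server over TCP. Host, port and message
    # format are made up for illustration.
    use strict;
    use warnings;
    use IO::Socket::INET;

    sub notify_peer {
        my ($host, $event, $payload) = @_;
        my $sock = IO::Socket::INET->new(
            PeerAddr => $host,
            PeerPort => 4242,        # arbitrary port for this example
            Proto    => 'tcp',
            Timeout  => 5,
        ) or return;                 # best effort - the heavyweight data lives in the DB anyway
        print $sock "$event\t$payload\n";
        close $sock;
    }

    notify_peer('app2.example.com', 'CACHE_INVALIDATE', 'customer:1234');

The rule of thumb was: anything big, or anything that must not be lost, goes through the database; the sockets only carry small hints that are cheap to lose.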

But to answer "what are your thoughts on my problem?" we would need to know more about your problem. Can you characterize the nature of the number crunching? (E.g., is the crunching coordinated between servers? At what level of granularity?) The nature of storage? (E.g., are stored computations shared between servers, or is storage write-only?)

Re: Re: Clustered Perl Applications?
by sri (Vicar) on Jul 05, 2003 at 14:56 UTC
    The number crunching is coordinated by a central node.
    It just hands out job IDs; the number crunchers then fetch the data from the appropriate storage server (each storage server holds a range of job IDs), crunch it, and send the results to another storage server, where they are saved for later analysis.
    The storage servers are MySQL and always read/write.
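    In rough pseudo-Perl, a cruncher's loop looks something like this (simplified; the host names, ID ranges and SOAP interface below are placeholders, not our real code):

        # Simplified sketch of one number cruncher: ask the central node
        # for a job id, route to the storage server that owns that id
        # range, fetch the data, crunch it, and save the result elsewhere.
        # Hosts, ranges and SOAP methods are placeholders.
        use strict;
        use warnings;
        use SOAP::Lite;

        my @storage = (
            { min => 1,         max => 999_999,   host => 'storage1.example.com' },
            { min => 1_000_000, max => 1_999_999, host => 'storage2.example.com' },
        );

        sub storage_for {
            my ($job_id) = @_;
            for my $s (@storage) {
                return $s->{host} if $job_id >= $s->{min} && $job_id <= $s->{max};
            }
            die "no storage server holds job id $job_id";
        }

        sub crunch { my ($data) = @_; return length $data }   # stand-in for the real work

        my $central = SOAP::Lite
            ->uri('http://example.com/JobQueue')
            ->proxy('http://central.example.com/soap');

        my $results = SOAP::Lite
            ->uri('http://example.com/Storage')
            ->proxy('http://results.example.com/soap');

        while (my $job_id = $central->next_job_id->result) {
            my $store = SOAP::Lite
                ->uri('http://example.com/Storage')
                ->proxy('http://' . storage_for($job_id) . '/soap');

            my $data = $store->fetch_job($job_id)->result;    # pull input from the server owning this ID range
            $results->save_result($job_id, crunch($data));    # push output to the results server
        }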

      So the number-crunchers are fetching/storing data (using SOAP) from storage servers, which are fetching/storing the data from MySQL servers - or am I misunderstanding?

      Why do you need the intermediaries? Why not have the number-crunchers fetch/store directly to the MySQL servers?
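      By "directly" I mean something like this - the cruncher talking DBI straight to the MySQL boxes, with no service layer in between (a sketch only; schema, host names and credentials are made up):

          # Sketch of the "direct" alternative: the cruncher connects to the
          # MySQL storage servers itself via DBI. Host names, schema and
          # credentials are made up for illustration.
          use strict;
          use warnings;
          use DBI;

          my $job_id = 42;

          my $in = DBI->connect(
              'DBI:mysql:database=jobs;host=storage1.example.com',
              'cruncher', 'secret', { RaiseError => 1 },
          );
          my ($payload) = $in->selectrow_array(
              'SELECT payload FROM jobs WHERE job_id = ?', undef, $job_id,
          );

          my $result = length $payload;    # stand-in for the real crunching

          my $out = DBI->connect(
              'DBI:mysql:database=results;host=storage2.example.com',
              'cruncher', 'secret', { RaiseError => 1 },
          );
          $out->do(
              'INSERT INTO results (job_id, result) VALUES (?, ?)',
              undef, $job_id, $result,
          );

      That cuts out a network hop and a serialization step, at the cost of every cruncher needing MySQL credentials and connectivity to every storage box.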

      (apologies if I'm being dim - rather late here.)

        Please see some posts below. ;)