in reply to Re: Re: Re: Re: Clustered Perl Applications?
in thread Clustered Perl Applications?

Don't worry about the acronym "REST" and just think about calling URIs on remote machines with LWP (or maybe HTTP::GHTTP for speed). You pass in some data, which could just be a big chunk of Storable if you don't want the overhead of XML, and get back some data, which again could be done with Storable. You implement the remote calls by writing mod_perl handlers (or whatever you like that handles HTTP requests and is fast).
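
To make that concrete, here is a minimal sketch along those lines: a client that freezes a Perl structure with Storable, POSTs it, and thaws the reply, plus a mod_perl 1.x-style handler on the other end. The host name, URI, and the do_something_useful() routine are invented for illustration; the rest is plain LWP and Storable.

    # --- client.pl: freeze a structure, POST it, thaw the reply ---
    use strict;
    use warnings;
    use LWP::UserAgent;
    use Storable qw(nfreeze thaw);

    my $ua   = LWP::UserAgent->new;
    my $data = { testid => 'testid1', payload => 'some chunk of work' };

    my $res = $ua->post(
        'http://worker1.example.com/process',        # hypothetical worker URI
        Content_Type => 'application/octet-stream',
        Content      => nfreeze($data),
    );
    die $res->status_line unless $res->is_success;

    my $result = thaw($res->content);

    # --- My/Process.pm: mod_perl 1.x handler that thaws the request body,
    # --- does the work, and sends back another Storable blob ---
    package My::Process;
    use strict;
    use warnings;
    use Apache::Constants qw(OK);
    use Storable qw(nfreeze thaw);

    sub handler {
        my $r = shift;
        $r->read(my $body, $r->header_in('Content-Length'));
        my $result = do_something_useful(thaw($body));   # your processing step
        $r->send_http_header('application/octet-stream');
        print nfreeze($result);
        return OK;
    }
    1;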

However, I don't really understand why you're passing around lots of data in the first place. I would implement this sort of thing by having all data in/out go through MySQL tables, and just use these remote calls to trigger operations, not to pass data.
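
As a sketch of that alternative (the table, column, and host names here are invented), the controller would only send a tiny trigger over HTTP, and the worker would pull its input from MySQL itself:

    use strict;
    use warnings;
    use DBI;
    use LWP::UserAgent;

    my $dbh = DBI->connect('dbi:mysql:database=pipeline;host=dbhost',
                           'user', 'password', { RaiseError => 1 });

    # Pick a job that is ready to run; the real data never leaves MySQL.
    my ($job_id) = $dbh->selectrow_array(
        'SELECT id FROM jobs WHERE state = ? LIMIT 1', undef, 'ready');

    # The HTTP call is just a trigger carrying the job id.
    my $ua = LWP::UserAgent->new;
    $ua->post('http://worker1.example.com/run', { job => $job_id });

    # The worker then reads its input rows and writes its output rows
    # back to MySQL, e.g. with selectall_arrayref() and do().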


Re: Re: Re: Re: Re: Re: Clustered Perl Applications?
by sri (Vicar) on Jul 05, 2003 at 23:02 UTC
    I'm passing around lots of data because I have a few stages of processing, and I'm processing a few terabytes on small machines.

    I have to send chunks of structured data of about 10 KB to 1 MB.

    I'm really beginning to like the easy, lightweight idea of REST, even if I have to use POST.
      I still think you could simply fetch the data in chunks from MySQL and store the result there, avoiding the need to pass it around in your control protocol.

      You will have to use POST to pass any significant amount of data. That shouldn't be a problem. The HTTP modules handle POST just fine.

        I would really like to use MySQL directly, but most of the data can be condensed into a tree before sending, and at 1 MB of data it seems worthwhile to compress it that way.

        Example:

        The table:
        testid1, yada1, foo1
        testid1, yada1, foo2
        testid1, yada2, foo3
        testid1, yada1, foo4
        testid1, yada2, foo4

        The structure:
        $data{'testid1'}{'yada1'}{'foo1'}
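
        For what it's worth, that condensing step could look something like this, assuming the rows come from a DBI query against a table with those three columns (the table and column names are only illustrative):

            use strict;
            use warnings;
            use DBI;
            use Storable qw(nfreeze);

            my $dbh = DBI->connect('dbi:mysql:database=pipeline;host=dbhost',
                                   'user', 'password', { RaiseError => 1 });

            my $sth = $dbh->prepare('SELECT testid, yada, foo FROM results');
            $sth->execute;

            # Duplicate keys collapse into a single hash entry, which is
            # where the "compression" over the flat table comes from.
            my %data;
            while (my ($testid, $yada, $foo) = $sth->fetchrow_array) {
                $data{$testid}{$yada}{$foo} = 1;
            }

            # Freeze the tree and it is ready to POST to the next stage.
            my $blob = nfreeze(\%data);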