in reply to Fast queue logic

You need to flesh out your requirements a bit. You list 100-200 'files/sec', but I don't see where those files come from. Are they requested by name, or is their content part of the request? Are these all new connections, or just a few busy clients? What protocol is this?

That is a pretty rapid request rate. You will need to handle several requests per timeslice on Linux, and that is only considering CPU usage. Typically, I/O rate is the limiting factor in client/server applications.

How much hardware can you throw at this? How much do you have now? Your data rate can be met by dedicated apache servers, is that what you want?

After Compline,
Zaxo

Re: Re: Fast queue logic
by Marcello (Hermit) on Dec 06, 2002 at 11:10 UTC
    Time for some more explanation:

    Clients connect using TCP. The requests from clients are currently stored in files for processing, which is why I said 100-200 files/sec. This is the fastest rate at which requests can be stored. I would like to improve this by using a database with INSERTs, which is considerably faster. But for processing these queued requests (whether in files or a database), the filesystem is much faster again. So I actually need to find the best of both worlds.
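If the queue moves into a database, the gain comes from batching many INSERTs under a single commit instead of syncing one file per request. Here is a minimal sketch of that idea, assuming DBD::SQLite is available; the `queue` table, its columns, and the sample commands are made-up names, not anything from the original post:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Open an in-memory SQLite database with autocommit off, so a batch of
# INSERTs shares one transaction (and one disk sync, for a file-backed DB).
my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                       { RaiseError => 1, AutoCommit => 0 });

$dbh->do(q{
    CREATE TABLE IF NOT EXISTS queue (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        command TEXT NOT NULL,
        params  TEXT
    )
});

my $ins = $dbh->prepare("INSERT INTO queue (command, params) VALUES (?, ?)");

# Enqueue a batch of requests; the single commit amortizes the write cost.
for my $req (["PRINT", "doc1"], ["MAIL", "user\@example.com"]) {
    $ins->execute(@$req);
}
$dbh->commit;

# The worker side drains the queue in insertion order.
my $rows = $dbh->selectall_arrayref(
    "SELECT id, command, params FROM queue ORDER BY id");
print scalar(@$rows), " queued\n";
```

With a file-backed database the same pattern applies; the larger the batch between commits, the cheaper each queued request becomes, at the cost of losing uncommitted requests on a crash.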

    If you need more information, I will be happy to provide it.

    Regards,
    Marcel
      Maybe I'm missing something too, but how are these "requests" arriving, and what subsequent processing happens? I.e., how are the results returned, etc.?

      Something like SOAP::Lite connected to HTTP::Daemon, or Apache with some kind of tied hash, would seem to be the obvious choice, but without a full description of the entire processing cycle it's kind of difficult to say "here's the definitive answer...".

      rdfield

        Hi,

        It's really simple and not important for the question. Clients send a request using a fixed protocol:
        SEND:COMMAND,PARAMS
        The (Perl) server returns an OK and processes the request ASAP. Therefore, I want to accept commands as fast as possible (storing them as files or in a database), and then process them (reading from the files or the database).

        With files, the bottleneck is storing the commands in separate files; with a database, the bottleneck is processing the commands (SELECTs).
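Given that protocol, the accept path can be kept very cheap: parse the line, queue the record, and ack immediately, deferring the real work. A small sketch of that split; the parsing regex and field names are assumptions based only on the SEND:COMMAND,PARAMS line above:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split one wire-format line into a queue record; returns undef on a
# malformed request so the server can answer ERR instead of OK.
sub parse_request {
    my ($line) = @_;
    return undef unless defined $line && $line =~ /^SEND:([^,]+),(.*)$/;
    return { command => $1, params => $2 };
}

my @queue;    # in-memory stand-in for the file/database queue

for my $line ("SEND:PRINT,doc1\n", "garbage\n") {
    my $req = parse_request($line);
    if ($req) {
        push @queue, $req;        # enqueue first...
        print "OK $req->{command}\n";   # ...then ack right away
    } else {
        print "ERR\n";
    }
}
# prints:
#   OK PRINT
#   ERR
```

In the real server the loop body would sit behind an accepted socket (e.g. via IO::Socket::INET), and `push @queue` would be the file write or database INSERT; the point is that the client gets its OK before any expensive processing starts.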

        Regards,
        Marcel