PerlMonks  

Efficient IPC

by Jeppe (Monk)
on Jun 24, 2004 at 09:06 UTC ( [id://369269] )

Jeppe has asked for the wisdom of the Perl Monks concerning the following question:

Hello fellow perl followers!

I have worked on this problem for a while and tried a few modules and solutions:

I need to have one process write to several message queues and then have one reader for each message queue - and it needs to be efficient and scalable.

Basically, the writer process inserts credit card transactions into the database and then forwards the row id to several message queues - each reader reads from one of those queues and processes the transactions further. Some of the readers are a bit slower than the writer, so when we process files there is a bit of a buildup in the message queues every hour - which is why the solution must be scalable.

What I've tried so far:

- IPC::Shareable - very slow.
- IPC::ShareLite + Storable - doesn't scale well enough.
- A self-made database solution - I maintained a table where I would write a name and a transaction id, and the readers would get and delete rows from that table.

The self-made database solution actually worked the best - but there are bugs in the DBD::DB2 driver that are triggered by this code, and believe me, I've tried to get around the problem!
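
Roughly, the table-based approach looks like this (a simplified sketch only - table and column names and the connection details are made up, not the actual code):

    use strict;
    use warnings;
    use DBI;

    # Sketch only: DSN, credentials and the txn_queue table are illustrative.
    my $dbh = DBI->connect('dbi:DB2:SAMPLE', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    # Writer: after inserting the transaction, enqueue its row id once per reader.
    sub enqueue {
        my ($txn_id, @readers) = @_;
        my $sth = $dbh->prepare(
            'INSERT INTO txn_queue (reader, txn_id) VALUES (?, ?)');
        $sth->execute($_, $txn_id) for @readers;
        $dbh->commit;
    }

    # Reader: fetch the ids queued under its own name, then delete them.
    sub dequeue {
        my ($reader) = @_;
        my $ids = $dbh->selectcol_arrayref(
            'SELECT txn_id FROM txn_queue WHERE reader = ?', undef, $reader);
        $dbh->do('DELETE FROM txn_queue WHERE reader = ? AND txn_id = ?',
                 undef, $reader, $_) for @$ids;
        $dbh->commit;
        return @$ids;
    }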

So - now I'm looking for a new solution, preferably a free one. I'm considering sockets, but I understand I will run into problems because of the asynchronous nature of my processing? How about the Berkeley DB data store - would that work?

It would make me happy if it is possible to make the solution reboot-safe. Also, I'm currently on perl 5.6.1 and would prefer not to upgrade - not just for this, anyhow.

Replies are listed 'Best First'.
Re: Efficient IPC
by Abigail-II (Bishop) on Jun 24, 2004 at 09:40 UTC
    I'd also go for a database solution. If DBD::DB2 contains (unfixable?) bugs, you might want to go with another database driver. I'm not MySQL's biggest fan, but for this task - synchronization - it looks like it can provide an adequate solution. Of course, the ideal solution would be for you to fix the bugs in DBD::DB2. ;-)

    Of course, many other solutions are possible, either written from scratch, or piggybacking on something else (email for instance).

    Abigail

      Well - my customer demands DB2 and pays us a good deal of money, so DB2 it is. But I just realized that I might want to use a separate dbh for my DB-based solution, as sketched below. Hmm. That might work.

      Funny how just asking a question sometimes makes you come up with a workable solution!
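
      A sketch of the separate-handle idea (connection details, table names and values are placeholders, not real code): the transaction insert and the queue insert never share a handle, so statements and commits on one cannot interfere with the other.

          use strict;
          use warnings;
          use DBI;

          # Two independent connections to the same database (placeholder DSN).
          my %attr = ( RaiseError => 1, AutoCommit => 0 );
          my $dbh_main  = DBI->connect('dbi:DB2:SAMPLE', 'user', 'pass', \%attr);
          my $dbh_queue = DBI->connect('dbi:DB2:SAMPLE', 'user', 'pass', \%attr);

          # Example values standing in for the real transaction data.
          my ($card, $amount, $txn_id) = ('4111111111111111', 10.00, 42);

          # The transaction insert goes through one handle...
          $dbh_main->do('INSERT INTO transactions (card, amount) VALUES (?, ?)',
                        undef, $card, $amount);
          $dbh_main->commit;

          # ...and the queue insert through the other.
          $dbh_queue->do('INSERT INTO txn_queue (reader, txn_id) VALUES (?, ?)',
                         undef, 'reader1', $txn_id);
          $dbh_queue->commit;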

Re: Efficient IPC
by borisz (Canon) on Jun 24, 2004 at 09:33 UTC
    I have done a similar thing with Postgres (DBD::Pg) to serialize my data with lots of clients. That worked well.
    Boris
Re: Efficient IPC
by perrin (Chancellor) on Jun 24, 2004 at 13:59 UTC
    BerkeleyDB is very scalable and much faster than IPC::Shareable. MySQL or PostgreSQL would be similar. You could also look at Spread and Spread::Queue if you want support on multiple machines.
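
    For the BerkeleyDB route, one possibility is the module's Queue access method (fixed-length records consumed in FIFO order). A rough sketch only - the environment directory, record length and handler are invented, and a transactional environment (DB_INIT_TXN/DB_INIT_LOG) would be needed for real reboot-safety:

        use strict;
        use warnings;
        use BerkeleyDB;

        # Shared environment so writer and reader processes see the same queue.
        # The -Home directory is assumed to exist already.
        my $env = BerkeleyDB::Env->new(
            -Home  => '/var/spool/txn_queue',
            -Flags => DB_CREATE | DB_INIT_MPOOL | DB_INIT_CDB,
        ) or die "env: $BerkeleyDB::Error";

        my $queue = BerkeleyDB::Queue->new(
            -Filename => 'reader1.db',
            -Env      => $env,
            -Flags    => DB_CREATE,
            -Len      => 32,          # fixed record length: enough for a row id
            -Pad      => ' ',
        ) or die "queue: $BerkeleyDB::Error";

        # Writer side: append a row id to the tail of the queue.
        my $row_id = 12345;           # example value for the database row id
        my $recno  = 0;               # set by DB_APPEND to the new record number
        $queue->db_put($recno, $row_id, DB_APPEND) == 0
            or die "db_put: $BerkeleyDB::Error";

        # Reader side: consume the oldest record, if any.
        my ($key, $value) = (0, '');
        if ($queue->db_get($key, $value, DB_CONSUME) == 0) {
            $value =~ s/\s+$//;       # strip the fixed-length padding
            process_transaction($value);
        }

        # Hypothetical handler standing in for the real processing code.
        sub process_transaction { my ($id) = @_; print "would process row $id\n" }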
Re: Efficient IPC
by merlyn (Sage) on Jun 24, 2004 at 15:37 UTC
      That looks very promising! Thanks a lot, I'll make sure to post an update when I've tested it.
        It turns out that DBM::Deep is too slow for my usage - I have to turn on autoflush and locking in order to use it for IPC, and that kills the speed.

        So, I'm looking into Spread instead.

Re: Efficient IPC
by McMahon (Chaplain) on Jun 24, 2004 at 15:57 UTC
    OK, this is unbelievably low-tech, but I've seen it done successfully. It's fast, reboot-safe, testable, and configurable:

    Build a file-based queue system.

    OK, stop laughing now. I'm serious.

    For each writer transaction, have the writer put records for the readers into a file ("or die", of course); or have a single file for each reader; whatever's most convenient for your application. The readers can retrieve their own records at their leisure.

    Furthermore, it takes very little work to configure the writer on the fly - have it check a config file every few seconds, and you can add and remove readers at will.
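
    A bare-bones sketch of the one-file-per-reader variant (spool directory and record format invented for illustration): the writer appends one row id per line under an exclusive lock; each reader locks its own file, slurps the ids, and truncates it.

        use strict;
        use warnings;
        use Fcntl qw(:flock);

        my $spool = '/var/spool/txn_queue';    # invented spool directory

        # Writer: append one row id per line to each reader's file.
        sub enqueue {
            my ($txn_id, @readers) = @_;
            for my $reader (@readers) {
                open my $fh, '>>', "$spool/$reader.queue"
                    or die "open $reader: $!";
                flock $fh, LOCK_EX or die "lock $reader: $!";
                print {$fh} "$txn_id\n" or die "write $reader: $!";
                close $fh or die "close $reader: $!";
            }
        }

        # Reader: lock its file, read everything queued so far, then
        # empty the file so the writer starts appending from scratch.
        sub dequeue {
            my ($reader) = @_;
            my $file = "$spool/$reader.queue";
            open my $fh, '+<', $file or return ();    # nothing queued yet
            flock $fh, LOCK_EX or die "lock $reader: $!";
            chomp(my @ids = <$fh>);
            truncate $fh, 0 or die "truncate $reader: $!";
            close $fh;                                # releases the lock
            return @ids;
        }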
