in reply to Use of shared memory to persist data between child processes
My approach to shared memory is to avoid it for as long as possible; the locking and resource issues are a nightmare to me.
I would go one of three ways: via a database that all slaves write to and the master reads from (if that is necessary at all); via several differently named files, one per slave, which the master collects once they are completely written, renames, and then reads; or via TCP connections that the slaves open to the master and push their data through.
These mechanisms are most likely slower than shared memory, but once you've killed both the master process and the slaves, you can inspect the files or the database to see what went wrong, and you know that erasing the files (or the database table) returns your system to a fresh state.
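For the file-per-slave variant, here is a minimal sketch of how it might look in Perl. The spool directory, the `slave-N.done` naming, and the payload are all made up for illustration; the point is only that each slave writes to a temporary file and renames it into place when it is complete, so the master never picks up a half-written file.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my $spool = "/tmp/results";          # assumed spool directory (made up)
mkdir $spool unless -d $spool;

my @pids;
for my $slave (1 .. 3) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {                 # child ("slave")
        # write into a temp file in the same directory ...
        my ($fh, $tmp) = tempfile(DIR => $spool, SUFFIX => '.tmp',
                                  UNLINK => 0);
        print {$fh} "result from slave $slave\n";
        close $fh or die "close: $!";
        # ... then rename it; the master only ever looks at *.done files
        rename $tmp, "$spool/slave-$slave.done" or die "rename: $!";
        exit 0;
    }
    push @pids, $pid;                # parent ("master") remembers the child
}

waitpid $_, 0 for @pids;             # wait until all slaves have finished

# master collects only the completely written files
for my $file (glob "$spool/*.done") {
    open my $fh, '<', $file or die "open $file: $!";
    print <$fh>;
    close $fh;
    unlink $file;                    # erase to get back to a fresh state
}
```

The rename is the important part: within one filesystem it is atomic, so the master either sees a finished file or nothing at all, and there is no locking to get wrong.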
perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The
$d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider
($c = $d->accept())->get_request(); $c->send_response( new #in the
HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web