skx has asked for the wisdom of the Perl Monks concerning the following question:

 I've written a fork()ing server which I'm using very happily.

 However, I've run into a problem: I wish to persist some data between clients. As the fork()ed children cannot modify the variables stored in the parent, I've fudged this by using FreezeThaw to persist the data to a temporary file.

 This works .. mostly .. but there are issues with synchronization which appear to cause the temporary file to get trashed.

 As I see it, I can continue down this route and add locking, or I can solve the problem the right way by creating a shared memory segment in the parent and writing a hash to it from the clients.

 (Each client can write to $persistence{$clientip}, as I know that each client connects only once.)
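 Roughly, the "add locking" route would look something like this (an untested sketch; the file name and helper sub are just placeholders, not the real code):

    use Fcntl qw(:flock);
    use FreezeThaw qw(freeze thaw);

    my $state_file = '/tmp/persist.dat';        # placeholder path

    sub update_state {
        my ($clientip, $data) = @_;

        my $fh;
        unless (open $fh, '+<', $state_file) {
            open $fh, '+>', $state_file or die "Cannot open $state_file: $!";
        }
        flock $fh, LOCK_EX or die "Cannot lock $state_file: $!";

        my $frozen = do { local $/; <$fh> };    # slurp whatever is there
        my ($persist) = (defined $frozen && length $frozen) ? thaw($frozen) : ({});

        $persist->{$clientip} = $data;          # each client touches only its own key

        seek $fh, 0, 0;
        truncate $fh, 0;
        print {$fh} freeze($persist);
        close $fh;                              # closing the handle releases the lock
    }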

 So my questions begin: how can I manage this? Is there a sample somewhere that I can copy?

 I've looked around and I honestly see no real examples of using IPC shared memory under perl.

Steve
---
steve.org.uk

Re: Use of shared memory to persist data between child processes
by Corion (Patriarch) on Oct 09, 2003 at 10:25 UTC

    My approach to shared memory is to avoid it as long as possible, as the locking issues and resource issues are a nightmare to me.

    I would go via one of: a database to which all slaves write and from which the master reads (if at all necessary); several differently-named files, one per slave, which the master collects as soon as they are completely written, renames, and then reads; or TCP connections that the slaves open to the master and use to transfer the data.

    These mechanisms are most likely slower than shared memory, but once you've killed both the master process and the slaves, you can inspect the files or the database to see what went wrong, and you know that if you erase the files (or the database table, respectively), your system is in a fresh state.
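    Of those three, the TCP variant might be as simple as this (an untested sketch; the port is arbitrary, and a real master would multiplex these status connections with IO::Select alongside its normal accept loop):

        use IO::Socket::INET;

        # In the master: listen on a private local port for status reports.
        my $listen = IO::Socket::INET->new(
            LocalAddr => '127.0.0.1',
            LocalPort => 8765,               # arbitrary example port
            Listen    => 5,
            Reuse     => 1,
        ) or die "listen: $!";

        # In a slave: connect, send one line of status, and go back to work.
        my $clientip = '192.168.0.1';        # example value
        my $status   = IO::Socket::INET->new(
            PeerAddr => '127.0.0.1',
            PeerPort => 8765,
        ) or die "connect: $!";
        print $status "$$ active $clientip\n";
        close $status;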

    perl -MHTTP::Daemon -MHTTP::Response -MLWP::Simple -e ' ; # The
    $d = new HTTP::Daemon and fork and getprint $d->url and exit;#spider
    ($c = $d->accept())->get_request(); $c->send_response( new #in the
    HTTP::Response(200,$_,$_,qq(Just another Perl hacker\n))); ' # web

       I'm afraid that I cannot rely upon a database being present, as this is some very simple standalone code (which also runs under Windows).

       The basic code is something that forks to serve HTTP-like requests, and I wish to have a means of determining which requests are "active". As the files served are very large, it's very likely that requests take a significant amount of time to process.

       I think that I could use a separate file for each slave, so that when the child forks it will write to a file "/var/server/child-ip.$$" and the master parent can read these.

       That's a more basic idea than I'd considered, but I see nothing wrong with it, and there are no synchronisation issues to deal with at all.
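       Something like this, perhaps (untested; the directory and the fields stored are just placeholders):

           use Storable qw(nstore retrieve);

           my $dir = '/var/server';                        # placeholder; would differ on Windows

           # In the child, once the request has been parsed:
           sub mark_active {
               my ($clientip, $info) = @_;                 # $info is a hashref of request details
               nstore($info, "$dir/child-$clientip.$$");   # one file per child, nothing shared
           }

           # In the child, when the request is finished:
           sub mark_done {
               my ($clientip) = @_;
               unlink "$dir/child-$clientip.$$";
           }

           # In the parent, to list the currently active requests:
           sub active_requests {
               my %active;
               for my $file (glob "$dir/child-*") {
                   my $info = eval { retrieve($file) };    # the file may vanish mid-read
                   $active{$file} = $info if $info;
               }
               return %active;
           }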

      Steve
      ---
      steve.org.uk

        I'm afraid that I cannot rely upon a database being present as this is some very simple standalone code.

        DBD::SQLite
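
        A minimal sketch of what that might look like (untested; the file and table names are invented, and each child should open its own connection after the fork rather than reusing the parent's handle):

            use DBI;

            my $dbh = DBI->connect('dbi:SQLite:dbname=/tmp/requests.db', '', '',
                                   { RaiseError => 1 });

            eval { $dbh->do('CREATE TABLE active (clientip TEXT, info TEXT)') };  # ignore "already exists"

            # In a child: record this request.
            my ($clientip, $info) = ('192.168.0.1', 'GET /big-file');             # example values
            $dbh->do('INSERT INTO active (clientip, info) VALUES (?, ?)',
                     undef, $clientip, $info);

            # In the parent: list what is currently active.
            my $rows = $dbh->selectall_arrayref('SELECT clientip, info FROM active');
            print "$_->[0]: $_->[1]\n" for @$rows;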

        ----
        I wanted to explore how Perl's closures can be manipulated, and ended up creating an object system by accident.
        -- Schemer

        Note: All code is untested, unless otherwise stated

Re: Use of shared memory to persist data between child processes
by Abigail-II (Bishop) on Oct 09, 2003 at 10:58 UTC
    I've looked around and I honestly see no real examples of using IPC shared memory under perl.
    Where did you look? "man perlipc" has an example of 'give' and 'take' programs, which exchange data using shared memory, just like you want to do between the child and the parent. If you want to exchange hashes, you do need to serialize and deserialize them, but there are several techniques for that (YAML, Data::Dumper, Storable, FreezeThaw).
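
    For what it's worth, a compressed version of that perlipc approach, with Storable doing the (de)serialisation, might look roughly like this (an untested sketch; the segment size is an arbitrary guess, and a real version needs a semaphore or similar lock around the read-modify-write):

        use IPC::SysV qw(IPC_PRIVATE IPC_RMID S_IRUSR S_IWUSR);
        use Storable qw(freeze thaw);

        my $size = 64 * 1024;                           # fixed size; the frozen hash must fit
        my $id   = shmget(IPC_PRIVATE, $size, S_IRUSR | S_IWUSR);
        defined $id or die "shmget: $!";

        # Parent, before forking: seed the segment with an empty hash.
        shmwrite($id, pack('N/a*', freeze({})), 0, $size) or die "shmwrite: $!";

        # Child: read, update its own slot, write back.
        my $clientip = '192.168.0.1';                   # example key
        my $buf;
        shmread($id, $buf, 0, $size) or die "shmread: $!";
        my $hash = thaw(unpack('N/a*', $buf));
        $hash->{$clientip} = 'active';
        shmwrite($id, pack('N/a*', freeze($hash)), 0, $size) or die "shmwrite: $!";

        # Parent, when completely finished: remove the segment.
        shmctl($id, IPC_RMID, 0) or die "shmctl: $!";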

    Abigail

Re: Use of shared memory to persist data between child processes
by delirium (Chaplain) on Oct 09, 2003 at 11:23 UTC
    You could simplify this by using a module that already tackles this problem: Parallel::ForkManager.

    Here is a snippet of code that uses it to have child processes update a hash in the parent:

    #!/usr/bin/perl -w
    use Net::FTP;
    use Parallel::ForkManager;
    use strict;

    my %srvs = ();
    my $pm = new Parallel::ForkManager(10);

    $pm->run_on_start(
        sub { print STDERR "Connecting to $_[1], port $srvs{$_[1]}{port}\n"; }
    );

    $pm->run_on_finish(
        sub {
            my (undef, $exit_code, $ident) = @_;
            if    ( $exit_code == 0 ) { $srvs{$ident}{stat} = "Good logon to $ident\n"; }
            elsif ( $exit_code == 1 ) { $srvs{$ident}{stat} = "*** Logon to $ident failed\n"; }
            elsif ( $exit_code == 2 ) { $srvs{$ident}{stat} = "*** Connect to $ident failed\n"; }
            else  { $srvs{$ident}{stat} = " Script error while connecting to $ident\n"; }
            print STDERR $srvs{$ident}{stat};
        }
    );

    sub ftpcheck {
        my $id  = shift;
        my $srv = $srvs{$id};
        my $status = 1;
        my $ftp = Net::FTP->new($$srv{addr}, Timeout => 15, Port => $$srv{port});
        exit(2) if ! $ftp;
        $status = 0 if $ftp->login($$srv{user}, $$srv{pass});   # Change status to 'good' if logon works
        $ftp->quit();
        exit($status);
    }

    # do stuff to set up %srvs as hash of FTP servers and associated information here
    # when %srvs is set up, iterate through the keys like so:

    for my $child ( keys %srvs ) {
        my $pid = $pm->start($child) and next;
        &ftpcheck($child);
        $pm->finish($child);
    }
    $pm->wait_all_children;

    # Quick mail hack. I was in a hurry
    my $notify = 'me@example.com';   # placeholder address for the results mail
    open MAILFILE, "| mail -s 'ftpcheck results' $notify";
    print MAILFILE $srvs{$_}{stat} for sort keys %srvs;
    close MAILFILE;

    But the obvious limitation here is that a child has to exit before the parent can be updated.

Re: Use of shared memory to persist data between child processes
by perrin (Chancellor) on Oct 09, 2003 at 17:51 UTC
    There's no shortage of shared memory stuff for Perl. One of the most widely used options is IPC::Shareable. However, I'm not sure it supports Win32. A more portable option would be MLDBM::Sync, which definitely works on Win32 and just uses a dbm file for sharing.
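
    A minimal MLDBM::Sync sketch (untested; the file name is invented, and note that plain SDBM_File limits each value to roughly 1K; the bundled MLDBM::Sync::SDBM_File backend lifts that limit):

        use MLDBM::Sync;                    # wraps MLDBM and flock()s around every access
        use MLDBM qw(SDBM_File Storable);   # dbm backend + serializer for nested data
        use Fcntl qw(:DEFAULT);

        my %active;
        tie %active, 'MLDBM::Sync', '/tmp/active.dbm', O_CREAT | O_RDWR, 0640
            or die "tie: $!";

        # In a child:
        $active{'192.168.0.1'} = { started => time, file => '/big/file.iso' };

        # In the parent:
        for my $ip (keys %active) {
            print "$ip started at $active{$ip}{started}\n";
        }
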
Re: Use of shared memory to persist data between child processes
by fokat (Deacon) on Oct 12, 2003 at 02:46 UTC

    Keep in mind that if you're having lock()ing problems with files, you'll also have them with shared memory. Shared memory is expensive, performance-wise, for the system.

    The "shared" in its name only refers to the fact that it is mapped into more than one process. However, you still need to be /very/ careful when accessing it concurrently.

    Having said this, take a look at IPC::ShareLite.
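
    For example, wrapping the whole read-modify-write in an explicit lock (an untested sketch; the key is arbitrary and Storable stands in for whichever serialiser you prefer):

        use IPC::ShareLite qw(:lock);
        use Storable qw(freeze thaw);

        # Children created after this point can reuse the same object,
        # or attach to the same key themselves.
        my $share = IPC::ShareLite->new(
            -key     => 1971,               # arbitrary example key
            -create  => 'yes',
            -destroy => 'no',
        ) or die $!;

        $share->store( freeze({}) );        # start with an empty hash

        # In a child: locked read-modify-write of its own slot.
        my $clientip = '192.168.0.1';       # example key
        $share->lock(LOCK_EX);
        my $hash = thaw( $share->fetch );
        $hash->{$clientip} = 'active';
        $share->store( freeze($hash) );
        $share->unlock;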

    Best regards

    -lem, but some call me fokat