in reply to FORRRRRK again...

Hi,

thanks for your answers. I immediately dug into IPC::ShareLite
and tried it, and it works. However, it's not what I need: the
children just need to do compute-intensive work, and after that
there is no need for them anymore. So perhaps what I need
is more of a client-server communication, where the clients
(child processes) do the computations, then send about
1 kB of data to the server (parent process) and terminate themselves;
the parent process stores this data in a hash.
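
Something like this is what I have in mind (an untested sketch of my own,
using lexical filehandles; the squaring just stands in for the real
computation, and each child gets its own pipe back to the parent):

    use strict;

    my %result;                       # the parent collects results here
    my %reader;                       # child pid => read end of that child's pipe

    for my $task (1 .. 4) {
        pipe(my $read, my $write) or die "Can't open pipe: $!\n";
        my $pid = fork;
        die "Can't fork: $!\n" unless defined $pid;

        if ($pid == 0) {              # child: compute, report, terminate
            close $read;
            my $answer = $task * $task;    # stand-in for the real computation
            print $write "$answer\n";
            close $write;
            exit 0;
        }

        close $write;                 # the parent keeps only the read end
        $reader{$pid} = $read;
    }

    for my $pid (keys %reader) {      # collect about 1 kB from each child
        my $fh = $reader{$pid};
        local $/;                     # slurp whatever the child sent
        $result{$pid} = <$fh>;
        close $fh;
        waitpid($pid, 0);             # reap the finished child
    }

    print "$_: $result{$_}" for sort { $a <=> $b } keys %result;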

There is really no need for "sharing" this information,
just for distributing work and then collecting the results.
I would like to stay with the fork-based approach, because threads
seem unstable right now, and because with Mosix (www.mosix.org) it
will even be possible to distribute this task across a Beowulf
cluster - I hope.

Ciao

Re: Re: FORRRRRK again...
by Zapawork (Scribe) on Jun 28, 2001 at 20:06 UTC
    Hi there fatvamp,

    It sounds like what you really need to do then is just set up a pipe, fork, test whether the process is the child or the parent, and then close the end of the pipe that will not be used (the write end in the parent, the read end in the child). This is very textbook... so much so that I stole it out of Network Programming with Perl.

    Sample code follows:

    use strict;

    # PARENT is the read end of the pipe, CHILD is the write end
    pipe(PARENT, CHILD) or die "Can't open pipe: $!\n";

    if (fork == 0) {          # test to see if we are parent or child
        close PARENT;         # the child only writes
        select CHILD;
        $| = 1;               # unbuffer the pipe
        # do your stuff here; anything printed now goes to the parent
        exit 0;
    }

    # if we get here we are the parent process
    close CHILD;              # the parent only reads
    print while <PARENT>;     # this shows the output coming from the child;
                              # store the data however you prefer
    exit 0;

    If the above does not give you SMP functionality, then use the parallel module again and open a pipe from each child back to the main process.
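
    For instance (a rough sketch, assuming "the parallel module" means
    Parallel::ForkManager, written for a reasonably recent perl, and assuming
    each child's output -- about 1 kB here -- fits in the pipe buffer before
    the parent gets around to reading it):

    use strict;
    use Parallel::ForkManager;

    my $pm = Parallel::ForkManager->new(4);   # at most four children at once
    my %reader;                               # child pid => read end of its pipe

    for my $task (1 .. 8) {
        pipe(my $read, my $write) or die "Can't open pipe: $!\n";

        my $pid = $pm->start;
        if ($pid == 0) {                      # child: compute, write, finish
            close $read;
            print $write "result for task $task\n";
            close $write;
            $pm->finish;
        }

        close $write;                         # parent keeps the read end
        $reader{$pid} = $read;
    }

    $pm->wait_all_children;

    for my $pid (keys %reader) {
        my $fh = $reader{$pid};
        print while <$fh>;                    # or store the data in a hash
        close $fh;
    }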

    Dave -- Saving the world one node at a time

Re: Re: FORRRRRK again...
by Eradicatore (Monk) on Jun 28, 2001 at 18:37 UTC
    Just a quick question on that. Why do you need the child processes to die after they report? I wonder if it may be easier to fork off X children that handle input from the parent in a more generic way, do the work, report back the findings, and stay alive waiting for the next job. If you were sure you wanted a child to die at some point in the future, you could obviously send it a kill signal then.
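
    A rough sketch of that worker idea (not from the original thread): one
    long-lived worker fed jobs over a pair of pipes, one pipe per direction;
    the "done: ..." line stands in for real results.

    use strict;

    # two one-way pipes make a bidirectional channel to one worker
    pipe(FROM_PARENT, TO_CHILD)  or die "Can't open pipe: $!\n";
    pipe(FROM_CHILD,  TO_PARENT) or die "Can't open pipe: $!\n";

    my $pid = fork;
    die "Can't fork: $!\n" unless defined $pid;

    if ($pid == 0) {                     # worker: loop until told to quit
        close TO_CHILD;
        close FROM_CHILD;
        select TO_PARENT; $| = 1;        # unbuffer the result pipe
        while (my $job = <FROM_PARENT>) {
            chomp $job;
            last if $job eq 'quit';
            print TO_PARENT "done: $job\n";   # stand-in for the real work
        }
        exit 0;
    }

    # parent: hand out jobs one at a time and read each result back
    close FROM_PARENT;
    close TO_PARENT;
    select((select(TO_CHILD), $| = 1)[0]);    # unbuffer the job pipe
    for my $job (qw(job1 job2 job3)) {
        print TO_CHILD "$job\n";
        my $result = <FROM_CHILD>;
        print STDOUT $result;
    }
    print TO_CHILD "quit\n";
    waitpid($pid, 0);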

    Just a thought. It may or may not be any easier or cleaner. It's just something I think about with child processes now that I *have* to do it to make forked processes work with Perl/Tk (see this node: Re: IPC, trying for have child wait for commands).

    Justin Eltoft

    "If at all god's gaze upon us falls, its with a mischievous grin, look at him" -- Dave Matthews