No, opening a file handle in a child process doesn't cause the same handle to magically become open in the parent process. Once the child exists, the two stop impacting each other except in a very few, specific ways. [Unless, of course, you are using Windows native Perl where fork() isn't really fork.]
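A minimal sketch of that independence (the scratch filename is hypothetical): the child opens and writes a file, but the parent never sees that handle and must open its own.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $file = "child_only_$$.txt";    # hypothetical scratch file

defined(my $pid = fork) or die "fork: $!";
if ($pid) {
    waitpid $pid, 0;               # let the child open, write, and close first
    # The child's $fh never existed in this process; we must open our own.
    open my $in, '<', $file or die "parent open: $!";
    print "parent read: ", scalar <$in>;
    close $in;
    unlink $file;
}
else {
    open my $fh, '>', $file or die "child open: $!";
    print $fh "written by child $$\n";
    close $fh;                     # closes nothing in the parent
    exit;
}
```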
#!/usr/bin/env perl
use strict;
use warnings;
use autodie;
use POSIX qw{mkfifo};
use Time::HiRes qw{usleep};

my $named_pipe = "pm_1085515_$$.fifo";
unlink $named_pipe if -e $named_pipe;

if (fork) {
    my $timeout = 2;
    local $SIG{ALRM} = sub { die "Timed out after $timeout seconds" };
    alarm $timeout;
    usleep 1e3 until -p $named_pipe;
    alarm 0;
    open my $writer, '>', $named_pipe;
    print $writer "$_\n" for qw{B D C A};
    close $writer;
    wait;
    unlink $named_pipe;
}
else {
    open my $sorter, '|-', 'sort';
    mkfifo $named_pipe, 0600;
    open my $reader, '<', $named_pipe;
    print $sorter $_ while <$reader>;
    close $reader;
    close $sorter;
}
Input:
B
D
C
A
Output:
A
B
C
D
Named pipe works! MANY thanks Ken.
I googled but didn't see an answer to this: if you don't delete the named pipe after all is done, will it contain all the data that was passed to it? I tried to find out by running more, less, and cat on the named pipe, but they don't seem to work (probably by design). Also, is there a way to find out the actual size of the named pipe? (I guess it always reports 0).
In my script, deletion of the named pipe occurred in two places:

- unlink $named_pipe if -e $named_pipe;

  This was partly to start with a clean slate and partly defensive programming. Note that it only checks for filename existence (-e), not specifically for a named pipe (which would have been -p). I'd recommend you implement a file naming convention for your named pipes that both identifies them as named pipes and identifies their source (in my script, that was: pm for this site, your OP's node ID, the parent's PID, and a .fifo extension). Depending on the level of robustness required and in-house standards/policies/etc., you may want to do more than just leave error handling to autodie (e.g. custom messages, timestamps, logging, and so on).

- unlink $named_pipe;

  This is just for housekeeping purposes. Without it, you could accumulate as many old named pipes as there are potential PIDs of the parent (that's tens of thousands on my system).
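A sketch of that housekeeping, assuming the naming convention above (the glob pattern pm_1085515_*.fifo is just an illustration; adjust it to your own scheme). Using -p rather than -e means an ordinary file that happens to match the pattern is left alone.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my $removed = 0;
for my $stale (glob 'pm_1085515_*.fifo') {
    next unless -p $stale;        # stricter than -e: skip anything that isn't a FIFO
    unlink $stale and $removed++;
}
print "removed $removed stale fifo(s)\n";
```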
The named pipe doesn't actually hold the data (at least, not in any implementation that I'm aware of).
The data is written to, and read from, a buffer.
You can close the $writer filehandle, open another one, and write more data to the buffer.
Similarly, you can close the $reader filehandle, open another one, and read more data from the buffer.
If you close both the $reader and $writer, the data in the buffer is lost (this has nothing to do with whether the named pipe was deleted); if the named pipe wasn't deleted, and (after closing both the $reader and $writer) you opened a new filehandle, it would start with an empty buffer.
[You can have multiple readers and writers accessing the same named buffer at the same time.
Multiple writers may be useful (cf. parent and child processes both writing to STDOUT at the same time);
multiple readers is probably a very poor choice (again, cf. parent and child processes both trying to read from STDIN at the same time).
If you do have multiple filehandles, all need to be closed for the "data in the buffer is lost" scenario (in the last paragraph) to occur.]
Yes, the size of named pipes is reported as zero, e.g.
prw------- 1 ken staff 0 10 May 09:45 pm_1085515_64171.fifo
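You can check both points from Perl as well; this small sketch (with a hypothetical FIFO name) creates a FIFO, confirms -p recognises it, and shows stat reporting a zero size, since the data lives in a kernel buffer rather than in the filesystem entry.

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use POSIX qw{mkfifo};

my $fifo = "demo_$$.fifo";                 # hypothetical name
mkfifo $fifo, 0600 or die "mkfifo: $!";
print "-p says: ", (-p $fifo ? "named pipe" : "not a pipe"), "\n";
print "size: ", (stat $fifo)[7], "\n";     # element 7 of stat is the size, always 0 here
unlink $fifo;
```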
Finally, do note that the script I provided was only intended to show a "technique".
There may well be lots of things you'll want to tweak or substantially modify.
Filehandles are not meant to be shared between processes. The underlying file descriptor itself may be shared, but this does not accomplish much if your idea is to arrange for inter-process communication.
For IPC, pipes or socketpairs are typically used.
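The usual fork-plus-pipe pattern looks like this, as a minimal sketch: each process closes the end it doesn't use, the child writes, and the parent reads until EOF.

```perl
#!/usr/bin/env perl
use strict;
use warnings;

pipe my $reader, my $writer or die "pipe: $!";

defined(my $pid = fork) or die "fork: $!";
if ($pid) {
    close $writer;                 # parent only reads; keeping this open would delay EOF
    print "parent got: $_" while <$reader>;
    close $reader;
    waitpid $pid, 0;
}
else {
    close $reader;                 # child only writes
    print $writer "hello from child $$\n";
    close $writer;
    exit;
}
```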
The underlying file descriptor itself may be shared
But not even for file descriptors does the sharing extend to the point that opening (or even closing) the FD in one process will cause the same effect on a copy of it in some related process.
The most interesting part of the sharing (to me) is that seek() offsets are shared.
$ cat kidseek
#!/usr/bin/perl -w
use strict;

$| = 1;
print "One\nTwo\nThree\n";
if( ! fork() ) {
    seek( STDOUT, 4, 0 )
        or die "Can't seek STDOUT in child: $!\n";
    print "Kid";
    seek( STDOUT, 0, 0 )
        or die "Can't again seek STDOUT in child: $!\n";
    exit;
}
wait();
print "Dad";
$ ./kidseek >kidseek.txt
$ cat kidseek.txt
Dad
Kid
Three
$
italdesign:
Hint: Try putting a sleep 10; after the print "\n" in the child section.
...roboticus
When your only tool is a hammer, all problems look like your thumb.