Is that limitation documented somewhere? The perlfork docs seem to imply that they can be separate.
Open handles to files, directories and network sockets
All open handles are dup()-ed in pseudo-processes, so that closing
any handles in one process does not affect the others.
See below for some limitations.
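A minimal sketch of that first claim (the file name is made up for illustration): closing the dup()-ed handle in the child does not close the parent's copy.

    use strict;
    use warnings;

    # Hypothetical file name, purely for illustration.
    open my $fh, '<', 'some_file.txt' or die "open: $!";

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        close $fh;      # closes only the child's dup()-ed copy
        exit 0;
    }

    waitpid $pid, 0;
    my $line = <$fh>;   # the parent's handle is still open and readable
    print defined $line ? "parent read: $line" : "file was empty\n";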
Here's the caveat that follows.
Open filehandles
Any filehandles open at the time of the fork() will be dup()-ed.
Thus, the files can be closed independently in the parent and
child, but beware that the dup()-ed handles will still share the
same seek pointer. Changing the seek position in the parent will
change it in the child and vice-versa. One can avoid this by
opening files that need distinct seek pointers separately in the
child.
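A small sketch of that caveat and the suggested workaround (the demo file name and contents are made up): the child's seek through the dup()-ed handle moves the parent's position, while a handle the child opens itself is independent.

    use strict;
    use warnings;
    use Fcntl qw(SEEK_SET SEEK_END);

    # Hypothetical demo file; create it with a couple of lines first.
    my $file = 'seek_demo.txt';
    open my $out, '>', $file or die "open: $!";
    print {$out} "line one\nline two\n";
    close $out or die "close: $!";

    open my $fh, '<', $file or die "open: $!";

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: the dup()-ed handle shares its seek pointer with the
        # parent, so this seek moves the parent's position too.
        seek $fh, 0, SEEK_END or die "seek: $!";
        exit 0;
    }

    waitpid $pid, 0;
    print "parent is at eof after the child's seek\n" if eof $fh;

    # Workaround from the docs: give the child its own handle.
    my $pid2 = fork();
    die "fork failed: $!" unless defined $pid2;

    if ($pid2 == 0) {
        open my $own, '<', $file or die "open: $!";
        seek $own, 0, SEEK_END or die "seek: $!";   # moves only the child's pointer
        exit 0;
    }

    waitpid $pid2, 0;
    seek $fh, 0, SEEK_SET or die "seek: $!";        # reset the shared handle
    print "parent still reads: ", scalar <$fh>;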
-xdg
Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
Kind of, yes. In the same document it's written:
In the eyes of the operating system, pseudo-processes created via the fork() emulation are simply threads in the same process. This means that any process-level limits imposed by the operating system apply to all pseudo-processes taken together. This includes any limits imposed by the operating system on the number of open file, directory and socket handles, limits on disk space usage, limits on memory size, limits on CPU utilization etc.
As far as the Win32 operating system is concerned, all threads share the same STDIN/STDOUT/STDERR.
So in turn, all Perl pseudo-processes (Win32 OS threads) share the same STDIN/STDOUT/STDERR: even if the Perl globs (*STDIN/*STDOUT/*STDERR) are dup()-ed, the underlying Win32 file handles remain the same.
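In other words (a minimal sketch; the printed text is just illustrative), even though the child pseudo-process gets its own dup()-ed *STDOUT glob, both writes end up on the same underlying Win32 console handle.

    use strict;
    use warnings;

    # On Win32 the pseudo-processes are threads in one OS process, so
    # both of these prints go through the same underlying console handle.
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        print "child  ($$) writing to the shared STDOUT\n";
        exit 0;
    }

    print "parent ($$) writing to the shared STDOUT\n";
    waitpid $pid, 0;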
MJD says "you can't just make shit up and expect the computer to know what you mean, retardo!"
I run a Win32 PPM repository for perl 5.6.x and 5.8.x -- I take requests (README).
** The third rule of perl club is a statement of fact: pod is sexy.