in reply to how to close all files

You can close STDIN, STDOUT, and STDERR by name. As for the rest, a lexical filehandle does not need to be a standalone scalar. An array entry or a hash entry is a scalar, too.
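For the standard handles, closing by name looks like this (a minimal sketch; STDERR is closed last because once it is gone there is nowhere left to report a failure):

```perl
use warnings;
use strict;

# The three standard handles are package globals, so they can be
# closed by their bareword names.
close STDIN  or die "Cannot close STDIN: $!\n";
close STDOUT or die "Cannot close STDOUT: $!\n";
close STDERR;    # no handle left to report an error on, so no "or die"
```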

The sample code assumes you have a subdirectory called "files" in the current directory to keep from polluting your current directory. (I'm planning a Meditations post later about my method of organizing PM tests, examples, benchmarks, and sample data files for them. I'll update this with a ref to that when it's posted).

First, we have an array. Notice that the files are successfully closed; the second print to each then triggers a "print() on closed filehandle" warning.

use warnings;
use strict;

my @foo;

for ( 1..5 ) {
    open $foo[$_], '>', "files/file$_.txt"
        or die "Cannot write to file$_.txt: $!\n";
}
for ( 1..5 ) {
    print { $foo[$_] } "$_\n";
}
for ( 1..5 ) {
    close( $foo[$_] ) or die "Cannot close file$_.txt: $!\n";
}
for ( 1..5 ) {
    print { $foo[$_] } "$_\n";
}

Now we have a hash, which groups multiple descriptive handle names in one handy data structure. It closes the files and then tries to print to them, just as the array example did.

use warnings;
use strict;

my %foo = map { $_ => undef } qw( employees rates departments );

for ( keys %foo ) {
    open $foo{$_}, '>', "files/$_.txt"
        or die "Cannot write to $_.txt: $!\n";
}
for ( keys %foo ) {
    print { $foo{$_} } "$_\n";
}
for ( keys %foo ) {
    close( $foo{$_} ) or die "Cannot close $_.txt: $!\n";
}
for ( keys %foo ) {
    print { $foo{$_} } "$_\n";
}

There's nothing special about filehandles after a fork. Closing the files through the filehandles in the parent does not close them in the child. Closing them in the child does not close them in the parent. Closing them in one child does not close them in other children. By keeping the files you open grouped in a convenient container data structure through which you can iterate, you can close them just as easily after a fork as in a single process.

use warnings;
use strict;

my %foo = map { $_ => undef } qw( forked_1 forked_2 forked_3 );

for ( keys %foo ) {
    open $foo{$_}, '>', "files/$_.txt"
        or die "Cannot write to $_.txt: $!\n";
}

print STDERR "I am $$, and I am the parent.\n";

if ( my $kidpid = fork ) {
    my $deadkid = wait;
    print "$$ says: $deadkid is no more. Now I will write to the files.\n";
    for ( keys %foo ) {
        print { $foo{$_} } "$$: $_\n";
    }
}
elsif ( defined $kidpid ) {
    print STDERR "I am $$.\n";
    for ( keys %foo ) {
        close( $foo{$_} ) or die "Cannot close $_.txt: $!\n";
    }
    for ( keys %foo ) {
        print { $foo{$_} } "$$: $_\n";
    }
}
else {
    print STDERR "Failed to fork!?!?!?: $!\n";
}

You may note that the parent waits until after the child has exited before it even attempts to write to the files. The child cannot write to them, because it closed its files through the handles. The parent has no problem, because the files are still open in the parent.

Grouping filehandles in an array or hash is nothing revolutionary, but it can make your life (well, your programming task, anyway) a whole lot easier when data that belongs together needs to stay together. Filehandles aren't just simple string or numeric data, but they are data. The technique isn't just for forked processes, either. Another example application is a non-forking server process that accepts an unknown number of connections. Anything that needs to open all the files in a directory and keep them all open at once, rather than opening and closing them serially, is a good candidate too. Any program that opens a varying number of files, pipes, or sockets for any reason can benefit.
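As a sketch of the directory case (the directory and its contents are created here with File::Temp just to keep the example self-contained):

```perl
use warnings;
use strict;
use File::Temp qw(tempdir);

# Set up a hypothetical directory with a few files in it.
my $dir = tempdir( CLEANUP => 1 );
for my $n ( 1 .. 3 ) {
    open my $out, '>', "$dir/file$n.txt"
        or die "Cannot write file$n.txt: $!\n";
    print $out "data $n\n";
    close $out;
}

# Open every file in the directory at once, keyed by filename.
opendir my $dh, $dir or die "Cannot open $dir: $!\n";
my %fh;
for my $name ( grep { -f "$dir/$_" } readdir $dh ) {
    open $fh{$name}, '<', "$dir/$name"
        or die "Cannot read $name: $!\n";
}
closedir $dh;

# All the handles are open simultaneously ...
print scalar( keys %fh ), " handles open\n";

# ... and closing them is a single loop over the hash.
for ( keys %fh ) {
    close $fh{$_} or die "Cannot close $_: $!\n";
}
```

The same hash-of-handles shape works if the values are sockets from a non-forking server's accept loop instead of files.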