Yowch! You're probably hitting an OS limit on the number of file handles you can have open at once. If one of the approaches that reads the whole file into a hash doesn't run out of RAM, you'll want to use one of those (see the sketch after the code below). Otherwise, you'll have to modify the get_file_handle function to close some of its file handles when it's about to run out. As a quick off-the-cuff thing, it might¹ be good enough to simply close *all* the file handles when you reach some predetermined limit. Something like (untested):
    my $Max_FH = 1000;   # Maximum number of file handles you want open
    my %FHList;          # Holds file handles we've opened so far
    my %Seen;            # Tracks files we've already created

    sub get_file_handle {
        my $key = shift;
        if (!exists $FHList{$key}) {
            # At the limit: close *all* open handles and start over
            if ($Max_FH <= keys %FHList) {
                close $FHList{$_} for keys %FHList;
                %FHList = ();
            }
            # Truncate on first open, append on reopen, so data written
            # before a handle was closed doesn't get clobbered
            my $mode = $Seen{$key}++ ? '>>' : '>';
            open $FHList{$key}, $mode, "output_$key.txt" or die $!;
        }
        return $FHList{$key};
    }
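For comparison, the read-everything-into-a-hash approach mentioned above could look something like this. It's an untested sketch: the tab-separated input format with the tag in the first column is just my assumption, not something from your data:

    use strict;
    use warnings;

    # Slurp the whole input, grouping lines by tag, then write each
    # output file exactly once -- only one handle is open at a time.
    # Assumes tab-separated input with the tag in column 1.
    my %lines_for;
    while (my $line = <STDIN>) {
        my ($key) = split /\t/, $line;
        push @{ $lines_for{$key} }, $line;
    }

    for my $key (keys %lines_for) {
        open my $fh, '>', "output_$key.txt" or die $!;
        print $fh @{ $lines_for{$key} };
        close $fh;
    }

The trade-off is obvious: it never runs out of handles, but it holds the entire input in RAM, so it only works if the file fits in memory.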
¹ Some workloads have a handful of commonly-used tags and a mess of onesie-twosies. If that's the case, this will occasionally close and reopen the handles for the commonly-used tags, but it'll clear out all the lesser-used ones. If the commonly-used values are common enough, the opens and closes will amortize to a small amount of overhead. If your workload has an evenly-distributed set of keys, though, you'll need to make get_file_handle much smarter...
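One smarter variant (again untested, and just one way to do it) evicts only the least-recently-used handle instead of closing everything:

    my $Max_FH = 1000;   # cap on simultaneously open handles
    my (%FHList, %LastUse, %Seen);
    my $Tick = 0;        # monotonic counter standing in for "time"

    sub get_file_handle_lru {
        my $key = shift;
        if (!exists $FHList{$key}) {
            if (keys(%FHList) >= $Max_FH) {
                # Evict only the handle that was used longest ago
                my ($lru) = sort { $LastUse{$a} <=> $LastUse{$b} } keys %FHList;
                close $FHList{$lru};
                delete $FHList{$lru};
                delete $LastUse{$lru};
            }
            # Append on reopen so output written earlier isn't clobbered
            my $mode = $Seen{$key}++ ? '>>' : '>';
            open $FHList{$key}, $mode, "output_$key.txt" or die $!;
        }
        $LastUse{$key} = $Tick++;
        return $FHList{$key};
    }

The sort makes each eviction O(N log N) in the number of open handles; for a thousand handles that's negligible, but a heap or ordered list would scale better.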
...roboticus