st_imbob has asked for the wisdom of the Perl Monks concerning the following question:

I am having trouble getting enough file handles; my application can sometimes need more than 256. Solaris' 32-bit stdio seems to be the source of the problem, even though Solaris 8 itself can support a bajillion descriptors per process.
    host> ulimit -a
    time(seconds)         unlimited
    file(blocks)          unlimited
    data(kbytes)          unlimited
    stack(kbytes)         8192
    coredump(blocks)      unlimited
    nofiles(descriptors)  3072
    vmemory(kbytes)       unlimited
The code below fails at 256, although ulimit would indicate that it shouldn't:
    #!/usr/bin/perl
    use FileHandle;

    for ($i = 0; $i < 2048; $i++) {
        $fd = new IO::File;
        sysopen($fd, "$i", O_WRONLY | O_CREAT)
            or die "stopped at: $i: $!\n";
        $fd{"$i"} = $fd;
    }
    for ($i = 0; $i < 2048; $i++) {
        $fh = $fd{"$i"};
        print $fh "test\n";
    }
How to proceed? We built perl 5.8 with    useperlio=define d_sfio=undef but that didn't help, presumably because perlio still sits on top of stdio. Does anyone have experience with sfio, or any indication that it would solve the problem? It seems that a 64-bit perl would use the 64-bit stdio libraries, which are not broken, but we need DBD::Oracle, and that seems to require 64-bit Oracle, and we are not quite there yet.
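Update: one way to sidestep stdio entirely is to drop down to raw file descriptors with POSIX::open and POSIX::write, which go straight to the open(2)/write(2) system calls, so the 255-stream fopen() limit never comes into play and only the nofiles ulimit (3072 here) constrains you. A minimal sketch (the filenames and the small loop count are toy stand-ins for the real 2048-file case):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX qw(O_WRONLY O_CREAT);

# Open raw descriptors with POSIX::open -- no FILE* stream is ever
# created, so Solaris' 32-bit stdio 255-stream limit does not apply.
my %fd;
for my $i (0 .. 9) {                       # 2048 in the real case
    my $fd = POSIX::open("$i", O_WRONLY | O_CREAT, 0644);
    defined $fd or die "stopped at: $i: $!\n";
    $fd{$i} = $fd;
}

# Write through the raw descriptors with POSIX::write (write(2)).
for my $i (0 .. 9) {
    my $buf = "test\n";
    POSIX::write($fd{$i}, $buf, length $buf);
}

POSIX::close($_) for values %fd;
```

The trade-off is that you lose Perl's buffered I/O and the filehandle niceties (print, readline, etc.); every POSIX::write is a system call.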

Replies are listed 'Best First'.
Re: Need lots of filehandles under Solaris
by Abigail-II (Bishop) on Sep 25, 2003 at 23:37 UTC
    This is mentioned in the README.solaris coming with perl 5.8.1:
    =head1 RUNTIME ISSUES FOR PERL ON SOLARIS.

    =head2 Limits on Numbers of Open Files on Solaris.

    The stdio(3C) manpage notes that for LP32 applications, only 255
    files may be opened using fopen(), and only file descriptors 0
    through 255 can be used in a stream. Since perl calls open() and
    then fdopen(3C) with the resulting file descriptor, perl is
    limited to 255 simultaneous open files, even if sysopen() is
    used. If this proves to be an insurmountable problem, you can
    compile perl as a LP64 application, see L<Building an LP64 perl>
    for details. Note also that the default resource limit for open
    file descriptors on Solaris is 255, so you will have to modify
    your ulimit or rctl (Solaris 9 onwards) appropriately.

    Abigail

Re: Need lots of filehandles under Solaris
by Fletch (Bishop) on Sep 26, 2003 at 01:33 UTC

    If you don't require the descriptors simultaneously you might be able to get away with using the standard FileCache module.

      Thanks for the tip! FileCache is just the ticket. I was thinking of having to write it myself. You ARE a saint. Thanks also for not wondering out loud why I would need 256 file handles.
      FileCache Warning!
      In Perl 5.6, the module's cacheout_close() is broken and will cause errors if you try to re-open that file: it will think the file is still open and produce "write to closed file" errors. This has been noted by Anthony Thyssen, who suggests an alternate close here.
      This has been fixed in Perl 5.8.
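      For anyone landing here later, the basic FileCache pattern looks like this (a minimal sketch; the bin names and counts are made up, and it uses the module's classic symbolic-filehandle interface, so no `use strict` in this snippet):

      ```perl
      #!/usr/bin/perl
      # FileCache's classic interface prints through the path string as a
      # symbolic filehandle, so this sketch omits 'use strict'.
      use FileCache maxopen => 16;   # keep at most 16 handles open at once

      # 20 output files but only 16 simultaneous handles: FileCache closes
      # cached handles when the cap is hit and transparently reopens files
      # in append mode the next time you cacheout the same path.
      for my $i (0 .. 99) {
          my $path = "bin" . ($i % 20);   # hypothetical output files
          cacheout $path;                 # open, or reopen for append
          print $path "record $i\n";
      }
      ```

      Writes interleave correctly across the close/reopen cycles because reopens are in append mode; you just pay the cost of an extra open(2) whenever a cached handle has been evicted.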
Re: Need lots of filehandles under Solaris
by graff (Chancellor) on Sep 26, 2003 at 02:09 UTC
    I'm sure you have reasons for wanting more than 255 file handles open at once, but in a situation like this, I'd be more inclined to revisit and rework the app design. There's bound to be a way to divide the task into partitions or stages, so that you make the best possible use of limited available resources.

    For example, if it's a job like sorting some massive, monolithic input stream into a thousand discrete output bins, do a first pass that divides it into just, say, 40 bins, such that each of those can then be easily subdivided into 25 sets by a second pass over the 40 output files.
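    A rough sketch of that two-pass idea, assuming a simple hash partition rather than a true sort (the bin_of checksum, file names, and in-memory stand-in for the input stream are all made up for illustration) -- 1000 final bins, but never more than 40 handles open at once:

    ```perl
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $FIRST  = 40;    # first-pass bins  (40 handles open)
    my $SECOND = 25;    # sub-bins per bin (25 handles open)

    # Toy partitioning function: checksum of the line, modulo bin count.
    sub bin_of { my ($line, $n) = @_; return unpack('%32C*', $line) % $n }

    # Stand-in for the massive input stream.
    my @input = map { "key$_ payload\n" } 1 .. 200;

    # Pass 1: stream everything into 40 intermediate files.
    {
        my @fh;
        open $fh[$_], '>', "pass1.$_" or die "pass1.$_: $!" for 0 .. $FIRST - 1;
        for my $line (@input) {
            print { $fh[ bin_of($line, $FIRST) ] } $line;
        }
        close $_ for @fh;
    }

    # Pass 2: split each intermediate file into 25 final bins.
    for my $b (0 .. $FIRST - 1) {
        open my $in, '<', "pass1.$b" or die "pass1.$b: $!";
        my @fh;
        for my $s (0 .. $SECOND - 1) {
            open $fh[$s], '>', 'bin.' . ($b * $SECOND + $s) or die "bin: $!";
        }
        while (my $line = <$in>) {
            print { $fh[ bin_of($line, $SECOND) ] } $line;
        }
        close $_ for @fh, $in;
    }
    ```

    For an actual sort you'd partition on key ranges instead of a checksum so the final bins come out in order, but the handle-count arithmetic is the same.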