There's the little-known but core FileCache module, which will handle closing and reopening descriptors for you to keep you under the per-process handle limit.
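A minimal sketch of the idiom (the file names, counts, and `maxopen` value here are made up for illustration): you call `cacheout` before each write, and FileCache transparently closes the least recently used handle and reopens it in append mode when you come back to it.

```perl
use FileCache maxopen => 4;     # cap the number of real open handles
use File::Temp qw(tempdir);

# Illustrative only: ten files, written round-robin, with at most
# four descriptors actually open at any moment.
my $dir   = tempdir( CLEANUP => 1 );
my @paths = map { "$dir/out$_.log" } 0 .. 9;

for my $pass (0 .. 9) {
    for my $path (@paths) {
        cacheout $path;             # '>' on first open, '>>' on reopen
        print $path "pass $pass\n"; # the path doubles as the handle
    }
}

cacheout_close($_) for @paths;      # flush and drop the cached handles
```

Note that you pay a reopen (and a seek to end) every time a handle cycles out of the cache, so there's a throughput cost when the working set is much larger than `maxopen`.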
And yes, some versions of Solaris had a pitifully low file descriptor limit if you were using the C library's stdio.h functions (or anything sitting on top of them, rather than system calls such as read(2) and friends). I think someone basically declared the descriptor field as a char rather than an int, so even if you upped the ulimits you'd get bitten at around 252 descriptors. That said, I think this was aeons ago in the 2.4-2.6 era and was fixed somewhere around 2.7-2.8; Solaris 9 didn't have the problem at all that I recall.
Update: clarified the bit about what had the problem.
In reply to Re: Writing to many (>1000) files at once
by Fletch
in thread Writing to many (>1000) files at once
by suaveant