in reply to how to close all files

Unfortunately, this is a non-trivial problem. The lsof (list open files) utility may help, though (if you're on a platform that has it...), or its Perl wrapper Unix::Lsof, written by a fellow monk.
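
For instance, a quick way to see what the current process has open is to point lsof at its own PID. A minimal sketch, assuming the external lsof utility is installed and on your PATH (the /etc/hostname handle is only there so something of ours shows up in the listing):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # open something so it appears in the listing below
    open my $fh, '<', '/etc/hostname' or die "open: $!";

    # ask lsof which files/sockets this very process ($$ = current PID) has open
    my @listing = `lsof -p $$`;
    die "lsof failed or produced no output (not installed?)\n" unless @listing;
    print @listing;

From such a listing you can then decide which descriptors actually need closing.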

Update: just a few more words on where this can get tricky.

Consider the not-so-untypical scenario of getting a long-running forked process to dissociate cleanly in some webserver context, such that the (grand-)parent process may continue normally and serve subsequent requests. For this, it's necessary to close, in the child, any file handles the grandparent would otherwise wait on (and thus block/hang). Now, with a regular CGI program this is still rather simple (by default, closing stdout/stderr is sufficient), but if you, let's say, have a FastCGI environment with persistent DBI/DBD connections to an Oracle database, things can get somewhat more involved, in particular because the respective modules and libraries may have opened files and sockets of their own under the hood, which are not under your immediate control. Also, as practice shows, brute-force methods that close all handles may indirectly render the parent process non-functional...
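
To illustrate just the simple CGI case mentioned above, here's a sketch of a child that detaches so the webserver doesn't keep waiting for it (the sleep stands in for whatever long-running work you have; what exactly your server needs closed may differ):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use POSIX qw(setsid);

    print "Content-type: text/plain\n\nrequest accepted\n";

    my $pid = fork();
    die "fork failed: $!" unless defined $pid;

    if ($pid == 0) {
        # Child: detach, so the parent's request can finish.
        setsid();                                   # new session, no controlling terminal
        open STDIN,  '<', '/dev/null' or die $!;
        open STDOUT, '>', '/dev/null' or die $!;    # server stops waiting once these are reopened
        open STDERR, '>', '/dev/null' or die $!;

        sleep 60;                                   # ... long-running work goes here ...
        exit 0;
    }
    # Parent falls through and returns to the server immediately.

With FastCGI, persistent DBI handles etc., the above is not sufficient, for the reasons given.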

That said, as long as your code has full control over which files are being opened, I agree it's usually best to just keep track of what needs to be closed (as has already been suggested elsewhere in the thread).
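
A minimal sketch of that bookkeeping approach (tracked_open/close_all are made-up names, purely for illustration):

    use strict;
    use warnings;

    my @open_handles;   # every handle we open goes in here

    sub tracked_open {
        my ($mode, $path) = @_;
        open my $fh, $mode, $path or die "open $path: $!";
        push @open_handles, $fh;
        return $fh;
    }

    sub close_all {
        close $_ for @open_handles;
        @open_handles = ();
    }

    # usage
    my $log = tracked_open('>>', '/tmp/example.log');
    print {$log} "hello\n";
    close_all();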