in reply to Re: Writing to many (>1000) files at once
in thread Writing to many (>1000) files at once

I agree: if you can do it in memory, then do it in memory. The standard Perl I/O mechanisms are also quite slow for this. I would recommend File::Slurp (it supports both slurping and spewing). I used it once in a program that had to modify roughly 1000 files. Compared to standard slurping like:
{ local $/ = undef; my $wholefile = <$FH>; }
it was 15 times faster (the run now takes roughly 3 minutes, while with standard Perl mechanisms like the one above it took 45 minutes). It may even be that both the input and output data fit into main memory - if they do, and if that memory consumption isn't a problem, then simply do it - it's much faster.
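For reference, here is a minimal sketch of the read-modify-write pattern with File::Slurp. The read_file and write_file calls are the module's documented interface; the file name and the substitution are just placeholders for whatever edit you need to make:

```perl
use strict;
use warnings;
use File::Slurp qw(read_file write_file);

# Spew: write the whole content out in one call.
write_file('example.txt', "line 1\nline 2\n");

# Slurp: read the whole file into a scalar in one call
# (in scalar context read_file returns the entire file).
my $content = read_file('example.txt');

# Modify in memory, then spew the result back out.
$content =~ s/line/row/g;
write_file('example.txt', $content);
```

For a loop over ~1000 files you would simply wrap the slurp/modify/spew steps in a foreach over the file list; each file is handled with two I/O calls instead of a line-by-line read loop.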