Anonymous Monk's advice is probably the best solution, but if you want to speed up a Perl solution, it might be faster to slurp each file instead of reading it line by line. You can do that by setting $/ to undef, or by using File::Slurp. A rough sketch is below.
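Something like this, just as a sketch (the file names and the merged.log output are made up for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Slurp each log whole instead of looping over its lines.
    my @logs = glob 'app*.log';

    open my $out, '>', 'merged.log' or die "Can't write merged.log: $!";
    for my $file (@logs) {
        open my $in, '<', $file or die "Can't read $file: $!";
        local $/;                    # undef the input record separator => slurp mode
        print {$out} scalar <$in>;   # read the entire file in one go
        close $in;
    }
    close $out;

With File::Slurp the loop body collapses to something like print {$out} read_file($file);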
That being said, wouldn't it make more sense to actually merge the files based on the timestamps of each log entry?
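Roughly along these lines, assuming every line begins with a sortable timestamp such as "2009-06-15 12:34:56" (adjust the regex to whatever format your logs actually use; file names are again placeholders):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Collect [timestamp, line] pairs from every log file.
    my @entries;
    for my $file (glob 'app*.log') {
        open my $in, '<', $file or die "Can't read $file: $!";
        while (my $line = <$in>) {
            my ($ts) = $line =~ /^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})/;
            push @entries, [ $ts || '', $line ];
        }
        close $in;
    }

    # ISO-8601-style timestamps sort correctly as plain strings.
    open my $out, '>', 'merged.log' or die "Can't write merged.log: $!";
    print {$out} $_->[1] for sort { $a->[0] cmp $b->[0] } @entries;
    close $out;

At 10 MB per file this fits comfortably in memory; for much larger inputs you'd want a streaming merge instead.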
In reply to Re: How to merge Huge log files (each 10 MB) into a single file by lostjimmy, in thread How to merge Huge log files (each 10 MB) into a single file by lnin.