in reply to How to merge Huge log files (each 10 MB) into a single file
Anonymous Monk's advice is probably the best solution, but if you're looking to speed up a Perl solution, it may be faster to slurp each file whole instead of reading it line by line. You can do this by setting $/ to undef, or by using File::Slurp.
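A minimal sketch of the slurp-and-append approach, assuming the filenames (`merge_logs`, the input names, and the output name are all hypothetical):

```perl
use strict;
use warnings;

# Hypothetical helper: slurp each input file whole and append it to
# the named output file.
sub merge_logs {
    my ($out_name, @files) = @_;
    open my $out, '>', $out_name or die "Can't write $out_name: $!";
    for my $file (@files) {
        open my $in, '<', $file or die "Can't read $file: $!";
        # Locally setting $/ to undef makes <$in> return the whole file.
        print {$out} do { local $/; <$in> };
        close $in;
    }
    close $out;
}
```

With File::Slurp the loop body would collapse to `write_file($out_name, { append => 1 }, read_file($file))`. Either way, at ~10 MB per file the whole-file reads stay well within memory.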
That being said, wouldn't it make more sense to actually merge the files based on the timestamps of each log entry?
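A sketch of such an interleaved merge, under the assumption (not stated in the thread) that every log line begins with a lexically sortable timestamp such as "2009-09-03 14:23:01":

```perl
use strict;
use warnings;

# Hypothetical helper: read every line from every file, then sort the
# combined list. For ISO-style timestamps a plain lexical sort is also
# chronological order. Fine at ~10 MB per file; huge inputs would want
# a streaming k-way merge instead.
sub merge_by_timestamp {
    my (@files) = @_;
    my @lines;
    for my $file (@files) {
        open my $in, '<', $file or die "Can't read $file: $!";
        push @lines, <$in>;
        close $in;
    }
    return sort @lines;
}
```

If the timestamps are not lexically sortable (e.g. "Sep 03 14:23:01"), you would instead extract and normalize the timestamp in a Schwartzian transform before sorting.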
Replies are listed 'Best First'.

- Re^2: How to merge Huge log files (each 10 MB) into a single file by lnin (Initiate) on Sep 03, 2009 at 14:23 UTC
- by ikegami (Patriarch) on Sep 03, 2009 at 14:26 UTC
- by SuicideJunkie (Vicar) on Sep 03, 2009 at 14:48 UTC