You should first find out which part is slow.
You are opening and closing the output file for every line, instead of opening it once and then writing all your data to it. This is usually very slow.
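A minimal sketch of that, where $output_file and @lines_to_write are placeholders for whatever names your code actually uses:

    # Open the handle once, keep it open for the whole run, close it at the end.
    open my $out, '>', $output_file
        or die "Can't open '$output_file': $!";

    print {$out} $_ for @lines_to_write;    # or print inside your main loop

    close $out
        or die "Can't close '$output_file': $!";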
If simply reading all the input files is what is slow, then there is not much you can do to get faster.
If the searching/merge sort is slow, there are some things you can do to speed it up. For example, you are repeatedly calling hex2dec, and maybe Memoize'ing that function speeds things up. But while we are optimizing: are you sure you need to convert the timestamps to numbers before you can compare them? "0xAB00" gt "0x1234" is true, and as long as all timestamps have the same width and case, plain string comparison orders them correctly. So maybe you can drop the timestamp conversion entirely.
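A minimal sketch of the Memoize idea (Memoize is a core module; note that the memoize() call must come after hex2dec has been defined):

    use Memoize;

    # Cache the results so each distinct timestamp is converted only once,
    # no matter how often it reappears in the input.
    memoize('hex2dec');

    # hex2dec() is then called exactly as before; Memoize wraps it transparently.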
Also, what you are implementing is the output phase of a merge sort, so look at example implementations of that for reference. I would, for example, remove files from @filenames and arrayrefs from %all_files as soon as they become empty, instead of adding a placeholder entry that has to be re-checked on every iteration.
Also, instead of accumulating the output and writing it all at the end, it might be faster to write each piece of output as soon as you have it, since that gives the operating system a chance to flush data to disk while you are still producing the rest.
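Putting the last two points together, here is a hypothetical sketch of the merge output phase. It assumes %all_files maps each filename to an arrayref of that file's remaining (already sorted) lines, and timestamp_of() is a placeholder for however you extract the timestamp from a line:

    open my $out, '>', $output_file
        or die "Can't open '$output_file': $!";

    while ( keys %all_files ) {

        # Pick the source whose next line carries the smallest timestamp.
        my ($next) = sort {
            timestamp_of( $all_files{$a}[0] ) cmp timestamp_of( $all_files{$b}[0] )
        } keys %all_files;

        # Write the line immediately instead of accumulating it in memory.
        print {$out} shift @{ $all_files{$next} };

        # Drop exhausted sources so they are never looked at again.
        delete $all_files{$next} if !@{ $all_files{$next} };
    }

    close $out
        or die "Can't close '$output_file': $!";

Scanning all keys for the minimum on every iteration is fine for a handful of input files; with very many files, a heap/priority queue would avoid the repeated sort.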