Greetings. I would describe myself as an intermediate Perl coder, although I am sure that is subject to much interpretation. Anyhow, my question is one of optimization.
The scenario: I have quite a few large text files (up to around 100 of them, each 400K-800K). Each contains time series data organized by row. The job is to find matching "types" of lines across the files, calculate averages of their data columns, and write the results to an output file.
The process I have in place now amounts to dragging whole files into memory and working on them there. This only works because I haven't been testing with very many of these text files: I've only been bringing about six into memory at a time, and even that is quite arduous.
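For reference, the load-everything step looks roughly like the sketch below (heavily simplified; the whitespace-separated "timestep, type, data columns" row layout is just an assumption for illustration):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the bring-it-all-into-memory approach.
# Assumed row layout: timestep, type, then numeric data columns,
# separated by whitespace.
my %rows_by_type;

for my $file (@ARGV) {
    open my $fh, '<', $file or die "Can't open $file: $!";
    my @lines = <$fh>;        # the entire file sits in memory here
    close $fh;
    for my $line (@lines) {
        chomp $line;
        my ($step, $type, @cols) = split /\s+/, $line;
        push @{ $rows_by_type{$type} }, \@cols;   # grows with every file read
    }
}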
What I would like to have is some sort of streaming file reader. That way I could stream through each of the 100 files in parallel, calculating averages as I go. This would all but eliminate the time it takes to drag these files into memory to be worked on, as well as reduce the memory footprint of the process.
The question is, how can I stream through many files in parallel when their structures aren't exactly parallel (some files have different time step sequences, or different "type" lines within a given time step)?
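What I picture is something along these lines: open all of the files at once, advance each one a line at a time, and keep running sums and counts keyed on "timestep|type". Because every line carries its own time step and type, the files don't have to be structurally parallel; a file that never has a given (timestep, type) pair simply never contributes to that key. The row layout below is only an assumption for illustration:

#!/usr/bin/perl
use strict;
use warnings;

# Sketch of the streaming idea.  Assumed row layout: timestep, type,
# then numeric data columns, whitespace-separated.
my (%sum, %count);

# Open every input file up front (100 handles is well within normal limits).
my @fhs = map {
    open my $fh, '<', $_ or die "Can't open $_: $!";
    $fh;
} @ARGV;

# Round-robin: advance each still-open file by one line per pass.
while (@fhs) {
    @fhs = grep {
        my $fh   = $_;
        my $line = <$fh>;
        if (defined $line) {
            chomp $line;
            my ($step, $type, @cols) = split /\s+/, $line;
            my $key = "$step|$type";
            $sum{$key}[$_] += $cols[$_] for 0 .. $#cols;
            $count{$key}++;
            1;                     # keep this handle for the next pass
        }
        else {
            close $fh;
            0;                     # drop handles that have hit EOF
        }
    } @fhs;
}

# One averaged row per (timestep, type) pair.
# (String sort is just for the sketch; sort numerically by timestep for real output.)
for my $key (sort keys %sum) {
    my ($step, $type) = split /\|/, $key;
    my @avg = map { $_ / $count{$key} } @{ $sum{$key} };
    print join("\t", $step, $type, @avg), "\n";
}

Since each line is self-describing, the same accumulation would work reading the files one after another instead; interleaving them only matters if I want to emit finished time steps before all the files have been read. Either way, the only things held in memory are the running sums and counts.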
Also, as an aside, is the Benchmark module the standard tool to time code?
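For context, the kind of usage I have in mind is the core Benchmark module's cmpthese/timethese, e.g. comparing two ways of splitting a row (the candidates here are just placeholders):

use strict;
use warnings;
use Benchmark qw(cmpthese timethese);

my $line = "0.5 typeA 1.0 2.0 3.0";

# Compare two candidate ways of splitting a row, 100_000 runs each.
cmpthese(100_000, {
    split_regex => sub { my @f = split /\s+/, $line },
    split_space => sub { my @f = split ' ',   $line },
});

# Or just time one of them.
timethese(50_000, {
    split_regex => sub { my @f = split /\s+/, $line },
});

(Time::HiRes is the usual choice for wall-clock timing of a whole run.)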