in reply to Tabulating Data Across Multiple Large Files
I suspect you would improve performance by reading the data one line at a time from each file. Your strategy of collecting all the data at once risks driving the machine into swap, with the attendant cost of I/O against the swap device.
You can produce an array of open file handles, one for each data source, and place the first line of each in another array. You may need to raise the limit on open file handles allowed to you for this. Then, keeping track of which index provided each line, sort by timestamp. Extract data from the earliest line, run it through its processing to update the statistics you want, and replace it with the next line from the corresponding file handle. Sort again in some way (an insertion sort may be called for) and, as you say, lather, rinse and repeat.
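That merge loop can be sketched in Python (rather than Perl) with `heapq.merge`, which does the bookkeeping described above: it keeps one pending line per handle and pulls the next line from whichever file supplied the one just consumed. This is a sketch under the assumption that each file is already sorted by a leading timestamp field; the comma-separated layout here is hypothetical.

```python
import heapq
import io

def merge_streams(files, key=lambda line: line.split(",", 1)[0]):
    """Yield lines from all open file handles in timestamp order.

    Assumes each file is individually sorted by its timestamp,
    taken here (hypothetically) to be the first comma-separated field.
    heapq.merge holds only one pending line per file in memory,
    replacing each consumed line with the next from the same handle.
    """
    return heapq.merge(*files, key=key)

# Demonstration with in-memory "files"; real code would use open().
a = io.StringIO("2024-01-01,alpha\n2024-01-03,gamma\n")
b = io.StringIO("2024-01-02,beta\n2024-01-04,delta\n")
merged = [line.strip() for line in merge_streams([a, b])]
# merged is now in global timestamp order across both sources
```

You would feed each merged line to your statistics update instead of collecting it into a list, so memory stays bounded by the number of open handles rather than the total data size.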
You should give a more specific description of your data and the statistics you want. With that we could give better advice on the streaming you want. I see that you have since done so, but I'm sorry to admit that I still don't understand how you need to parse the times out of it.
After Compline,
Zaxo