in reply to Re: best way to fast write to large number of files
in thread best way to fast write to large number of files

Dear Laurent_R

Thanks for your reply, it has very interesting ideas (I read your reply more than 5 times ^_^). And yes, what you suggested was true.

What I have in mind now (going with the first idea) is to load the logs into a DB (we will use MySQL - thanks sundialsvc4 for the idea of using a DB), then run a GROUP BY query and write the results to the specific files; this will reduce the number of open and close operations.
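A minimal sketch of that pipeline, using Python's sqlite3 purely as a stand-in for MySQL (the `logs` table and its columns are hypothetical, just to illustrate the idea): load all records into the DB, then read them back ordered/grouped by target file, so each output file is opened only once.

```python
import os
import sqlite3
import tempfile

# sqlite3 stands in for MySQL here; table/column names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (target TEXT, line TEXT)")

# Records arrive interleaved across many target files.
rows = [("a.log", "first"), ("b.log", "hello"), ("a.log", "second")]
conn.executemany("INSERT INTO logs VALUES (?, ?)", rows)

outdir = tempfile.mkdtemp()

# Reading in target order groups each file's records together,
# so every output file is opened and closed exactly once.
cur = conn.execute("SELECT target, line FROM logs ORDER BY target")
current, fh = None, None
for target, line in cur:
    if target != current:          # start of a new group: switch files
        if fh:
            fh.close()
        fh = open(os.path.join(outdir, target), "a")
        current = target
    fh.write(line + "\n")
if fh:
    fh.close()
```

With real MySQL the same shape applies: an `ORDER BY` (or `GROUP BY`) on the target-file column turns many scattered writes into one sequential pass per file.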

I think with the right schedule we can handle all the files without any delay.

We will try the second idea too, because it is also a good approach to resolving the issue.

We will compare the two ideas and of course choose the best ;)

I will update shortly.

BR

Hosen


Re^3: best way to fast write to large number of files
by Laurent_R (Canon) on Jun 24, 2014 at 06:19 UTC
    Hi Hosen, I suspect that the second solution will be significantly faster, because your overall process (write once, read once each record) only marginally benefits from the advantages of a database, while appending data to 100 files is very fast. But I'll be very interested to read your update on this.