Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:
Hi monks,

My input file has lakhs (hundreds of thousands) of records, with the fields of each record separated by semicolons. I need to parse each record, split out the fields, do some calculations, and, depending on which condition a record satisfies, write the result to one of several output files.

In addition, for records that satisfy a certain condition I need to aggregate some of the fields. For this I build up a hash while reading, and at the end of the file I do the aggregation and write it out.

This process takes about 3 hours for 10 lakh (one million) records, so I need to optimize it. I can't tell where the time goes: is it reading the terabyte-sized file line by line, or is it holding the aggregated content in memory (the hash) and writing it to a file at the end?
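To make the question concrete, here is a stripped-down sketch of the kind of loop I mean. The file names, field positions, and conditions below are placeholders for illustration, not my actual job:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder names throughout: input.txt, the field positions,
# the conditions, and the output file names are illustrative only.
open my $in,    '<', 'input.txt'    or die "input.txt: $!";
open my $out_a, '>', 'result_a.txt' or die "result_a.txt: $!";
open my $out_b, '>', 'result_b.txt' or die "result_b.txt: $!";

my %agg;    # per-key totals, accumulated while reading

while (my $line = <$in>) {
    chomp $line;
    my @f = split /;/, $line;      # fields are semicolon-separated

    my $value = $f[2] * $f[3];     # "some calculation" (placeholder)

    # route the record to one of several output files
    if ($f[1] eq 'A') {
        print {$out_a} "$f[0];$value\n";
    }
    else {
        print {$out_b} "$f[0];$value\n";
    }

    # aggregate selected fields for records matching some condition
    $agg{ $f[0] } += $value if $f[4] > 0;
}

# at end of file, write out the aggregated results
open my $out_agg, '>', 'aggregate.txt' or die "aggregate.txt: $!";
print {$out_agg} "$_;$agg{$_}\n" for sort keys %agg;
```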
Re: optimization in file processing
by jethro (Monsignor) on Jul 07, 2011 at 12:50 UTC
  by moritz (Cardinal) on Jul 08, 2011 at 10:18 UTC
  by jethro (Monsignor) on Jul 08, 2011 at 11:57 UTC
Re: optimization in file processing
by moritz (Cardinal) on Jul 07, 2011 at 12:08 UTC
Re: optimization in file processing
by BrowserUk (Patriarch) on Jul 08, 2011 at 12:31 UTC