tsk1979 has asked for the wisdom of the Perl Monks concerning the following question:
All well and good, it's working fine. Now I have to solve another problem: automatic flagging of performance degradation (or a sudden improvement). I will have an existing CSV file in the same format. I will dump a new CSV file and flag cases where the difference in the numbers exceeds the tolerance (2 units and 20%). The old CSV file may not contain some of the new testcases, so the diff log should report the statistics of newly added testcases as well as flag which testcases changed beyond the tolerance limits.

I am trying this approach: create a hash for each performance criterion (e.g. Stage3) and, inside that, a hash for each testcase, i.e. nested hashes. That way the code stays generic and I do not have to worry about new stages coming in the future! However, my hash skills are a little weak and I am getting stumped. I have googled for code on nested hashes (hash of hashes) but am still stuck; any tips will be appreciated.

The CSV looks like this:

TestcaseName, Stage1Mem, Stage1Time, Stage2Mem, Stage2Time, ...
Test1,44,45,43,45,...
Test2,7,2334,45,34,...
...
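A minimal sketch of one way to structure this, assuming the header layout shown above; the file names baseline.csv and latest.csv, and the exact flagging rule (change exceeds both 2 units and 20%), are my assumptions and should be adjusted to taste:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Build %results: criterion -> testcase -> value, from a CSV whose first
# row is the header (TestcaseName, Stage1Mem, Stage1Time, ...).
sub read_csv {
    my ($file) = @_;
    open my $fh, '<', $file or die "Cannot open $file: $!";
    chomp( my $header = <$fh> );
    my ( undef, @criteria ) = split /\s*,\s*/, $header;   # drop "TestcaseName"
    my %results;
    while ( my $line = <$fh> ) {
        chomp $line;
        next unless $line =~ /\S/;
        my ( $testcase, @values ) = split /\s*,\s*/, $line;
        # nested hash: criterion -> testcase -> value
        $results{ $criteria[$_] }{$testcase} = $values[$_] for 0 .. $#values;
    }
    return \%results;
}

my $old = read_csv('baseline.csv');   # assumed file names
my $new = read_csv('latest.csv');

for my $crit ( sort keys %$new ) {
    for my $tc ( sort keys %{ $new->{$crit} } ) {
        my $new_val = $new->{$crit}{$tc};
        my $old_val = $old->{$crit}{$tc};
        if ( !defined $old_val ) {
            # testcase (or stage) not present in the baseline
            print "NEW  $tc $crit = $new_val (no baseline)\n";
            next;
        }
        my $delta = $new_val - $old_val;
        my $pct   = $old_val ? 100 * $delta / $old_val : 0;
        # flag when the change exceeds both 2 units and 20% (assumed rule)
        if ( abs($delta) > 2 && abs($pct) > 20 ) {
            printf "FLAG %s %s: %s -> %s (%+.1f%%)\n",
                $tc, $crit, $old_val, $new_val, $pct;
        }
    }
}
```

Because the stage names come straight from the header row, nothing needs to change when new stages appear in future CSV files; they simply become new keys in the outer hash.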
Replies are listed 'Best First'.
Re: CSV file reading and comparison
by tsk1979 (Scribe) on Feb 27, 2008 at 10:39 UTC
by toolic (Bishop) on Feb 27, 2008 at 14:06 UTC
by tsk1979 (Scribe) on Mar 03, 2008 at 08:38 UTC

Re: CSV file reading and comparison
by goibhniu (Hermit) on Feb 28, 2008 at 17:14 UTC