in reply to Fast reading and processing from a text file - Perl vs. FORTRAN
Re: Re: Fast reading and processing from a text file - Perl vs. FORTRAN
by ozgurp (Beadle) on May 24, 2003 at 14:06 UTC
by Limbic~Region (Chancellor) on May 24, 2003 at 20:10 UTC
Unfortunately I am not a Perl guru myself, so I can only offer some hints. Typically, a better algorithm is what will make your code run faster. Sometimes you can trade memory for time by caching (see Memoize by Dominus). When you want to evaluate how a tweak has affected performance, look into Benchmark. The things to remember are to run many iterations to smooth out flukes, to vary your data since code behaves differently depending on its input, and to test on a system at rest so other running programs don't influence the results. There is also Devel::DProf for profiling.
Let me point out a few things in your code that may or may not help you.
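A minimal sketch of those two suggestions, with made-up routine names and data rather than anything from the original program: cache a repeated expensive lookup with Memoize, then compare the cached and uncached versions with Benchmark's cmpthese.

```perl
use strict;
use warnings;
use Memoize;
use Benchmark qw(cmpthese);

# Stand-in for an expensive computation that gets called
# over and over with the same arguments (hypothetical).
sub slow_lookup {
    my ($key) = @_;
    my $sum = 0;
    $sum += $_ for 1 .. 10_000;
    return $sum + length $key;
}

# Memoized copy of the same routine: repeated calls with the
# same argument are answered from a cache instead of recomputed.
sub cached_lookup { slow_lookup(@_) }
memoize('cached_lookup');

my @keys = ('SYM', 'SYMCOM', 'NODE') x 100;   # lots of repeated arguments

cmpthese( -2, {                               # run each for at least 2 CPU seconds
    plain    => sub { slow_lookup($_)   for @keys },
    memoized => sub { cached_lookup($_) for @keys },
});
```

Whether memoization pays off depends entirely on how often the same arguments recur, which is exactly why running the comparison on data shaped like your own matters.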
by runrig (Abbot) on May 24, 2003 at 20:43 UTC
Regexes tend to do a lot better on fixed strings, and especially on strings which are anchored to the beginning, so that is what I might try first. Or you might try to combine your key strings into a single pattern: for one thing, looking for SYM and then SYMCOM is redundant and a waste of time, unless you want a '\b' after the strings. You might try the study function before running the above regexes; it may or may not help. Try using the Benchmark module to see what works best on your data. Update: Looking again, it's probably the next section that needs the most help...
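A hedged sketch of the anchored, combined-alternation idea, using invented key names (SYMCOM, SYM, NODE) and sample records rather than the poster's real data:

```perl
use strict;
use warnings;

while (my $line = <DATA>) {
    # One anchored alternation does the work of several separate matches.
    # /^SYM/ alone would also match SYMCOM, so the \b (and listing the
    # longer key first) only matters if the keys must be whole words.
    if ($line =~ /^(?:SYMCOM|SYM|NODE)\b/) {
        print "matched: $line";
    }
}

__DATA__
SYMCOM 1 2 3
NODE   4 5 6
OTHER  7 8 9
```

Anchoring with ^ lets the regex engine give up on a non-matching line almost immediately instead of scanning the whole string, which is usually where the win comes from on fixed-format records like these.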