in reply to Re: Refactoring a large script
in thread Refactoring a large script
I guess what I'm looking for is more efficiency than refactoring. Along those lines, I've already pulled much of the redundant code into subs. So it's more the overhead and speed that I want to tackle. And thanks for the book ref. I'll give it a look-see.
Thanks,
Matt
Re^3: Refactoring a large script
by xdg (Monsignor) on Jan 18, 2007 at 18:28 UTC
Ah. Then here are some more potentially useful links for you:

-xdg

Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
by mdunnbass (Monk) on Jan 19, 2007 at 18:27 UTC
I've finished debugging the vast majority of the script at this point, and now I finally have it at a point where it will run to completion without getting stuck in a loop or encountering an unrecoverable error of some kind. So, I just ran it under -d:DProf, and I am pleased with the output:

Obviously, a 20-minute run-time is far longer than I would have liked, but 96% of that run-time was in 2 subs. The SEARCHFASTA sub searches a file of DNA sequences (in this run, about 0.5 GB), looking for every occurrence of multiple different strings and saving them to a large hash. So, in that case, the time is going to be dependent on the size of the file being searched, and will be large no matter what I do.

The WEED sub, which is taking 65% of the time, is the one I have to work on then. Its purpose is to organize the results of the SEARCHFASTA sub based on certain criteria, and that is taking forever. So, I'll have to see what I can do to tighten that up. I may post that to the Seeking Wisdom node in a bit.

Thanks to everyone for their help and advice. I am taking much of it to heart, and will be keeping a lot of your tips in mind from now on.
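For anyone following along, a Devel::DProf run of this sort is produced with something like the following; the script name here is a placeholder:

```
# Run the script under the Devel::DProf profiler; timing data goes to tmon.out
perl -d:DProf weed_fasta.pl

# Summarise the profile -- by default dprofpp lists the subs that
# consumed the most wall-clock time (e.g. SEARCHFASTA and WEED here)
dprofpp tmon.out
```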
Thanks
by BrowserUk (Patriarch) on Jan 20, 2007 at 04:30 UTC
Here's my attempt at refactoring your WEED() sub. As I don't have data, this has been done blind, i.e. without the tests you would normally use to ensure that it continues to function correctly and that changes aimed at improving performance actually achieve that goal. So, I'll step through the changes I've made, so that you can apply and test them as you go. It should also give you some ideas for refactoring the other subs that need it.

The first thing I did was some trivial reformatting of the code to make the indentation consistent, add some horizontal whitespace, and reduce the vertical whitespace. These changes allow me to get a clearer overview of the structure of the sub and (I find) make it easier to read. Along the way, I've also removed the comments, as I find them a distraction. Obviously, you don't need to make these changes.

One anomaly that jumped out at me as I manually reformatted the code I got from your scratchpad was this line:
As there is no other reference to a variable named $matchesfasta_id in the file, and there is this line a couple of lines above it:
I've assumed that this is a typo/transcription error and should read:
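The lines themselves aren't reproduced in this excerpt, but for illustration the kind of slip being described is the difference between the two expressions below; the push context and the sequence ID are assumptions, not the original code:

```perl
use strict;
use warnings;

my %matches;                       # the hash of search results
my $fasta_id = 'seq_001';          # hypothetical sequence ID

# Presumably intended: push onto the array stored in this hash element
push @{ $matches{$fasta_id} }, 42;

# The suspected typo: with the braces dropped, $matchesfasta_id is an
# unrelated scalar, which "use strict" would refuse to compile
# push @{ $matchesfasta_id }, 42;
```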
I've posted my starting point, after the reformatting, below. The first things I did were simple clean-ups of various things in the code, starting from the top:
Now I started looking at the function of the code, and the first thing I noticed was that you sort the keys of %matches in the for loop at A below, and again each time around the nested inner loop at B.
Since you are not modifying %matches between these two points (or at all), it means that you are sorting and re-sorting these keys many times. It's not easy to tell, in the absence of test data, how big this hash is or how many times these loops iterate, but given the sizes of the data files mentioned in the OP and the timing information you've supplied above, it appears that they may be substantial. I could well see this needless re-sorting being the source of a substantial amount of the runtime of the program. Eliminating this duplication is easy and should produce a substantial performance benefit. Replacing the above lines with these should do the trick:
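In outline (the loop bodies and all names other than %matches are placeholders, since the original code isn't shown in this excerpt), the change is to sort once, up front, and reuse the result in both loops:

```perl
my %matches = ( seq_001 => [], seq_002 => [] );   # placeholder data

# Sort the keys of %matches once ...
my @sorted_ids = sort keys %matches;

# ... then reuse the sorted list instead of re-sorting each time
for my $fasta_id ( @sorted_ids ) {        # was point A
    for my $other_id ( @sorted_ids ) {    # was point B
        # ... the original comparison/weeding work goes here ...
    }
}
```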
The next thing I looked at was the section of code where you 'band-pass' filter the matches into a temporary array. You then conditionally copy that temporary array into your results array, or discard the entire superset if nothing made it through the filter:
Whilst there is nothing inherently wrong with this, Perl has a general-purpose array filtering mechanism, grep, intended for exactly this purpose, and it is usually considerably quicker than an explicit loop:
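In rough outline (the variable names and pass-band limits below are placeholders, since the original loop isn't shown in this excerpt), the filter collapses to something like:

```perl
# Placeholder data and pass-band limits
my @candidates = ( 3, 12, 47, 980, 1500 );
my ( $low_limit, $high_limit ) = ( 10, 1000 );
my @results;

# Keep only the values inside the pass band; only touch the results
# array if anything survived the filter, mirroring the original's
# "discard the whole set" behaviour
my @passed = grep { $_ >= $low_limit && $_ <= $high_limit } @candidates;
push @results, @passed if @passed;
```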
There is a possible caveat with this change! Your explicit loop does allow you to short-circuit the loop at the high-pass limit, which isn't possible with grep. However, as shown above, using grep does allow you to avoid creating, and later copying, the temporary array. This, combined with grep's inherently better performance, may outweigh the benefit of that short-circuiting. Or it may not. The only way to tell will be to make the change and benchmark. If the array being filtered is sufficiently large, and the pass band sufficiently narrow and low down in the range, for the short-circuit to be beneficial, then it would be better to use a binary search algorithm to discover the low and high range limits and then copy the values across to the results set using an array slice. I haven't coded that as I have no way to make the test. Finally (for now), you use this construct:
To empty arrays in several places. Whilst there is nothing wrong with this, I think it is clearer, and may be slightly quicker, to use
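Neither construct is reproduced in this excerpt; the usual candidates in Perl are along these lines (which was the original and which the suggested replacement is an assumption here):

```perl
my @results = ( 1 .. 10 );

@results = ();       # assign an empty list
# or
$#results = -1;      # reset the array's last index
# or
undef @results;      # undefine the array entirely
```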
and I've made those changes also. The final code is the second of the two code blocks below. There are various other things that look suspect.
by mdunnbass (Monk) on Jan 21, 2007 at 04:12 UTC
by mdunnbass (Monk) on Jan 23, 2007 at 15:50 UTC
by BrowserUk (Patriarch) on Jan 23, 2007 at 15:57 UTC