in reply to Re^2: Refactoring a large script
in thread Refactoring a large script
Ah. Then here are some more potentially useful links for you:
-xdg
Code written by xdg and posted on PerlMonks is public domain. It is provided as is with no warranties, express or implied, of any kind. Posted code may not have been tested. Use of posted code is at your own risk.
Re^4: Refactoring a large script
by mdunnbass (Monk) on Jan 19, 2007 at 18:27 UTC
I have finished debugging the vast majority of the script at this point, and now I finally have it at a point where it will run to completion without getting stuck in a loop or encountering an unrecoverable error of some kind. So, I just ran it under -d:DProf, and I am pleased with the output:

Obviously, a 20-minute run-time is far longer than I would have liked, but 96% of that run-time was in 2 subs. The SEARCHFASTA sub searches a file of DNA sequences (in this run, about 0.5 GB) for every occurrence of multiple different strings, saving the matches to a large hash. So, in that case, the run-time is dependent on the size of the file being searched, and will be large no matter what I do.

The WEED sub, which is taking 65% of the time, is the one I have to work on, then. Its purpose is to organize the results of the SEARCHFASTA sub based on certain criteria, and that is taking it forever. So, I'll have to see what I can do to tighten that up. I may post it to Seekers of Perl Wisdom in a bit.

Thanks to everyone for their help and advice. I am taking much of it to heart, and will be keeping a lot of your tips in mind from now on.
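(The profiler output itself hasn't survived in this copy. For reference, the usual Devel::DProf workflow, not quoted from the post, is:)

    perl -d:DProf yourscript.pl    # writes profile data to tmon.out
    dprofpp tmon.out               # prints the time spent per subroutine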
Thanks
by BrowserUk (Patriarch) on Jan 20, 2007 at 04:30 UTC
Here's my attempt at refactoring your WEED() sub. As I don't have data, this has been done blind, i.e. without the tests you would normally use to ensure that it continues to function correctly and that changes aimed at improving performance actually achieve that goal. So, I'll step through the changes I've made, so that you can apply and test them as you go. It should also give you some ideas for refactoring the other subs that need it.

The first thing I did was some trivial reformatting of the code: making the indentation consistent, adding some horizontal whitespace and reducing the vertical whitespace. These changes allow me to get a clearer overview of the structure of the sub and (I find) make it easier to read. Along the way, I've also removed the comments, as I find them a distraction. Obviously, you don't need to make these changes.

One anomaly that jumped out at me as I manually reformatted the code I got from your scratchpad was this line:
As there is no other reference to a variable named $matchesfasta_id anywhere in the file, and there is a line a couple of lines above it reading:
I've assumed that this is a typo/transcription error and should read:
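(The quoted lines themselves were lost in this copy. Based on the variable names discussed here, and confirmed as a copy-and-paste error in the reply below, the assumed correction is simply the hash lookup; the surrounding statement is hypothetical:)

    # Hypothetical context -- only the variable names come from the thread.
    # push @hits, $matchesfasta_id;      # the anomalous line: no such variable exists elsewhere
    push @hits, $matches{$fasta_id};     # the assumed intended hash lookup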
I've posted my starting point, after the reformatting, below. The first things I did were simple clean-ups of various things in the code, starting from the top:
Now I started looking at the function of the code, and the first thing I noticed is that you sort the keys of %matches in the for loop at A below, and again each time around the nested inner loop at B.
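(The labelled code was also lost in this copy; the shape of the problem, with stand-in loop bodies, was roughly:)

    for my $fasta_id ( sort keys %matches ) {         # A: sorts once per pass of the outer loop
        # ...
        for my $other_id ( sort keys %matches ) {     # B: re-sorts the same, unchanged keys
            # ...                                     #    on every iteration of the outer loop
        }
    }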
Since you are not modifying %matches between these two points (or at all), you are sorting and re-sorting these keys many times. It's not easy to tell, in the absence of test data, how big this hash is or how many times these loops iterate, but given the sizes of the data files mentioned in the OP and the timing information you've supplied above, it appears that they may be substantial. I could well see this needless re-sorting being the source of a substantial amount of the runtime of the program. Eliminating the duplication is easy and should produce a substantial performance benefit. Replacing the above lines with these should do the trick:
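(Judging by the follow-up below, which mentions a @SortedKeys array, the replacement was along these lines:)

    my @SortedKeys = sort keys %matches;    # sort once, up front

    for my $fasta_id ( @SortedKeys ) {
        # ...
        for my $other_id ( @SortedKeys ) {  # reuse the already-sorted list
            # ...
        }
    }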
The next thing I looked at was the section of code where you 'band pass' filter the matches into a temporary array. You then conditionally copy that temporary array into your results array, or discard the entire superset if nothing made it through the filter:
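(The original loop isn't preserved; an illustrative shape, with hypothetical names and limits:)

    my @temp;
    for my $value ( @values ) {
        last if $value > $high;                   # short circuit past the high limit
        push @temp, $value if $value >= $low;     # keep values inside the band
    }
    push @results, [ @temp ] if @temp;            # discard the whole set if nothing passed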
Whilst there is nothing inherently wrong with this, Perl has a general-purpose array filtering mechanism, grep, designed specifically for this purpose, and it is usually considerably quicker than an explicit loop:
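(Again illustrative only, using the same hypothetical names as above:)

    # grep builds the filtered list directly: no explicit loop, and no
    # element-by-element copy out of a temporary array
    if ( my @filtered = grep { $_ >= $low && $_ <= $high } @values ) {
        push @results, \@filtered;
    }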
There is a possible caveat with this change! Your explicit loop does allow you to short-circuit at the high-pass limit, which isn't possible with grep. However, as shown above, using grep does allow you to avoid the creation, and later copying, of the temporary array. This, combined with grep's inherently better performance, may outweigh the benefit of that short-circuiting. Or it may not. The only way to tell will be to make the change and benchmark. If the array being filtered is large enough, and the pass band narrow enough and low enough in the range, for the short circuit to be beneficial, then it would be better to use a binary search to discover the low and high range limits and then copy the values across to the results set using an array slice. I haven't coded that as I have no way to test it.

Finally (for now), there is the construct you use
to empty arrays in several places. Whilst there is nothing wrong with this, I think it is clearer, and may be slightly quicker, to use undef on the array:
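(The snippet is missing from this copy; the follow-up reply identifies it as the undef form:)

    undef @temp;    # empties the array and releases its storage in one step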
I've made those changes also. The final code is the second of the two code blocks below. There are various other things that look suspect.
by mdunnbass (Monk) on Jan 21, 2007 at 04:12 UTC
Re: $matchesfasta vs. $matches{$fasta} - That was a copy-and-paste problem on my part, along with the 'enigmatic' $h. I was originally using $h as the variable that became $fasta_id, but I replaced it because I realized no one else would have a clue what it was for. So, that's two mysteries resolved.

As for the sorting issue you brought up next: I had no idea that repeated sorting was so expensive. Thanks for clearing that up for me.
Yeah, these hashes get big. The input into this sub is a HoAoA. In a typical run of the program, that HoAoA has about 22,000 key/value pairs; the upper-level arrays have only around 3-5 elements each, but the lower-level arrays have several hundred elements. So, it's safe to say the hash is rather large.
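(A sketch of that shape, with invented key names and values:)

    my %matches = (
        'seq_00001' => [                   # one of ~22,000 keys
            [ 1042, 5833, 9127 ],          # 3-5 arrays per key...
            [  211, 7456, 8890 ],          # ...each holding several hundred
        ],                                 #    elements in a real run
    );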
Thanks for the grep idea! I hadn't even thought of that. As for the caveat that it may not beat the early loop exit, I'll try both and benchmark them, as you suggested. That said, I think many of your other changes may help enough that it won't be necessary, but we'll see.
Actually, I was deliberately trying to avoid the undef specifically. The reason being, I am still slightly confused about how to remove array elements that are undef. When I was originally using the undef line you suggested, I would get output through Data::Dumper showing that some arrays consisted only of undef, that significant numbers of others had undef as the last element, etc. So, the roundabout way I settled on was the best I could jerry-rig with my minimal understanding of exists/undef at the time. If you (or anyone else) can think of an efficient way to excise any undef elements from an array, and additionally remove array elements that consist of an array containing only a single element (undef), I would love it (see the sketch at the end of this post).

Finally, the global variables that looked suspect. I mentioned $h earlier. $num is a user-defined number that winds up being the number of elements in the upper-level arrays in the HoAoA; in the output of the sub, an HoAoHoA, it is the number of key/value pairs in the subhash. The @fastarray array is essentially identical to the @SortedKeys array you created for me. The reason I kept each of those 2 variables global was that essentially each of my 30-ish subs uses them as-is, so I felt it would be better to just keep them global than pass and dereference them in every sub. Now, considering that @fastarray starts off with around 22,000 elements and eventually only gets reduced to about 2,000 elements, I don't know which would be faster, passing it constantly or keeping it global. Thoughts?

Anyway, this response is getting rambling, as I wrote it in chunks over the course of a day, so I'll wrap it up here. Just let me say again, thank you so much for your help!

Matt

P.S. - I am replacing the huge block of code on my scratchpad with a heavily commented version of this particular sub, though without the changes you have mentioned here. I'll be trying those out tomorrow.
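(An illustrative sketch of one grep-based way to do the excision asked about above; not code from the thread, and @array is a stand-in:)

    # Drop plain undef elements
    @array = grep { defined } @array;

    # Drop elements that are arrayrefs holding only a single undef
    @array = grep {
        !( ref $_ eq 'ARRAY' && @$_ == 1 && !defined $_->[0] )
    } @array;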
by mdunnbass (Monk) on Jan 23, 2007 at 15:50 UTC
And here's the output with the modifications you suggested:
So, I am guessing that the grep wound up being less efficient than the 'band pass' filter I had set up. Weird. That said, I have not yet tried a combination of our two methods. I have a feeling most of your comments are going to wind up helping, but that in this case the nature of the data just lends itself better to the filter approach. I hope that I can still shave significant time off this sub, so I'll post it to the Seekers of Perl Wisdom forum for additional help. Thank you so much, though, for taking the time to try and help me fix this. I really appreciate it.

Matt
by BrowserUk (Patriarch) on Jan 23, 2007 at 15:57 UTC
by mdunnbass (Monk) on Jan 23, 2007 at 16:04 UTC