in reply to Some code optimization
Reserving the right to change my mind if and when you post actual code, the simplest way of optimising your loop would (probably) be to in-line the bodies of the two functions in the loop.
Function call overhead is relatively high in Perl. If the function itself does very little--as appears to be the case from your description--then the overhead of setting up and tearing down the "stack frame" can be higher than that for the code inside. By in-lining the code, you avoid that overhead, and for small functions, there is little downside. Especially if this is the only place those subs are called.
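A minimal, made-up benchmark (nothing to do with the OP's actual subs) comparing a trivial sub call against the same expression in-lined:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# A deliberately tiny sub: the work inside is cheap relative to the
# cost of the call itself.
sub tiny { my ($n, $d) = @_; return $n % $d }

my ($num, $div) = (1_234_567, 97);

cmpthese( -2, {
    called  => sub { my $r; $r = tiny($num, $div) for 1 .. 1_000 },
    inlined => sub { my $r; $r = $num % $div      for 1 .. 1_000 },
});
```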
Re^2: Some code optimization
by almut (Canon) on Jun 17, 2010 at 16:34 UTC
"Function call overhead is relatively high in Perl."

I could be wrong, but if I understand the OP correctly, the function needs on the order of several seconds for just 50 calls. Perl's function call overhead may be high, but it shouldn't be that high...
by BrowserUk (Patriarch) on Jun 17, 2010 at 17:29 UTC
"if I understand the OP correctly,"

Hence the reason for reserving the right to change my mind. I couldn't make head nor tail of this:

"I get 1 seconds per 50 iterations. Obviously better than the 28 seconds, but still significantly higher than the 9 seconds"

It also doesn't ring true that

"The function itself is quite simple - it gets 3 scalars (one of which is a field of some object), checks a couple of if's on the values and some basic math (including modulo %), and returns an array with a couple of hashes, each with two numerical fields. That's it."

would require 28 seconds per 50 iterations. A tentative attempt to match the description yields:
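Something along these lines (pure guesswork from the description: three scalars in, a couple of ifs, some modulo math, a couple of small hashes out, timed over 50 calls):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(time);

# Guesswork only: three scalars in, a couple of ifs, some modulo
# arithmetic, and a list of hashes (two numeric fields each) out.
sub guess_sub {
    my ($start, $len, $chr_len) = @_;
    $start %= $chr_len if $start >= $chr_len;
    my $end = ($start + $len) % $chr_len;
    if ($end < $start) {                       # wraps around the origin
        return ( { FROM => $start, TO => $chr_len - 1 },
                 { FROM => 0,      TO => $end         } );
    }
    return ( { FROM => $start, TO => $end } );
}

my $t0 = time;
for (1 .. 50) {
    my @ranges = guess_sub( int rand 1e6, int rand 5e3, 1e6 );
}
printf "50 calls: %.6f seconds\n", time - $t0;
```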
I appreciate my sub is guesswork and nothing like the real thing, but given the description, it is hard to guess what that description is hiding that would cause the sub to take a million times longer. Of course, it'll turn out that he's tallying the national debt. Or crowdsourcing the math to Twitter.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re^2: Some code optimization
by roibrodo (Sexton) on Jun 18, 2010 at 07:10 UTC
- all commented parts commented (i.e. full loop): total loop time: 63.6951129436493 seconds
- with the second `next` uncommented: total loop time: 15.1547348499298 seconds
- with the second `next` uncommented and `return [];` in sub gene_to_legal_range also uncommented: total loop time: 6.83389496803284 seconds
- with the first `next` uncommented: total loop time: 4.58600687980652 seconds
by davido (Cardinal) on Jun 18, 2010 at 08:03 UTC
In your initial question you stated, "The function itself is quite simple - it gets 3 scalars (one of which is a field of some object), checks a couple of if's on the values and some basic math (including modulo %), and returns an array with a couple of hashes, each with two numerical fields. That's it."

But the code you presented here shows a quagmire of complexity: loops within loops, greps within loops (a grep is another form of loop), sorting within loops (which implies still more loops), and so on.

Having first read your initial question (with no code posted), and then later reading the code you posted, I couldn't believe my eyes. My first thought was, "This must be a practical joke. We're being suckered." What you're calling "checking a couple of ifs and some basic math" is actually nested loops with sorting and grepping inside of them. From an efficiency standpoint, it couldn't get much worse than that.

Let's imagine a really simple case in this code fragment:
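(A purely illustrative stand-in, not taken from the posted code.)

```perl
my @genes = (1 .. 10_000);              # stand-in data (the real n is ~70,000)
sub process { my ($g) = @_; $g % 2 }    # trivial constant-time work per element

# one pass over the n elements: O(n)
for my $gene (@genes) {
    process($gene);
}
```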
That loop executes in O(n) time. Now consider this loop:
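(Again a stand-in, building on the same @genes as above.)

```perl
# for every gene, scan all n genes again: O(n^2) iterations
for my $gene (@genes) {
    for my $other (@genes) {
        my $same_parity = ($gene % 2) == ($other % 2);   # trivial work, run n*n times
    }
}
```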
That chunk of code executes in O(n^2) time: for n elements there are n^2 iterations. If your code stopped there, you would still be questioning why it's taking so long. But it doesn't. I didn't wander through every level of looping, but I think your runtime is going quadratic, which is to say, inefficient.

I don't know how you could equate multiple layers of nested loops with "The function itself is quite simple..." These are mutually exclusive conditions.

It could be there is a more efficient algorithm out there to solve the problem you're tackling. Or it could be that you have one of those problems for which there is no efficient solution. If that's the case, thank goodness you've got a computer to do it for you. ;)

Dave
by roibrodo (Sexton) on Jun 18, 2010 at 08:32 UTC
Let's look at a single iteration and define n as the number of genes. A gene is a start point + length, i.e. n is the number of elements pushed in the creation of $simulation_h (n = 70,000 in the example).

The main loop is indeed a nested loop, but it simply gets a single gene at a time from the hash, hence this double loop itself takes O(n) (before doing anything).

Now, let's look at what is done inside the loop. We call two "problematic" subroutines, which in turn call some other subroutines. The "loops" you see in the other subroutines deal with the case where a gene is split (its start is too close to the "end" of the circular chromosome, so it has to be represented by two ranges instead of one). A range can therefore be either a singleton (one array element) or "split" (two array elements). So yes, there are loops, but they certainly don't add another factor of n to the complexity, since they are bounded by a small constant: each loop runs for a maximum of 2 iterations, so even the double loop that checks a pair of ranges against each other (each range "split" in the worst-case scenario) runs for a maximum of 4 iterations.

To conclude, the complexity is actually O(n) per iteration (again, n is the number of genes).
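To illustrate the bounded pairwise check (a sketch only, not the actual subroutine; it assumes the {FROM, TO} hash layout mentioned elsewhere in the thread):

```perl
# Sketch: a "range" is an arrayref holding 1 segment (singleton) or
# 2 segments (split), each segment a { FROM => ..., TO => ... } hash.
# The double loop therefore runs at most 2 x 2 = 4 times per pair.
sub ranges_overlap {
    my ($range_a, $range_b) = @_;
    for my $seg_a (@$range_a) {
        for my $seg_b (@$range_b) {
            return 1
                if $seg_a->{FROM} <= $seg_b->{TO}
                && $seg_b->{FROM} <= $seg_a->{TO};
        }
    }
    return 0;
}

my $split     = [ { FROM => 900, TO => 999 }, { FROM => 0, TO => 40 } ];
my $singleton = [ { FROM => 30,  TO => 120 } ];
print ranges_overlap($split, $singleton) ? "overlap\n" : "no overlap\n";
```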
by davido (Cardinal) on Jun 18, 2010 at 08:52 UTC
by roibrodo (Sexton) on Jun 18, 2010 at 09:11 UTC
by graff (Chancellor) on Jun 18, 2010 at 08:51 UTC
When I ran the original code on my macbook, it reported a total loop time of 95 sec. After making the few changes mentioned above (remove a redundant function call, don't push undef onto an array, remove the grep that filters out the undef items that aren't being pushed now), the total loop time dropped to 80 sec. If I also move the substance of the "intersect_simple_ranges()" function into the one block where that function is called (i.e. eliminate that one function call), it drops to 78 sec. Changing the {FROM, TO} AoH to a (less legible) AoA trimmed it to 75.

Beyond that, I don't see anything obvious, but I haven't taken the time to grok the overall algorithm (let alone comprehend the ultimate goal). There might be easier ways to accomplish whatever you're doing here, e.g. by perhaps using a simpler data structure.

UPDATE: Something to consider is whether the required sorting can be handled somewhere other than the inner-most loop, if at all possible. That is, can the algorithm be made to work in such a way that the data is sorted once before going into a given loop, thereby making it unnecessary to do repeated sorts within the loop?
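A generic sketch of that last idea (hypothetical data and helper names, not the posted code): hoist the sort out of the loop and reuse the result.

```perl
my @genes  = (1 .. 1_000);                                    # stand-in data
my @ranges = map { [ int rand 1e6, int rand 1e6 ] } 1 .. 50;  # stand-in [from, to] pairs
sub check { my ($gene, $sorted) = @_; return }                # hypothetical constant-time check

# Before: sorting inside the loop repeats the O(k log k) work on every pass.
for my $gene (@genes) {
    my @sorted = sort { $a->[0] <=> $b->[0] } @ranges;
    check($gene, \@sorted);
}

# After: sort once, then each pass is just the check itself.
my @sorted_once = sort { $a->[0] <=> $b->[0] } @ranges;
for my $gene (@genes) {
    check($gene, \@sorted_once);
}
```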
by roibrodo (Sexton) on Jun 18, 2010 at 09:24 UTC
Thank you, I will look into it. But I think the problem starts with the first problematic function (as the name suggests :)) - gene_to_legal_range. When I `next` just before it, the loop runs 6 times faster than when I `next` just after it (~4 seconds compared to ~27 seconds for 50 iterations). This subroutine really looks simple to me, so I don't see why it has to take so much time (subroutine overhead?!).

P.S. I'm now running on my home PC and the benchmarking results are a little different (but the relative differences are similar).

UPDATE: Almost always the sort is done on an array with a single element (there are cases where there are more). I even tried removing the sort altogether, just to see the effect on running time - there is no change. But, again, I suggest we start with the first, simpler subroutine. Is it really just Perl subroutine overhead? Is there anything I can do about it?
by graff (Chancellor) on Jun 18, 2010 at 09:46 UTC
by roibrodo (Sexton) on Jun 18, 2010 at 10:03 UTC