in reply to Re^2: Seeking a fast sum_of_ranks_between function
in thread Seeking a fast sum_of_ranks_between function

I gather that you have two subsets, each with about a million data points, and that you say the computation is very slow. How slow is "very slow"? Can you give a time frame?

I am asking because it does not appear to me that it should be very slow (unless I missed a very time-consuming step in the algorithm description). It could be that the module is doing a number of things not really necessary in your specific case, and that rolling your own sub or set of subs might be faster, or that it could be optimized some other way. But is "very slow" a few seconds or many hours? In the latter case (assuming I have understood what needs to be done), I am convinced it could be faster. If it is a few seconds, I would not even try to beat the module in terms of performance. In between, well, it is a toss-up. Please give a better estimate of the time frame.
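To give a rough idea of what I mean by rolling your own: a minimal sketch (my own naming and tie handling, not the module's code) of a sum-of-ranks computation over two samples. The sort of the pooled data is the only O(n log n) step, so even a couple of million points should take seconds rather than hours:

    use strict;
    use warnings;

    # Minimal sketch, not the module's code: sum of the ranks that the
    # elements of sample X occupy in the pooled data X + Y, with tied
    # values assigned the average of the ranks they span.
    sub sum_of_ranks {
        my ( $x, $y ) = @_;    # array refs of numbers

        # Tag each value with its origin (1 = X, 0 = Y) and sort the pool.
        my @pool = sort { $a->[0] <=> $b->[0] }
                   ( map { [ $_, 1 ] } @$x ),
                   ( map { [ $_, 0 ] } @$y );

        my ( $sum, $i ) = ( 0, 0 );
        while ( $i < @pool ) {

            # Extend $j over the run of values tied with $pool[$i].
            my $j = $i;
            $j++ while $j < $#pool && $pool[ $j + 1 ][0] == $pool[$i][0];

            # The 1-based ranks $i+1 .. $j+1 average to ($i + $j + 2) / 2.
            my $avg_rank = ( $i + $j + 2 ) / 2;

            # Count how many members of the tied run came from X.
            my $from_x = grep { $_->[1] } @pool[ $i .. $j ];

            $sum += $avg_rank * $from_x;
            $i = $j + 1;
        }
        return $sum;
    }

    # X = (1, 3, 5) takes ranks 1, 3 and 5 in the pool, so this prints 9.
    print sum_of_ranks( [ 1, 3, 5 ], [ 2, 4 ] ), "\n";

That is only a sketch of the core computation; whether it actually beats the module depends on what else the module is doing for you.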

Of course, any further action would require a representative dataset (perhaps significantly smaller, but sufficient for benchmarking).

@BrowserUK: thanks for asking, I thought about more or less the same things, but somehow did not dare to ask.


Replies are listed 'Best First'.
Re^4: Seeking a fast sum_of_ranks_between function
by msh210 (Monk) on Oct 14, 2015 at 14:00 UTC

    Hours. I suspect that, as you suggest, "the module is doing a number of things not really necessary in [my] specific case". Thanks for looking into this.