in reply to Re: Mysterious slow down with large data set
in thread Mysterious slow down with large data set
Thanks for pointing out the unnecessary work.
Could you say more about the idea of shrinking the list with each word? The problem is that I need the total similarity of each word to every other word, so I don't see how to reduce the number of comparisons per step without storing earlier results, and the memory demands for that are huge, even if I delete items from memory once they are retrieved.
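One way to halve the comparisons without storing any per-pair results: since similarity is symmetric (sim(a,b) == sim(b,a)), each pair only needs to be computed once, and both words' running totals can be credited in the same pass. Memory stays at one number per word. A minimal sketch, with a toy `similarity` sub standing in for whatever metric the real code uses (an assumption, not from the thread):

```perl
use strict;
use warnings;

# Toy stand-in for the real similarity metric (assumption).
sub similarity {
    my ($a, $b) = @_;
    return length($a) == length($b) ? 1 : 0;
}

my @words = qw(cat dog bird fish);
my %total;    # running total per word -- O(n) memory, no pair matrix

# Compare each unordered pair once; credit both words in the
# same pass. This halves the comparisons versus the full n x n
# loop, with nothing stored beyond the totals themselves.
for my $i ( 0 .. $#words ) {
    for my $j ( $i + 1 .. $#words ) {
        my $s = similarity( $words[$i], $words[$j] );
        $total{ $words[$i] } += $s;
        $total{ $words[$j] } += $s;
    }
}

print "$_: $total{$_}\n" for @words;
```

This only helps with the constant factor, of course; the work is still quadratic in the number of words.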
But if you see another solution, please let me know. Thank you for your help!
Replies are listed 'Best First'.
Re^3: Mysterious slow down with large data set
by wwe (Friar) on Feb 27, 2012 at 14:29 UTC