Thanks for pointing out the unnecessary work.
Can I ask you about the idea of decreasing the size of the list with each word? The problem is that I need the total similarity of each word to every other word. So I don't think I can decrease the number of comparisons per step without storing the preceding results, and the memory demands for that are huge, even if I delete items from memory once they are retrieved.
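To make the structure concrete, here is a stripped-down sketch of the computation (just a sketch; similarity() stands in for my actual comparison function, and @words for the word list):

    # Each word's total requires a pass over every other word,
    # so the work is O(N^2) but the storage is only the totals hash.
    my %total;
    for my $i (0 .. $#words) {
        my $sum = 0;
        for my $j (0 .. $#words) {
            next if $i == $j;
            $sum += similarity( $words[$i], $words[$j] );
        }
        $total{ $words[$i] } = $sum;
    }

With the full double loop, nothing has to be kept around except the running totals; the question is whether the inner loop can be shortened without holding earlier pairwise results in memory.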
If you do see another solution, please let me know. Thank you for your help!