A database may not be the best solution here -- from reading the other posts, it sounds like you're more interested in 'clumping' the data points together, creating 'neighborhoods' of 'nearest neighbors'. My Systems Design professor Ed Jernigan did research along those lines.
Perhaps a first cut would be some sort of encoding of each data point, then a 'clumping' based on that, with further analysis on the smaller 'clumps'.
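Just to make the 'clumping' idea concrete, here's a minimal sketch -- my own illustration, not anything from the original thread -- assuming numeric 2-D points and a simple grid-based bucketing as the "encoding", so that further analysis only has to look inside each small clump rather than at the whole file:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical sketch: bucket 2-D points into coarse grid cells,
    # then do the expensive per-point work only within each cell
    # (a fuller version would also check neighbouring cells).
    my $cell = 10;    # grid cell size -- tune to the data
    my %clump;        # "x:y" cell key => list of points

    while ( my $line = <DATA> ) {
        chomp $line;
        my ( $x, $y ) = split /\s+/, $line;
        my $key = join ':', int( $x / $cell ), int( $y / $cell );
        push @{ $clump{$key} }, [ $x, $y ];
    }

    # Further analysis then runs per clump, which is far smaller
    # than the whole data set.
    for my $key ( sort keys %clump ) {
        printf "clump %s has %d points\n", $key, scalar @{ $clump{$key} };
    }

    __DATA__
    1.2 3.4
    2.0 3.9
    55.1 60.2
    54.8 61.0

The cell size and the grid encoding are just placeholders; a real pass would pick an encoding suited to whatever "nearness" means for the actual data.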