I think you are headed towards a commercial relational database, particularly given your point that many nucleotides may point to the same results (they are highly dependent).
Your dataset is huge, but relational databases are good at expressing exactly that kind of relationship: that item "X" belongs to both "A" and "B".
As a simple primer and a "learn by doing with MySQL" path, try Learning SQL by Alan Beaulieu. It is introductory material, but it will give you the bare basics of how relational tables interact, and any large commercial database will use SQL for its queries.
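To make the "X belongs to both A and B" idea concrete, here is a minimal MySQL sketch of a many-to-many relationship using a junction table. The table and column names (sequence, result, sequence_result) are hypothetical placeholders, not anything from your actual data:

    -- Hypothetical schema: many sequences can share one result,
    -- and one sequence can map to many results (many-to-many).
    CREATE TABLE sequence (
        seq_id BIGINT PRIMARY KEY,
        bases  TEXT NOT NULL
    );

    CREATE TABLE result (
        result_id BIGINT PRIMARY KEY,
        payload   TEXT NOT NULL
    );

    -- Junction table: each row records "this sequence belongs
    -- to this result", so shared results are stored only once.
    CREATE TABLE sequence_result (
        seq_id    BIGINT NOT NULL,
        result_id BIGINT NOT NULL,
        PRIMARY KEY (seq_id, result_id),
        FOREIGN KEY (seq_id)    REFERENCES sequence (seq_id),
        FOREIGN KEY (result_id) REFERENCES result (result_id)
    );

    -- "Which results does sequence 42 belong to?"
    SELECT r.*
    FROM result r
    JOIN sequence_result sr ON sr.result_id = r.result_id
    WHERE sr.seq_id = 42;

The point of the junction table is that the highly dependent data is deduplicated: a result shared by a million sequences is stored once, with a million small rows pointing at it.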
From what I've read above, your current representation of the dataset just isn't going to work, because no realistic machine, or even network of machines, can implement it. You are going to need multiple "serious machines" and a lot more thought about how the data is organized, what you actually need to process, and which algorithms will process it.
From your original post, I see 10GB of actual data; other estimates I saw above are vastly greater. One approach would be to design for 10x what you have now (100GB) and get that working. Jumping 2, 3, or more orders of magnitude past the data you have now is unlikely to succeed; in my experience, 100x is too large a technology leap for "one shot". Do 10x, learn from it, then do another 10x.