I would like to know, according to you, is there any faster way (in terms of time) of reading the file (1,000,000 rows x 1,000,000 columns) and making the comparison of the rows....
...it is taking more than 10 minutes to compute (10,000 rows x 3,000 columns) on my 8 GB RAM computer.
I'm sorry, but I really don't think you have any idea of the scale of what you are trying to do.
First off, forget about the time taken to read the file. Perl can read 10,000 lines with 3,000 fields in just a few seconds. What is taking the time is the combinatorial explosion of comparing each of those lines against every other, as sketched below.
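To make the shape of that cost concrete, here is a minimal sketch of the brute-force approach. It assumes tab-separated fields and a hypothetical input file data.tsv, and compare() is just a stand-in for whatever row-to-row test (LCSS or otherwise) is actually being run:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Reading is the cheap part: slurp and split every line.
    open my $fh, '<', 'data.tsv' or die "data.tsv: $!";   # hypothetical input file
    my @rows = map { chomp; [ split /\t/ ] } <$fh>;
    close $fh;

    sub compare { }   # stand-in for the real row-to-row comparison

    # The expensive part: N*(N-1)/2 pairings.
    for my $i ( 0 .. $#rows - 1 ) {
        for my $j ( $i + 1 .. $#rows ) {
            compare( $rows[$i], $rows[$j] );
        }
    }

However fast the I/O at the top is, the nested loop below it dominates as soon as N gets large.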
For 10,000 records, that's just under 50 million record-to-record comparisons (10,000 x 9,999 / 2). If you're doing that in 10 minutes, that's a bit over 80,000 comparisons per second.
For 1 million unique records, that's 499,999,500,000 comparisons. Even ignoring that each of those 1 million records contains 333 times as many values to compare, that would still take 5e11/8e4 = 6.25 million seconds.
And that's over 72 days!
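For what it's worth, that back-of-envelope arithmetic checks out in a couple of lines (the 80,000/sec rate is the rough figure measured above):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $pairs = 1_000_000 * 999_999 / 2;    # 499,999,500,000 unordered pairs
    my $rate  = 8e4;                        # ~80,000 comparisons/sec, as above
    my $secs  = $pairs / $rate;             # ~6.25e6 seconds
    printf "%.2e seconds ~ %.0f days\n", $secs, $secs / 86_400;   # prints ~72 days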
Factor in that the records are 333 times bigger, and that LCSS is an O(n²) algorithm--so each record-to-record comparison costs roughly 333² ≈ 110,000 times as much--and you're looking at many lifetimes to process your data in this brute-force fashion.
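For reference, here is the standard dynamic-programming way of computing an LCS length; the (n+1) x (m+1) table of subproblems is where the O(n*m)--effectively quadratic--cost per pair comes from. This is a minimal sketch, not the OP's actual comparison code:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Classic rolling-row LCS-length DP: n*m cell updates per call.
    sub lcs_length {
        my ( $x, $y ) = @_;                  # array refs of field values
        my @prev = (0) x ( @$y + 1 );
        for my $i ( 1 .. scalar @$x ) {
            my @cur = (0);
            for my $j ( 1 .. scalar @$y ) {
                $cur[$j] = $x->[ $i - 1 ] eq $y->[ $j - 1 ]
                    ? $prev[ $j - 1 ] + 1
                    : ( $prev[$j] >= $cur[ $j - 1 ] ? $prev[$j] : $cur[ $j - 1 ] );
            }
            @prev = @cur;
        }
        return $prev[-1];
    }

    print lcs_length( [qw( a b c d )], [qw( a c d e )] ), "\n";   # prints 3

With 1-million-field records, that's 10^12 cell updates per pair, before you even multiply by the number of pairs.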
Unless you have access to some seriously parallel hardware--millions of dollars' worth; spawning a few Perl threads isn't going to help at this scale--you need to seriously reconsider your approach to solving the underlying problem, because your current approach is never going to work.