in reply to Re^3: Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays
in thread Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays

Hey thanks for the explanation. I have some more questions as I am dealing with huge datasets.

1. Is there, in your view, any faster way (in terms of time) of reading the file (1,000,000 rows X 1,000,000 columns) and comparing the rows? Also, apart from the id values, the matrix we are reading contains only the values 1 or 2. If I convert the matrix to a set of binary values, will that help increase the speed of the program?

2. If I use some other data structure, like a hash of hashes, will that improve the reading and computing time for the file?

3. The summary of my code: I read the data line by line (in a while loop), transpose it, and write it to another file, because I think comparing the file row-wise will be faster than comparing it column-wise. I then open the transposed file, use your code to compare the rows, and write the results to another file. I am looking for the fastest way of doing all these operations, as it is currently taking more than 10 minutes to process 10,000 rows X 3,000 columns on my 8 GB RAM computer.

4. Is there any way to vectorize the code, so that the difference between two rows is taken in one operation on the whole rows rather than element by element, like the apply command in the R language?

5. If I do the comparison on the first 100 elements of the row and then the next hundred, e.g. elements 1-100, then 2-101, and so on, will that help?

Thanks a lot for the help.


Re^5: Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays
by BrowserUk (Patriarch) on Jan 26, 2010 at 23:54 UTC
    Is there, in your view, any faster way (in terms of time) of reading the file (1,000,000 rows X 1,000,000 columns) and comparing the rows?

    ...

    ...taking more than 10 minutes to process 10,000 rows X 3,000 columns on my 8 GB RAM computer.

    I'm sorry, but I really don't think you have any idea of the scale of what you are trying to do.

    First off, forget about the time taken to read the file. Perl can read 10,000 lines with 3,000 fields in just a few seconds. What is taking the time is the combinatorial explosion of comparing each of those lines against every other.
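
    To make that concrete, here is a minimal sketch of where the time goes; the file name and the compare() routine are placeholders rather than your actual code:

    use strict;
    use warnings;

    # Reading the file is linear in its size: Perl gets through
    # 10,000 lines of 3,000 fields in a few seconds.
    my @rows;
    open my $fh, '<', 'matrix.tab' or die "open: $!";    # placeholder file name
    while (<$fh>) {
        chomp;
        push @rows, [ split /\t/ ];
    }
    close $fh;

    # Comparing every row against every other row is quadratic:
    # for N rows, the inner call runs N*(N-1)/2 times.
    for my $i ( 0 .. $#rows - 1 ) {
        for my $j ( $i + 1 .. $#rows ) {
            compare( $rows[$i], $rows[$j] );    # stand-in for the real comparison
        }
    }

    sub compare {
        my ( $row_a, $row_b ) = @_;
        # ... whatever per-pair work (LCSS, difference counting, ...) goes here
        return;
    }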

    For 10,000 records, that's n*(n-1)/2 = 49,995,000, or just under 50 million record-to-record comparisons. If you're doing that in 10 minutes, that's a bit over 80,000 comparisons per second.

    For 1 million unique records, that's 499,999,500,000 comparisons. Even ignoring that each of those 1 million records contains 333 times as many values to compare, that's still 5e11/8e4, or roughly 6.25 million seconds.

    And that's over 72 days!

    Factor in that the records are 333 times bigger and the pairwise work alone comes to around 65 years; add that LCSS is an O(n²) algorithm in the record length, and you're looking at many lifetimes to process your data in this brute-force fashion.

    Unless you have access to some seriously parallel hardware (millions of dollars' worth; spawning a few Perl threads isn't going to help at this scale), you need to seriously reconsider your approach to solving the underlying problem, because your current approach is never going to work.


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Ok, I got your point. Thanks a lot for answering my questions :)

        You're welcome :) But you shouldn't just give up.

        Just because your current methods are intractable, it doesn't mean that there isn't a viable approach to solving your underlying quest. It just means that you should look at different approaches to that problem.

        For example, the way you are currently storing your data, as tab-delimited streams of 1s and 2s, is hugely wasteful of both storage and processing time. Your described 1e6 x 1e6 dataset file must be something close to 2 TB: one digit plus one tab per value is 2 bytes, times 1e12 values, or 2e12 bytes of data.

        When you realise that the 1s and 2s can be represented by 0s and 1s, and can therefore be stored binary encoded at one bit per value, the size of your data file could be reduced to something like 116 GB. That's the same information, but only about 6% of the volume of data to read, store and manipulate.
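
        As a rough sketch of what that could look like in Perl (the only assumptions are the 1-to-0 / 2-to-1 mapping and that rows being compared are the same length):

        use strict;
        use warnings;

        # Pack one row of 1s and 2s into a bit string: 1 -> 0, 2 -> 1.
        # A row of 1,000,000 values shrinks from ~2,000,000 bytes of
        # tab-delimited text to 125,000 bytes.
        sub pack_row {
            my @values = @_;
            my $bits   = '';
            vec( $bits, $_, 1 ) = ( $values[$_] == 2 ? 1 : 0 ) for 0 .. $#values;
            return $bits;
        }

        # The number of positions at which two equal-length packed rows differ
        # is then a single XOR plus a bit count, not an element-by-element loop.
        sub count_differences {
            my ( $x, $y ) = @_;
            return unpack '%32b*', ( $x ^ $y );
        }

        That also goes some way towards the "vectorised" whole-row comparison you asked about in point 4.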


        Or if, as your other thread suggests, you're looking for positions where there is a minimum overlap of ten datapoints, then there are only 2**10 = 1024 possible search strings. So, in a single pass, you could index all your records by the 1024 possible 10-value substrings they contain. That might take half a day or so.
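
        A sketch of the sort of index meant here, assuming the records have already been reduced to strings of 0s and 1s as above (the hash-of-id-to-string layout is just for illustration):

        use strict;
        use warnings;

        # Index every record by the 10-character windows it contains.
        # With two possible values there are at most 2**10 = 1024 distinct keys.
        sub build_index {
            my ($records) = @_;    # hashref: record id => string of 0s and 1s
            my %index;
            for my $id ( keys %$records ) {
                my $string = $records->{$id};
                my %seen;
                for my $pos ( 0 .. length($string) - 10 ) {
                    my $key = substr $string, $pos, 10;
                    push @{ $index{$key} }, $id unless $seen{$key}++;
                }
            }
            return \%index;
        }

        # For one record, only the records sharing at least one of its
        # 10-value windows are candidates for the full comparison.
        sub candidates_for {
            my ( $string, $index ) = @_;
            my %candidates;
            for my $pos ( 0 .. length($string) - 10 ) {
                $candidates{$_}++ for @{ $index->{ substr $string, $pos, 10 } || [] };
            }
            return keys %candidates;
        }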

        Now, with the index, you can limit the number of records you need to search for each record. Assuming a random distribution, that means you've reduced your problem by three orders of magnitude. Suddenly, 65 years becomes more like 24 days. Which is a whole heap more tractable!


        Of course, there are no guarantees at this stage that such a reduction is achievable, since we are still in the dark as to the reality of your actual problem. And unless you describe it, you will not be able to gain the benefit of the many clever minds here considering what other methods might be fruitful.


        Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
        "Science is about questioning the status quo. Questioning authority".
        In the absence of evidence, opinion is indistinguishable from prejudice.
Re^5: Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays
by BrowserUk (Patriarch) on Jan 26, 2010 at 22:23 UTC

    How many unique IDs are there in those 1 million records?


    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      Yeah, there can be 1 million unique ids, or there can be 500,000 ids where each id is repeated twice. In some cases I need to match rows with the same id, and in other cases I have to match the rows having one id against the rows having another id, i.e. the first row with id 1 against the first and second rows with id 2; similarly, the second row with id 1 against the first and second rows with id 2. Thanks.