in reply to Re: Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays
in thread Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays

Hi,

I didn't understand the following parts of your program:

1.

push @{ $data{ $cols[ 0 ] } }, pack 'C*', @cols[ 1 .. $#cols ];

2.

my $mask = $keyset1 ^ $keyset2;
next unless 1+ index $mask, chr( 0 );
printf "%3d : %3d : ", $key1, $key2;
print join ', ', map{ substr( $mask, $_, 1 ) eq chr(0) ? $_+1 : () } 0 .. length( $mask )-1;

I didn't understand the concept of map and $mask. When I try to print the hash of arrays, it does not give me any numbers. Can you please help with this? Also, how can I use String::LCSS_XS on the hash of arrays (%data) to get the longest common substring between the various rows, apart from the ones having the same id number?

Thanks.

Re^3: Extract the odd and even columns seperately in the hash of arrays or some other data structure apart from arrays
by BrowserUk (Patriarch) on Jan 25, 2010 at 20:26 UTC
    1. push @{ $data{ $cols[ 0 ] } }, pack 'C*', @cols[ 1 .. $#cols ];

      All your data values appear to be small positive integers, so rather than store them as an array, I chose to save some space (a significant consideration given the size of the dataset you mentioned), and gain some considerable performance, by storing them as an array of chars. I.e. as strings.

      This will only work if all your values are in the range 0 .. 255. However, you could also use pack 'S' for values 0 .. 65535 to gain similar benefits. If all your values are very small, then you might pack them even more tightly, though the benefits are less easily achieved.
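      For illustration, a minimal round-trip sketch (the values here are invented):

          my @ints = ( 1, 2, 1, 2, 1, 1, 1, 1 );

          # Each value fits in 0 .. 255, so pack one byte per value.
          my $packed = pack 'C*', @ints;          # an 8-byte string

          # unpack 'C*' recovers the original integers.
          print join ' ', unpack 'C*', $packed;   # 1 2 1 2 1 1 1 1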

      The benefits of the compacted representation include:

      • Space:

        An ordinary array of 1000 integers requires just under 32k on my 64-bit system. A 1000-byte string requires just 1056 bytes.

        That's less than 4% of the space requirement.

      • Speed:

        I can detect if any two corresponding bytes in two 1000-byte strings are equal in 0.000003 seconds.

        Doing the same thing using two 1000-element arrays takes 0.00025 seconds.

        That's just 1.2% of the time. Or over 80 times faster.
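
      If you want to check both claims on your own machine, here is a minimal sketch using the core Benchmark module and (if you have it installed) Devel::Size; the exact figures will differ from system to system:

          #! perl -slw
          use strict;
          use Benchmark qw[ cmpthese ];
          use Devel::Size qw[ total_size ];

          my @a = map int( rand 256 ), 1 .. 1000;
          my @b = map int( rand 256 ), 1 .. 1000;
          my $sa = pack 'C*', @a;
          my $sb = pack 'C*', @b;

          # Space: the packed string is a small fraction of the array's footprint.
          printf "array: %d bytes; string: %d bytes\n",
              total_size( \@a ), total_size( \$sa );

          cmpthese -1, {
              # One XOR plus one index() call scans all 1000 positions at once.
              string => sub {
                  my $hit = ( 1 + index( $sa ^ $sb, chr 0 ) ) ? 1 : 0;
              },
              # The equivalent element-by-element comparison of the two arrays.
              array  => sub {
                  my $hit = 0;
                  for my $i ( 0 .. $#a ) {
                      if( $a[ $i ] == $b[ $i ] ) { $hit = 1; last }
                  }
              },
          };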

    2. my $mask = $keyset1 ^ $keyset2;

      This exclusive-ORs the two arrays of ints being compared together (as strings). This effectively compares each corresponding pair of integers in both arrays (strings) in a single, very fast operation. If the two integers are equal, then the corresponding byte in $mask will be set to chr(0). If they are different, the corresponding byte will be non-zero.

      This allows a very fast check for there being any correspondence between the arrays...
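
      A tiny sketch of that masking step (with made-up values):

          my $keyset1 = pack 'C*', 1, 2, 1, 2;
          my $keyset2 = pack 'C*', 1, 1, 1, 1;

          my $mask = $keyset1 ^ $keyset2;

          # Matching positions XOR to chr(0); differing ones are non-zero.
          print join ' ', map ord( substr $mask, $_, 1 ), 0 .. 3;   # 0 3 0 3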

    3. next unless 1+ index $mask, chr( 0 );

      This uses index to search $mask for any zero byte. If none is found, then it skips to the next pair of arrays.
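
      The point being that index returns -1 when the target is not found, so adding 1 yields 0 (false) exactly when there are no zero bytes:

          print 1 + index( "a\0c", chr 0 ) ? 'match' : 'no match';   # match
          print 1 + index( "abc",  chr 0 ) ? 'match' : 'no match';   # no match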

      If one (or more) zero bytes are found...

    4. printf "%3d : %3d : ", $key1, $key2;

      Print the ids of the two datasets being compared and ...

    5. print join ', ', map{ substr( $mask, $_, 1 ) eq chr(0)  ? $_+1 : () } 0 .. length( $mask )-1;

      Print a list of all the column (array index+1) positions where the corresponding integers are equal.

      I.e. for each character position in the mask, if the byte equals chr(0), then the corresponding integers in the datasets are a match. As the mask indexes are zero-based, but the column indexes are 1-based, add 1 to each matching index.

      Join them with commas and print them out.
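
      Putting that together, a minimal sketch of the column-extraction step (the mask here is made up):

          my $mask = "\0a\0\0b";   # suppose positions 0, 2 and 3 matched

          # Collect the 1-based positions of the zero bytes.
          print join ', ',
              map{ substr( $mask, $_, 1 ) eq chr(0) ? $_+1 : () }
              0 .. length( $mask )-1;   # prints: 1, 3, 4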

    I hope that will clarify both the logic, and the reasoning behind the code.

    Also, how can I use String::LCSS_XS on the hash of arrays (%data) to get the longest common substring between the various rows, apart from the ones having the same id number.

    If all your integers are in the range 0 .. 255, and can therefore be packed into strings, then it is very fortuitous, as String::LCSS_XS only operates upon strings :)

    It also makes modifying my posted code to use it very simple:

    #! perl -slw
    use strict;
    use String::LCSS_XS qw[ lcss ];

    my %data;
    while( <DATA> ) {
        chomp;
        my @cols = split ' ';
        push @{ $data{ $cols[ 0 ] } }, pack 'C*', @cols[ 1 .. $#cols ];
    }

    my @keys = sort{ $a <=> $b } keys %data;

    for my $i ( 0 .. $#keys ) {
        my $key1 = $keys[ $i ];
        for my $keyset1 ( @{ $data{ $key1 } } ) {
            for my $key2 ( @keys[ $i+1 .. $#keys ] ) {
                for my $keyset2 ( @{ $data{ $key2 } } ) {
                    my( $s, $i1, $i2 ) = lcss( $keyset1, $keyset2 );
                    printf "%4d(%4d) - %4d(%4d) : %s\n",
                        $key1, $i1, $key2, $i2,
                        join ', ', unpack 'C*', $s;
                }
            }
        }
    }

    __DATA__
    12 1 2 1 2 1 1 1 1
    12 2 1 2 2 1 1 1 1
    15 2 1 2 2 1 1 1 1
    15 2 1 2 1 1 2 1 1
    16 2 1 2 1 1 2 1 1
    16 2 1 2 2 1 1 1 1
    19 2 1 2 1 1 2 1 1
    19 1 2 1 2 1 1 1 1
    116 1 2 2 2 1 1 1 1
    116 2 1 2 1 1 2 1 1

    Which produces this from your sample input:

    c:\test>819256
    ## Note that the offsets are zero-based excluding the ID field
    #id1 offset1   id2 offset2   values
      12(   3) -   15(   3) : 2, 1, 1, 1, 1
      12(   1) -   15(   0) : 2, 1, 2, 1, 1
      12(   1) -   16(   0) : 2, 1, 2, 1, 1
      12(   3) -   16(   3) : 2, 1, 1, 1, 1
      12(   1) -   19(   0) : 2, 1, 2, 1, 1
      12(   0) -   19(   0) : 1, 2, 1, 2, 1, 1, 1, 1
      12(   3) -  116(   3) : 2, 1, 1, 1, 1
      12(   1) -  116(   0) : 2, 1, 2, 1, 1
      12(   0) -   15(   0) : 2, 1, 2, 2, 1, 1, 1, 1
      12(   0) -   15(   0) : 2, 1, 2
      12(   0) -   16(   0) : 2, 1, 2
      12(   0) -   16(   0) : 2, 1, 2, 2, 1, 1, 1, 1
      12(   0) -   19(   0) : 2, 1, 2
      12(   3) -   19(   3) : 2, 1, 1, 1, 1
      12(   2) -  116(   2) : 2, 2, 1, 1, 1, 1
      12(   0) -  116(   0) : 2, 1, 2
      15(   0) -   16(   0) : 2, 1, 2
      15(   0) -   16(   0) : 2, 1, 2, 2, 1, 1, 1, 1
      15(   0) -   19(   0) : 2, 1, 2
      15(   3) -   19(   3) : 2, 1, 1, 1, 1
      15(   2) -  116(   2) : 2, 2, 1, 1, 1, 1
      15(   0) -  116(   0) : 2, 1, 2
      15(   0) -   16(   0) : 2, 1, 2, 1, 1, 2, 1, 1
      15(   0) -   16(   0) : 2, 1, 2
      15(   0) -   19(   0) : 2, 1, 2, 1, 1, 2, 1, 1
      15(   0) -   19(   1) : 2, 1, 2, 1, 1
      15(   2) -  116(   3) : 2, 1, 1
      15(   0) -  116(   0) : 2, 1, 2, 1, 1, 2, 1, 1
      16(   0) -   19(   0) : 2, 1, 2, 1, 1, 2, 1, 1
      16(   0) -   19(   1) : 2, 1, 2, 1, 1
      16(   2) -  116(   3) : 2, 1, 1
      16(   0) -  116(   0) : 2, 1, 2, 1, 1, 2, 1, 1
      16(   0) -   19(   0) : 2, 1, 2
      16(   3) -   19(   3) : 2, 1, 1, 1, 1
      16(   2) -  116(   2) : 2, 2, 1, 1, 1, 1
      16(   0) -  116(   0) : 2, 1, 2
      19(   2) -  116(   3) : 2, 1, 1
      19(   0) -  116(   0) : 2, 1, 2, 1, 1, 2, 1, 1
      19(   3) -  116(   3) : 2, 1, 1, 1, 1
      19(   1) -  116(   0) : 2, 1, 2, 1, 1
    HTH.

    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.

      Hey, thanks for the explanation. I have some more questions, as I am dealing with huge datasets.

      1. I would like to know whether, in your opinion, there is any faster way (in terms of time) of reading the file (1,000,000 rows x 1,000,000 columns) and comparing the rows. Also, the matrix we are reading contains only the values 1 or 2, apart from the id values. So, if I convert the matrix to a set of binary values, will that help increase the speed of the program?

      2. If I use some other data structure, like a hash of hashes, will that improve the computing and reading time of the file?

      3. A summary of my code: I read the data line by line (using a while loop), transpose it, and write it to another file, because I think comparing the file row-wise will be faster than column-wise. I then open the transposed file, use your code to compute/compare the rows, and write the results to another file. I am looking for the fastest way of doing all these operations, as it is currently taking more than 10 minutes to process 10,000 rows x 3,000 columns on my 8 GB RAM computer.

      4. Is there any way to vectorize the code, so that I compare or take the difference between two rows in one operation rather than element by element, like the apply command in the R language?

      5. If I do the comparison on the first 100 elements in the row and then the next hundred (e.g. 1-100, 2-101, etc.), will that help?

      Thanks a lot for the help.

        I would like to know whether, in your opinion, there is any faster way (in terms of time) of reading the file (1,000,000 rows x 1,000,000 columns) and comparing the rows.

        ...

        ...it is currently taking more than 10 minutes to process 10,000 rows x 3,000 columns on my 8 GB RAM computer.

        I'm sorry, but I really don't think you have any idea of the scale of what you are trying to do.

        First off, forget about the time taken to read the file. Perl can read 10,000 lines with 3,000 fields each in just a few seconds. What is taking the time is the combinatorial explosion of comparing each of those lines against every other.

        For 10,000 records, that's just under 50 million record-to-record comparisons. If you're doing that in 10 minutes, that's a bit over 80,000 per second.

        For 1 million unique records, that's 499,999,500,000 comparisons. Even ignoring that each of those 1 million records contains 333 times as many values to compare, that would still take 5e11/8e4 seconds.

        And that's over 72 days!
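
        For reference, the arithmetic behind those estimates (the ~80,000/s rate is derived from your own 10-minute figure):

            # Pairwise comparisons: n * ( n - 1 ) / 2
            my $pairs_10k = 10_000 * 9_999 / 2;      # 49,995,000  (~50 million)
            my $rate      = $pairs_10k / 600;        # ~83,325 comparisons/second
            my $pairs_1m  = 1e6 * ( 1e6 - 1 ) / 2;   # 499,999,500,000
            my $seconds   = $pairs_1m / 8e4;         # 6.25e6 seconds
            my $days      = $seconds / 86_400;       # ~72 days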

        Factor in that the records are 333 times bigger, and that LCSS is an O(n²) algorithm, and you're looking at many lifetimes to process your data in this brute-force fashion.

        Unless you have access to some seriously parallel hardware--millions of dollars' worth; spawning a few perl threads isn't going to help at this scale--you need to seriously reconsider your approach to solving the underlying problem, because your current approach is never going to work.



        How many unique IDs are there in those 1 million records?

