in reply to Re^3: Huge data file and looping best practices
in thread Huge data file and looping best practices
The reduction was easy in Stata: sorting all the data by the characteristic columns (as if they were one big binary number) and then eliminating duplicates got it down to 400,000 lines of data.
Now, I've tried adapting your code and can't seem to get it to work. The XOR appears to be working, but somewhere between packing and unpacking something goes wrong.
Here is a simplification I've written with two lines of data and 8 characteristics:
```perl
#!/usr/bin/perl
use strict;

my ( @patNos, @data );

my $line1 = "44444,1,1,0,0,0,1,1,1";
my @bits1 = split ',', $line1;
$patNos[0] = shift @bits1;
$data[0]   = pack 'b8', @bits1;

my $line2 = "55555,0,1,1,0,0,1,0,1";
my @bits2 = split ',', $line2;
$patNos[1] = shift @bits2;
$data[1]   = pack 'b8', @bits2;

print "$patNos[0] - @bits1\n";
print "$patNos[1] - @bits2\n";

$line1 = unpack 'b8', $data[0];
$line2 = unpack 'b8', $data[1];

my $variance = unpack '%32b*', ( $data[0] ^ $data[1] );

print "\nline 1: $line1\n";
print "line 2: $line2\n";
print "\nvariance: $variance\n";
```

Which returns:

```
44444 - 1 1 0 0 0 1 1 1
55555 - 0 1 1 0 0 1 0 1

line 1: 10000000
line 2: 00000000

variance: 1
```

I would expect $line1 and $line2 to come back containing the original '0' and '1' characters, but they are all '0's except for the lone leading '1' in $line1. I've been reading about pack and unpack all day, but can't figure out what I'm doing wrong. I also noticed that if I use pack 's' instead, the variance is calculated correctly, but I would expect that to stop working once there are more than 16 characteristics.
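For comparison, I also tried a variant that joins the bits into a single string before packing, on a guess that the 'b' template wants one bit-string argument rather than a list. This version prints what I expected, though I'd appreciate confirmation that it's actually the right approach:

```perl
#!/usr/bin/perl
use strict;

# Same two records as above, but the bits are joined into one
# string ("11000111") before being handed to pack.
my @bits1 = split ',', "1,1,0,0,0,1,1,1";
my @bits2 = split ',', "0,1,1,0,0,1,0,1";

my $packed1 = pack 'b8', join '', @bits1;
my $packed2 = pack 'b8', join '', @bits2;

# Round-tripping through unpack now returns the original bit strings.
print "line 1: ", unpack( 'b8', $packed1 ), "\n";   # 11000111
print "line 2: ", unpack( 'b8', $packed2 ), "\n";   # 01100101

# Count of differing bits between the two records.
my $variance = unpack '%32b*', ( $packed1 ^ $packed2 );
print "variance: $variance\n";                      # 3
```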
I love the idea of making one huge bitstring and then using substr for the comparisons. What is the best way to concatenate the bitstrings? I'm assuming substr works on a packed bitstring the same way it would on a regular string?
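My rough guess at the concatenate-then-substr approach (assuming each 8-characteristic record packs to exactly one byte) would be something like the following. Is extracting fixed-width slices like this the right idea?

```perl
#!/usr/bin/perl
use strict;

# With 8 characteristics, each packed record is exactly 1 byte wide.
my $width  = 1;     # bytes per record
my $bigstr = '';

# Append each packed record onto one big string.
$bigstr .= pack 'b8', '11000111';   # patient 44444
$bigstr .= pack 'b8', '01100101';   # patient 55555

# Pull individual records back out by offset.
my $rec0 = substr $bigstr, 0 * $width, $width;
my $rec1 = substr $bigstr, 1 * $width, $width;

# XOR the slices and count the differing bits.
my $variance = unpack '%32b*', ( $rec0 ^ $rec1 );
print "variance: $variance\n";      # 3
```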
Also, a few questions:
- What is the significance of the 32 in `unpack '%32b*', ( $data[ $first ] ^ $data[ $second ] );`? If I'm using a 64-bit machine, should I change it to 64? (I'm writing this on a 32-bit machine, but it will run on a 64-bit one.)
- Why `use 5.010;`?
- What is the advantage of setting the length of the @patNos and @data arrays at the start?
- Is `print "\r$.\t" unless $. % 1000;` just printing a status update every 1000 lines?
- What is `say "\n", time;` doing?
THANK YOU!!!
Re^5: Huge data file and looping best practices
by BrowserUk (Patriarch) on Apr 28, 2009 at 19:31 UTC