in reply to Re^2: Huge data file and looping best practices
in thread Huge data file and looping best practices

And now for the ultimate speedup.

Instead of building an array of 8 million 60-byte bitstrings and performing 32e12 XORs and bitcounts, you build a single 480 MB bitstring by concatenating all the packed data.
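
Something like this -- just a minimal sketch, assuming the records arrive as 480-character strings of '0'/'1' in @bitstrings (the variable names and input format are stand-ins for the OP's actual data):

    use strict; use warnings;

    my @bitstrings;    # assume: the OP's 8e6 records, each a 480-char '0'/'1' string
    my $REC = 60;      # 480 bits = 60 bytes per packed record
    my $N   = @bitstrings;

    ## Concatenate every packed record into one big scalar;
    ## record $i occupies bytes $i*$REC .. $i*$REC + $REC - 1.
    my $big = join '', map { pack 'b480', $_ } @bitstrings;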

Now you can perform the XOR of records 0 through 7,999,998 with records 1 through 7,999,999 in one operation, using substr to select the appropriate chunks of the big string. Then use substr with unpack '%32b*' to count the set bits in each 60-byte chunk of the result.
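
That one pass might look like this ($big, $N and $REC as in the sketch above; again a sketch, not tested against the OP's data):

    ## Shift-1 pass: one giant string XOR covers every adjacent pair at once.
    my $len = ( $N - 1 ) * $REC;
    my $xor = substr( $big, 0, $len ) ^ substr( $big, $REC, $len );

    for my $i ( 0 .. $N - 2 ) {
        ## '%32b*' sums the set bits: the Hamming distance of records $i and $i+1
        my $dist = unpack '%32b*', substr( $xor, $i * $REC, $REC );
    }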

Then you repeat the process with substrs covering records 0 .. 7,999,997 and 2 .. 7,999,999; then shift 3, shift 4, and so on. Rinse and repeat until done.
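
Wrapped up in a loop over all shifts (same caveats as above):

    ## Every unordered pair ($i, $i + $d) is covered exactly once by shift $d.
    for my $d ( 1 .. $N - 1 ) {
        my $len = ( $N - $d ) * $REC;
        my $xor = substr( $big, 0, $len ) ^ substr( $big, $d * $REC, $len );

        for my $i ( 0 .. $N - $d - 1 ) {
            my $dist = unpack '%32b*', substr( $xor, $i * $REC, $REC );
            ## ... record/accumulate $dist for pair ($i, $i + $d) here ...
        }
    }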

Roughly 8e6 big string XORs (down from 32e12) plus the unavoidable 32e12 per-chunk bitcounts, and done.

My best guesstimate is that instead of the OP's code requiring upwards of 10 days on 8 cores, this'll take less than half a day on one core.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."