in reply to Table shuffling challenge
UPDATE: Try my original approach, but just shuffle at the very last step!
my %data;
<DATA>;    # discard the header line
while (<DATA>) {
    my ($row, @values) = split;
    $data{$row} = sum @values;
}
say $data{$_} for shuffle keys %data;
Maybe try something like the following. I start by making a shuffled array of row numbers, one for each data row you have. Next I read the file and build a hash: the keys are shifted off the shuffled array and the values are the row sums. Then iterating over the hash sorted by key lets you do whatever you want with the permuted row sums.
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';
use List::Util qw(sum shuffle);

my %data;
my @row_nums = shuffle 1 .. 5;

<DATA>;    # discard the header line
while (<DATA>) {
    my ($old_row, @values) = split;
    my $new_row = shift @row_nums;
    $data{$new_row} = sum @values;
}

say $data{$_} for sort { $a <=> $b } keys %data;

__DATA__
Head1 Head2 Head3 Head4 Head5 Head6 Head7 Head8 Head9 Head10 Head11
1 0 1 1 0 0 0 0 0 0 1
2 0 0 0 0 0 0 0 1 0 0
3 1 0 0 0 1 0 0 0 1 0
4 0 1 1 1 1 0 0 0 0 1
5 0 0 0 0 0 0 0 1 0 0
Quick question: Are you really doing this for 1 million different tables, or are you doing 1 million permutations of the same table? If the latter, just read in and sum the records once in the original order. This gives you a 110,000-element array of row sums that you can shuffle a million times.
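That second approach could be sketched like this. The file format and whitespace-separated layout match the script above; the tiny `__DATA__` sample and the small `$n_perms` are just stand-ins for your 110,000-row table and 1 million permutations.

    #!/usr/bin/env perl
    # Sketch: sum each row ONCE, then reshuffle the array of sums
    # as many times as you need -- no re-reading or re-summing.
    use strict;
    use warnings;
    use List::Util qw(sum shuffle);

    my @row_sums;
    <DATA>;                           # discard the header line
    while (<DATA>) {
        my (undef, @values) = split;  # drop the row label
        push @row_sums, sum @values;
    }

    my $n_perms = 3;                  # 1_000_000 in your real run
    for my $i (1 .. $n_perms) {
        my @permuted = shuffle @row_sums;
        # ...do whatever you need with this permutation...
        print "perm $i: @permuted\n";
    }

    __DATA__
    Head1 Head2 Head3
    1 0 1
    2 1 1
    3 0 0

Each pass through the loop is just an in-memory shuffle, so the file I/O and summing cost is paid exactly once.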
Replies are listed 'Best First'.
Re^2: Table shuffling challenge
by glow_gene (Initiate) on Aug 23, 2013 at 20:37 UTC
by RichardK (Parson) on Aug 24, 2013 at 13:04 UTC
by glow_gene (Initiate) on Aug 28, 2013 at 01:52 UTC
by poj (Abbot) on Aug 23, 2013 at 22:11 UTC |