This works and is very simple to understand. As for efficiency: with the small numbers I've tried it on (<1e6 increments), creating the tree takes a few minutes; traversing it, a few seconds.
The basic idea is that, as you will have to store your data externally somewhere, why not use the filesystem? Take the up-to-5-char strings (I've assumed that 2^40 is based on 256^5) and split them into characters. Convert the characters to their 8-bit ASCII values to avoid problems with restricted characters in pathnames, and then create a file at the resultant path to hold the counts.
Eg. string 12345 becomes path yourbase/49/50/51/52/53.count.
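A minimal sketch of just that key-to-path mapping (keyToPath() is a name I've made up for illustration; the main script below builds the path inline as it goes):

#! perl -slw
use strict;

my $base = 'yourbase/';

## One directory level per character (by ASCII value); the final
## character's value becomes the filename, with a '.count' extension.
sub keyToPath {
    my @codes = unpack 'C*', $_[ 0 ];
    my $leaf  = pop @codes;
    return $base . join( '', map "$_/", @codes ) . "$leaf.count";
}

print keyToPath( '12345' );    ## yourbase/49/50/51/52/53.count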
This file is opened, read, incremented and rewritten (or created and set to 1 the first time).
Obtaining the counts for all paths below a given prefix then just becomes a recursive directory walk (though I've implemented it iteratively below), opening the files and reading the counts.
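For comparison, a genuinely recursive walk using the core File::Find module might look something like this (an untested sketch; note that unlike the glob approach in the main listing, it wouldn't pick up the prefix's own '.count' file):

use strict;
use File::Find;

my $base = 'c:/test/700432/';

sub traversePrefixRecursive {
    my( $path, $code ) = @_;
    my $prefix = $base . join '/', unpack 'C*', $path;
    return unless -d $prefix;
    find sub {
        return unless /\.count$/;
        ## find() chdirs into each directory, so $_ is just the filename
        open my $fh, '<', $_ or die "$! : $File::Find::name";
        chomp( my $count = <$fh> );
        close $fh;
        ## Rebuild the key from the base-relative path components
        ( my $rel = $File::Find::name ) =~ s[^\Q$base\E][];
        $rel =~ s[\.count$][];
        $code->( pack( 'C*', split( '/', $rel ) ), $count );
    }, $prefix;
}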
It takes options -N=nnn for the number of random increments to perform, and -PRE=ccc for the prefix to traverse.
Set -N=1 to just traverse with different prefixes without creating rafts of new entries.
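For example, assuming the script is saved as 700432.pl (a name I've picked arbitrarily):

perl 700432.pl -N=10000 -PRE=12

performs 10,000 random increments and then dumps every key and count below the prefix '12'.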
The code:
#! perl -slw
use strict;

## -N : number of random increments to perform (default 1e3)
## -PRE : prefix to traverse (default '12')
our $N ||= 1e3;
our $PRE ||= '12';

my $base = 'c:/test/700432/';

## Increment the count stored at the path derived from the key:
## one directory per character (by ASCII value), with the final
## character's value as the filename plus a '.count' extension.
sub incPath {
    my @path = unpack 'C*', $_[ 0 ];
    my $full = $base;
    ## Build (creating as needed) the intermediate directories
    mkdir $full .= $_ . '/' for @path[ 0 .. $#path - 1 ];
    $full .= $path[ -1 ] . '.count';
    if( -e $full ) {
        ## Read-modify-write the existing count in place
        open my $fh, '+<', $full or die "$! : $full";
        my $count = <$fh>;
        seek $fh, 0, 0;
        print $fh ++$count;
        close $fh;
    }
    else {
        ## First sighting of this key; create the file with a count of 1
        open my $fh, '>', $full or die "$! : $full";
        print $fh 1;
        close $fh;
    }
}

## Walk everything below the given prefix: @dirs acts as a work queue
## that grows as subdirectories are discovered, so no recursion needed.
## The callback receives each reconstructed key and its count.
sub traversePrefix {
    my( $path, $code ) = @_;
    my @path = unpack 'C*', $path;
    my $prefix = $base . join '/', @path;
    return unless -e $prefix;
    my @dirs = $prefix;
    for my $dir ( @dirs ) {
        ## The first glob has no trailing '/', so it also picks up the
        ## prefix's own '.count' file alongside its subdirectory
        for my $file ( glob $dir . '*' ) {
            push( @dirs, $file . '/' ), next if -d $file;
            open my $fh, '<', $file or die "$! : $file";
            chomp( my $count = scalar <$fh> );
            ## Strip base and extension, then rebuild the key from the codes
            ( $file ) = $file =~ m[^$base(.+)\.count$];
            my $key = pack 'C*', split '/', $file;
            $code->( $key, $count );
        }
    }
}

## Random string of the given length, drawn from the given character list
sub rndStr{ join'', @_[ map{ rand @_ } 1 .. shift ] }

## Generate $N random keys (1 to 5 chars) and increment their counts
for ( 1 .. $N ) {
    printf "\r$_\t";
    my $key = rndStr 1+int( rand 5 ), map chr, 0..255;
    incPath( $key );
}

## Traverse starting from $PRE and print out the keys and counts
traversePrefix $PRE, sub {
    print "@_";
};
There are various ways this could be sped up. Primary amongst them would be to accumulate counts in a memory structure until that structure reached some preset size, then update the filesystem, discard the structure and start over.
The biggest problem with that would be determining the current size of the memory structure. Devel::Size will do it, but it carries a fairly hefty time penalty in doing so. (If there was an 'I know my structure is non-self-referential, so don't bother checking' option, it could be speeded up.)
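The test itself is trivial; it's the per-call cost that hurts. A sketch (the %cache hash and the 256MB limit are placeholders of my own):

use Devel::Size qw( total_size );

my %cache;                        ## key => buffered increments
my $LIMIT = 256 * 1024 * 1024;    ## arbitrary ceiling; tune to taste

sub cacheOverLimit {
    ## Expensive: total_size() walks every element of %cache each call,
    ## so you would only want to test every few thousand increments
    return total_size( \%cache ) > $LIMIT;
}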
Another alternative would be to just accumulate a fixed number of increments in memory before flushing to disk. That should have a dramatic benefit on performance without needing too much tuning. Hm. I might have a go at that later.
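If I did, it would probably look something like this (a sketch meant to slot into the script above; bufferedInc() would replace the direct incPath() call, and $FLUSH_AT is an arbitrary figure):

my %pending;               ## key => increments accumulated in memory
my $pendingTotal = 0;
my $FLUSH_AT = 100_000;    ## arbitrary; tune to taste

sub bufferedInc {
    ++$pending{ $_[ 0 ] };
    flushPending() if ++$pendingTotal >= $FLUSH_AT;
}

sub flushPending {
    ## One read-modify-write per distinct key rather than per increment
    while( my( $key, $n ) = each %pending ) {
        my @path = unpack 'C*', $key;
        my $full = $base;
        mkdir $full .= $_ . '/' for @path[ 0 .. $#path - 1 ];
        $full .= $path[ -1 ] . '.count';
        my $count = 0;
        if( -e $full ) {
            open my $fh, '<', $full or die "$! : $full";
            chomp( $count = <$fh> );
            close $fh;
        }
        open my $fh, '>', $full or die "$! : $full";
        print $fh $count + $n;
        close $fh;
    }
    %pending = ();
    $pendingTotal = 0;
}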
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.