in reply to Bloom::Filter Usage

if you're running updates in batches, don't forget the quick command-line stuff that might be all you need. here's a demo on 30 million random numbers:

perl -le 'for(1..30_000_000){$x=int(rand(30_000_000));print $x;}' > /tmp/randnums

time sort -n /tmp/randnums > /tmp/randnumssorted
real    2m0.819s
user    1m52.631s
sys     0m2.798s
# used about 200m memory

time uniq -c /tmp/randnumssorted > /tmp/randuniq
real    0m11.225s
user    0m8.520s
sys     0m1.019s

time sort -rn /tmp/randuniq > /tmp/randuniqsort
real    1m0.062s
user    0m41.569s
sys     0m3.125s

head /tmp/randuniqsort
     10 7197909
     10 6080002
     10 2718836
     10 21596579
      9 8257184
      9 8116236
      9 7721800
      9 7706211
      9 7657721
      9 7490738
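the temp files above are only there so each step gets its own timing; the same job runs as one pipeline with standard coreutils and no intermediate files:

sort -n /tmp/randnums | uniq -c | sort -rn | head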

pull out your account numbers, then sort/uniq to find the duplicates. the whole thing takes about 3 minutes and roughly 200MB of memory.
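a sketch of what that looks like on real data, assuming a comma-separated file called accounts.csv with the account number in field 1 (both are made up for illustration):

# print each account number that appears more than once
cut -d, -f1 accounts.csv | sort | uniq -d

swap uniq -d for uniq -c | sort -rn if you want the counts, like the demo above.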

there's nothing wrong with going over the file twice if it makes the processing simpler.
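for instance, a quick two-pass sketch in perl: pass one counts the account numbers, pass two prints the full records that repeat. again, accounts.csv and the field layout are just assumptions for the example:

#!/usr/bin/perl
use strict;
use warnings;

# pass 1: count how often each account number shows up
my %count;
open my $in, '<', 'accounts.csv' or die "accounts.csv: $!";
while (<$in>) {
    my ($acct) = split /,/;    # account number assumed to be field 1
    $count{$acct}++;
}
close $in;

# pass 2: reread the file, keep only records whose key repeats
open $in, '<', 'accounts.csv' or die "accounts.csv: $!";
while (<$in>) {
    my ($acct) = split /,/;
    print if $count{$acct} > 1;
}
close $in;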