In reply to "searching for unique numbers into a string"

How slow? The following shows that it takes an average of about 0.7 milliseconds (0.0007 s) to dedup one 4k line:

#! perl -slw
use 5.010;
use strict;
use Time::HiRes qw[ time ];

sub uniq {
    my %uniq;
    undef @uniq{ @_ };
    keys %uniq;
}

## Build 1000 tab-separated lines of random numbers, each ~4096 chars long.
my @lines;
for ( 1 .. 1e3 ) {
    my $line = int( rand 1e6 );
    $line .= chr(9) . int( rand 1e6 ) while length( $line ) < 4096;
    push @lines, $line;
}

## Time splitting, uniquing, and rejoining every line.
my $start = time;
$_ = join chr(9), uniq( split chr(9), $_ ) for @lines;
my $stop = time;

printf "On random 4k lines, uniquing averaged %.6f seconds/line\n",
    ( $stop - $start ) / 1e3;

__END__
c:\test>junk
On random 4k lines, uniquing averaged 0.000695 seconds/line

c:\test>junk
On random 4k lines, uniquing averaged 0.000696 seconds/line

c:\test>junk
On random 4k lines, uniquing averaged 0.000712 seconds/line

c:\test>junk
On random 4k lines, uniquing averaged 0.000700 seconds/line
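For anyone unfamiliar with the idiom, the whole trick is the hash slice: assigning undef to @uniq{ @_ } creates one hash key per list element, duplicate elements collapse onto the same key, and keys() returns the survivors. A stripped-down sketch (note that the order of the surviving elements is not preserved, which is why the example sorts them):

```perl
use strict;
use warnings;

## Dedup a list via a hash slice: each element becomes a key,
## duplicates collapse, keys() returns the unique elements.
sub uniq {
    my %seen;
    undef @seen{ @_ };   # slice-assign undef: one key per element
    return keys %seen;   # NB: hash order, not input order
}

my @deduped = sort { $a <=> $b } uniq( 3, 1, 4, 1, 5, 9, 2, 6, 5, 3 );
print "@deduped\n";   # 1 2 3 4 5 6 9
```

If you need the first-seen order preserved, the usual variant is grep { !$seen{$_}++ } @_ instead.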

Which means processing your 3e6 lines should take around 35 minutes (plus a bit for the file processing).
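The 35-minute figure is just the measured per-line cost scaled up:

```perl
## Back-of-envelope: ~0.0007 s/line measured above, scaled to 3e6 lines.
my $seconds = 0.0007 * 3e6;                               # 2100 s
printf "%.0f seconds = %.0f minutes\n", $seconds, $seconds / 60;   # 2100 seconds = 35 minutes
```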

So how fast do you want it to run? Or more likely, what are you doing in your full code that is slowing things down?


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."