in reply to End of the Time Out Tyranny

How are you processing the list so that 100 addresses take a full second to check for duplicates? With a hash, Perl should find duplicates in 100 lines in milliseconds. For the whole file you might have to worry about memory usage, but as long as the machine isn't swapping, the processing should be very fast.
my %seen;
while (my $line = <$fh>) {
    chomp($line);
    print "$line\n" unless $seen{$line}++;
}

Replies are listed 'Best First'.
Re: Re: End of the Time Out Tyranny
by TIURIC (Initiate) on Feb 06, 2004 at 17:52 UTC
    Actually it does not take 1 second; I give the refresh a one-second pause between each page. I should have been clearer. I am definitely not writing a SPAM program; I'm not a fan of SPAMmers. Thanks for the enlightenment. TIURIC