in reply to Re^2: Removing duplicates in large files
in thread Removing duplicates in large files

What server? You didn't say anything about a server in your initial description. What is the CGI process doing? A 120,000-line file is not that big; Perl should be able to tear through it in a couple of seconds.
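For reference, the usual hash-based, one-pass dedupe is only a few lines of Perl. This is a minimal sketch, assuming you want to drop exact duplicate lines; the file names are placeholders:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Print each line only the first time we see it.
    # 'input.txt' and 'output.txt' are placeholder names.
    my %seen;
    open my $in,  '<', 'input.txt'  or die "Can't open input.txt: $!";
    open my $out, '>', 'output.txt' or die "Can't open output.txt: $!";

    while (my $line = <$in>) {
        print {$out} $line unless $seen{$line}++;
    }

    close $in;
    close $out;

At 120,000 lines the %seen hash fits comfortably in memory, so if something like this takes more than a second or two, the time is going somewhere other than the deduplication itself.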

The bottleneck is probably somewhere else: a slow algorithm, or network issues. Since you didn't say where the script is running, how it gets its data, or how it communicates with the user, we can't help you.
