50 thousand is better than several million... If I were you, my strategy would be:
- Normalize capitalization: lc
- Normalize white space: s/^\s+|\s+$//g; s/\s+/ /g
- Group all strings that normalize to the same value in a single place (a tied MLDBM or an in-memory hash: push @{ $db{$normalized} }, $original); the first sketch after this list covers these normalization and grouping steps
- Use Digest::Nilsimsa to hash all the keys, and index them using e.g. this method, as discussed in this thread (the original author may have more insight). When indexing this way, chunk the Nilsimsa hash into something that lets you bunch similar items together (second sketch below)
- Iterate over the Nilsimsa buckets and display any group of original texts whose edit distance (String::Approx) is too great for the computer to merge on its own; the user can then decide whether these are duplicates or not. Buckets with only one item are, of course, omitted (third sketch below)
- Employ a system where human-categorized items are remembered and undo is easy, to increase the efficiency of the human-assisted process
- A technique like Apple's Aperture's stack feature (see the shiny publicity videos) can be used: read single keypresses on two hotkeys (Term::ReadKey) and use them to increase or decrease the tolerance for the Nilsimsa hash difference or edit distance, so you can partition items into groups easily (last sketch below)
- For each group of computer- or human-identified duplicates, pick the canonical version you want to keep and insert it into the database
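
To make the first three bullets concrete, here is a minimal sketch of the normalize-and-group pass. It assumes the raw strings arrive one per line on STDIN and uses a plain in-memory hash; a tied MLDBM hash drops in the same way for larger data sets:

```perl
#!/usr/bin/perl
use strict;
use warnings;

my %db;
while ( my $original = <STDIN> ) {
    chomp $original;
    my $normalized = lc $original;            # normalize capitalization
    $normalized =~ s/^\s+|\s+$//g;            # trim leading/trailing whitespace
    $normalized =~ s/\s+/ /g;                 # collapse runs of whitespace
    push @{ $db{$normalized} }, $original;    # group originals by normalized form
}

# Anything that collapsed more than one original is an exact duplicate already.
for my $key ( keys %db ) {
    my @originals = @{ $db{$key} };
    print "Exact duplicates after normalization: @originals\n" if @originals > 1;
}
```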
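Next, a sketch of the Nilsimsa bucketing step, reusing %db from above. Bucketing on a fixed-length prefix of the digest is my assumption about one way to "chunk" the hash, and I'm going from memory on text2digest returning a hex string, so check the Digest::Nilsimsa docs:

```perl
use Digest::Nilsimsa;

my $nilsimsa = Digest::Nilsimsa->new;
my $CHUNK    = 8;    # prefix length to bucket on -- an assumption, tune it

my %buckets;
for my $normalized ( keys %db ) {
    my $digest = $nilsimsa->text2digest($normalized);    # hex digest of the key
    my $bucket = substr $digest, 0, $CHUNK;              # crude "chunk" of the hash
    push @{ $buckets{$bucket} }, $normalized;
}
```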
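And a sketch of the review step, reusing %buckets and %db. Pairs in the same bucket that differ by more than the tolerance are printed for the human to judge; String::Approx's adist gives the edit distance:

```perl
use String::Approx qw(adist);

my $TOLERANCE = 3;    # maximum edit distance the computer merges on its own

for my $bucket ( keys %buckets ) {
    my @keys = @{ $buckets{$bucket} };
    next if @keys == 1;    # single-item buckets are of course omitted

    for my $i ( 0 .. $#keys - 1 ) {
        for my $j ( $i + 1 .. $#keys ) {
            my $dist = abs adist( $keys[$i], $keys[$j] );
            next unless $dist > $TOLERANCE;    # close enough: treat as duplicates
            print "Needs a human (edit distance $dist):\n";
            print "  $_\n" for @{ $db{ $keys[$i] } }, @{ $db{ $keys[$j] } };
        }
    }
}
```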
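Finally, the hotkey idea from the Aperture-style bullet, assuming Term::ReadKey and a hypothetical regroup() that re-runs the partitioning with the new tolerance:

```perl
use Term::ReadKey;

my $tolerance = 3;

ReadMode 'cbreak';    # deliver keypresses immediately, no Enter needed
print "+ / - adjust tolerance, q quits\n";
while ( defined( my $key = ReadKey(0) ) ) {    # 0 = block until a key arrives
    last if $key eq 'q';
    $tolerance++ if $key eq '+';
    $tolerance-- if $key eq '-' && $tolerance > 0;
    print "tolerance = $tolerance\n";
    # regroup($tolerance);    # hypothetical: re-partition with the new value
}
ReadMode 'restore';    # always put the terminal back the way you found it
```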