There are great replies above, but note that hash collisions *could* occur even in legitimate data.
If you're worried about correctness, I suggest the following for each file:
- Normalize the data
- Compute a hash (MD5, SHA-1, whatever)
- If this is a new hash, insert $hashcode => [ $data ] into the seen dictionary.
- If this hash *does* exist in the seen dictionary, do a full compare against every entry in @{ $seen{$hashcode} }. Reject the input if it matches one of the existing entries; if by (unlikely) coincidence it matches none of them, accept it and push it onto that list (see the sketch after this list).
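As a minimal sketch of that bucket-and-compare step in Perl, assuming each record arrives as an array ref of fields and using normalize() as a stand-in for whatever canonicalization you actually do:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

my %seen;    # $hashcode => array ref of normalized records sharing that hash

# Hypothetical stand-in for your own normalization; here a record is
# assumed to be an array ref of fields.
sub normalize {
    my ($record) = @_;
    return join "\x00", map { lc } @$record;
}

# Returns true if $record was seen before; otherwise remembers it and
# returns false.
sub is_duplicate {
    my ($record)   = @_;
    my $normalized = normalize($record);
    my $hashcode   = md5_hex($normalized);

    if ( !exists $seen{$hashcode} ) {    # new hash: definitely not a dup
        $seen{$hashcode} = [$normalized];
        return 0;
    }

    # Hash already seen: full compare against every record in the bucket.
    for my $existing ( @{ $seen{$hashcode} } ) {
        return 1 if $existing eq $normalized;    # true duplicate
    }

    # Same hash but different data: a genuine collision. Keep the record.
    push @{ $seen{$hashcode} }, $normalized;
    return 0;
}
```

Digest::MD5 ships with core Perl; swap in Digest::SHA if you'd rather use SHA-1.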
I'd definitely recommend this more correct approach over the straightforward one if you need to do all this repeatedly. If you're just checking your imported data as a one-off, ignoring this issue is probably fine (just keep it in mind).
One question, though: don't the result pages return some sort of ID for each query result? If so, and if you can't trust this ID to guarantee data uniqueness, you should not include it in the hash. Strip out anything that isn't hard data: less noise, better experiment.
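For instance, a sketch of that stripping step, assuming each result is a hashref and that the untrusted ID arrives under a hypothetical 'result_id' key:

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Hash only the hard data: copy the record, drop the volatile field(s),
# and serialize with sorted keys so field order can't change the digest.
sub hash_hard_data {
    my ($record) = @_;
    my %copy = %$record;
    delete $copy{result_id};    # untrusted ID: keep it out of the hash
    return md5_hex( join "\x00", map { "$_=$copy{$_}" } sort keys %copy );
}
```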