in reply to Delete Duplicate Entry in a text file

Here it is with some common tools; the concepts translate readily into Perl. The grep strips out the blank lines, uniq removes consecutive duplicates, and then sed appends an extra newline to each line to put the blank separators back in. (Note that \n in the sed replacement needs GNU sed; BSD sed treats it as a literal n.) In Perl it might be easier to make a blank line your input record separator, and then you'd just have to drop consecutive duplicate records.

grep -v '^$' fail.txt | uniq | sed 's/$/\n/'

Aaron B.
Available for small or large Perl jobs; see my home node.