Random Walk is quite right... This is just to further demonstrate the case where the file is too big to hold in memory.
If each of your 'blocks' begins with an @:1107532976::1-like pattern, and each of those patterns is expected to be unique, then just focus on the lines that contain that pattern:
use strict;
use warnings;

my %entries = ();

while (<DATA>) {
    chomp;
    if (/^\@:\d+/) {    # if line starts with at-colon-digit(s)
        # $_ is the current line
        # $. is the current line number
        if (exists $entries{$_}) {
            print "Line $. duplicates line $entries{$_} [$_]\n";
        }
        else {
            $entries{$_} = $.;
        }
    }
}

1;

__DATA__
@:1107530184::1
kkkkkkkkkkkkmkmkmk kkkkkk confused.gif
@:1107530257:1107530439:1
kmkmkm <br>kmkmkm <br> <br>Fri Feb 4 10:17:37 2005 <br> mad.gif
@:1107530709::1
ygyg ygygygyg lol.gif
@:1107530717::1
ygyg ygygygyg lol.gif
@:1107530963::1
cool help cool.gif
@:1107532649:1107532689:1
k <br>kkkkkkkkkkkkkkkkk <br> <br>Fri Feb 4 10:57:29 2005 <br> lol lol.gif
@:1107530257:1107530439:1
kmkmkm <br>kmkmkm <br> <br>Fri Feb 4 10:17:37 2005 <br> mad.gif
@:1107532758::1
lll Lets mad.gif
@:1107532976::1
lll Lets mad.gif
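For a file that really is too big for memory, the same idea works with any open filehandle: memory grows only with the number of unique header lines, never with block contents. Here's a minimal sketch of that, with the logic wrapped in a sub so it can be reused; the name find_dup_headers is invented for illustration, not something from the thread.

```perl
use strict;
use warnings;

# Scan a filehandle line by line; remember only the @:...-style header
# lines and report any header seen before. Returns a list of
# [dup_line_no, first_line_no, header] triples.
sub find_dup_headers {
    my ($fh) = @_;
    my (%seen, @dups);
    while (my $line = <$fh>) {
        chomp $line;
        next unless $line =~ /^\@:\d+/;    # only header lines are compared
        if (exists $seen{$line}) {
            push @dups, [$., $seen{$line}, $line];
        }
        else {
            $seen{$line} = $.;
        }
    }
    return @dups;
}

# Usage on an in-memory sample; for a huge file you would instead pass a
# filehandle from open my $fh, '<', $path.
my $sample = "\@:1::1\nbody\n\@:2::1\nbody\n\@:1::1\nbody\n";
open my $fh, '<', \$sample or die $!;
printf "Line %d duplicates line %d [%s]\n", @$_
    for find_dup_headers($fh);
```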
In reply to Re^2: Regular Expression to find duplicate text blocks
by Art_XIV
in thread Regular Expression to find duplicate text blocks
by Anonymous Monk