"Also it is huge file so i cannot use array or hash."
How huge?
Have you tried it with a hash - you might be surprised :)
Update: Note - as chrism01 correctly pointed out, the code below won't work where a line appears an odd number of times (with three copies, the toggle creates the key, deletes it, then creates it again, so the line wrongly looks unique). See below for a solution that I believe addresses that issue.
Give the following a go:
#!/usr/bin/perl -w
use strict;

my %wanted;
while (<DATA>) {
    exists $wanted{$_} ? delete $wanted{$_} : $wanted{$_}++;
}
print sort keys %wanted;

__DATA__
a1a
a1a
b1b
c1c
c1c
d1d
d1d
e1e
f1f
g1g
g1g
h1h
h1h
i1i
j1j

Output:

b1b
e1e
f1f
i1i
j1j
Update: or as a one-liner:
perl -ne 'exists $x{$_}?delete $x{$_}:$x{$_}++;}{print for sort keys %x;' < input.txt > output.txt
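In case the }{ looks like a typo: -n wraps the code in an implicit while loop, so those stray braces close that loop and open a bare block that runs once at end of input. Expanded out, the one-liner is roughly this (a sketch, reading from DATA rather than input.txt):

#!/usr/bin/perl
use strict;
use warnings;

my %x;
while (<DATA>) {                # the implicit loop that -n provides
    exists $x{$_} ? delete $x{$_} : $x{$_}++;
}                               # the "}" of "}{" closes the loop here
{                               # the "{" opens a block that runs at EOF
    print for sort keys %x;
}

__DATA__
a1a
a1a
b1b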
Try running that on your input file. The point of using a hash in that way is that you only hold keys for lines that have (so far) appeared an odd number of times - each duplicate deletes the key its twin created - so it's actually quite memory-efficient. Whenever you are thinking "unique", a hash is almost certainly what you want.
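And for the odd-duplicates case that chrism01 raised: counting first and filtering afterwards sidesteps the toggle problem. This is just a rough sketch of that idea (the %seen name and the sample data are mine), not necessarily the solution referred to above:

#!/usr/bin/perl
use strict;
use warnings;

my %seen;
while (<DATA>) {
    $seen{$_}++;    # count every occurrence of each line
}

# A line is genuinely unique only if it was seen exactly once, so an
# odd number of duplicates (like the three a1a lines below) can no
# longer slip through the way it does with the toggle version.
print grep { $seen{$_} == 1 } sort keys %seen;

__DATA__
a1a
a1a
a1a
b1b
c1c
c1c

which prints just b1b here. It keeps one key per distinct line rather than dropping keys as even-numbered duplicates arrive, so it holds a little more in memory than the toggle, but the answer is right whatever the counts. As a one-liner, the same idea might look like:

perl -ne '$x{$_}++;}{print grep { $x{$_} == 1 } sort keys %x;' < input.txt > output.txt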
Cheers,
Darren :)