The following should be fast, it handles arbitrarily large files, and it preserves order:
perl -ne'print "$.\t$_"' input.txt \
    | sort -k 2 \
    | uniq -f 1 \
    | sort -n \
    | cut -f 2- \
    > output.txt
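To see why this works, here is how a small made-up input flows through each stage (the file name and sample lines are purely illustrative; each stage's output is shown under its command, with tab-separated fields):

$ cat input.txt
apple
banana
apple
cherry

$ perl -ne'print "$.\t$_"' input.txt        # tag every line with its line number
1	apple
2	banana
3	apple
4	cherry

$ perl -ne'print "$.\t$_"' input.txt | sort -k 2        # sort by content so duplicates become adjacent
1	apple
3	apple
2	banana
4	cherry

$ perl -ne'print "$.\t$_"' input.txt | sort -k 2 | uniq -f 1        # drop duplicates, ignoring the line-number field
1	apple
2	banana
4	cherry

$ perl -ne'print "$.\t$_"' input.txt | sort -k 2 | uniq -f 1 | sort -n | cut -f 2-        # restore original order, strip the numbers
apple
banana
cherry

Because sort(1) spills to temporary files when its input exceeds available memory, the pipeline copes with files far larger than RAM. One caveat: a non-stable sort breaks ties on the content key with a last-resort whole-line comparison, so the copy that survives uniq is not necessarily the earliest one; if keeping the first occurrence matters, GNU sort's -s (stable) flag on the first sort (sort -s -k 2) guarantees it.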