in reply to Delete duplicate data in file
This is example b) from perldoc -q 'How can I remove duplicate elements from a list or array?'. Of course it could be less efficient than reading line by line with a single hash, since Tie::File pulls the whole file through the array interface, but I think Tie::File could give you other ideas about solving your problem; I just thought this might throw some new light on it. Note that under strict the variables must be declared (the perldoc example omits this):

    #!/usr/bin/perl
    use warnings;
    use strict;
    use Tie::File;

    # tie the file to an array; edits to @file are written back to disk
    tie my @file, 'Tie::File', 'myfile' or die "Can't tie file: $!";

    my %saw;
    # keep only the first occurrence of each line
    @file = grep(!$saw{$_}++, @file);

    untie @file;
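For comparison, here is a minimal sketch of the line-by-line approach mentioned above: a single %seen hash remembers which lines have already appeared, so only one pass and one hash entry per distinct line are needed. (The sample data and variable names are my own, for illustration; in practice you would open the real input file instead of the in-memory string.)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sample input standing in for a real file (a filehandle opened on a
# scalar reference behaves like a file, Perl 5.8+).
my $data = "apple\nbanana\napple\ncherry\nbanana\n";
open my $in, '<', \$data or die "Can't open input: $!";

my %seen;
my @unique;
while (my $line = <$in>) {
    # push the line only the first time we see it
    push @unique, $line unless $seen{$line}++;
}
close $in;

print @unique;    # apple, banana, cherry (first occurrences, in order)
```

With a real file you would print each kept line straight to an output file inside the loop, so memory usage depends on the number of distinct lines, not the file size.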