in reply to Removing duplicates from list

All you need to do is put the names in a hash and print them out later (the quick fix) -

open(Mer, "Extracted.c") or die "Cannot open file: $!";
my %vars;
for my $mer (<Mer>) {
    my $control = 0;
    while ($control != 15) {    # strip the first fifteen spaces, as in the original code
        $mer =~ s/ //;
        $control++;
    }
    $vars{$mer} = 1;            # duplicate lines collapse into a single hash key
}
print "$_" foreach (keys %vars);
Another method is to use the hash to eliminate duplicates on the fly -
open(Mer, "Extracted.c") or die "Cannot open file: $!";
my %vars;
for my $mer (<Mer>) {
    my $control = 0;
    while ($control != 15) {    # same space-stripping as above
        $mer =~ s/ //;
        $control++;
    }
    next if $vars{$mer};        # already printed this line once
    $vars{$mer} = 1;
    print $mer;
}
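
As an aside, that on-the-fly check is often compressed into the post-increment "seen" idiom. A minimal sketch (leaving out the space-stripping from the snippets above):

use strict;

open(MER, "Extracted.c") or die "Cannot open file: $!";
my %seen;
while (my $line = <MER>) {
    # $seen{$line}++ is false the first time a line appears,
    # so each distinct line is printed exactly once
    print $line unless $seen{$line}++;
}
close(MER);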
The following is how I would do this -
use strict;
use IO::File;

my $Mer = new IO::File "Extracted.c", "r" or die "Cannot open file!";
my %vars;
while (my $mer = <$Mer>) {
    ...    # duplicate check goes here
}
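
For completeness, here is one way that loop body might be filled in, reusing the duplicate check from the snippets above (just a sketch, not the only way to finish it):

use strict;
use IO::File;

my $Mer = IO::File->new("Extracted.c", "r") or die "Cannot open file!";
my %vars;
while (my $mer = <$Mer>) {
    next if $vars{$mer};    # skip lines that have already been printed
    $vars{$mer} = 1;
    print $mer;
}
$Mer->close;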
Try to use IO::File to open a file in perl. It's the preferred method. ;-)

Re: Re: Removing duplicates from list
by ysth (Canon) on Oct 14, 2003 at 09:36 UTC
    That last statement is pretty odd. open() has had new enhancements in just about every perl release... why would that happen if it were deprecated?
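
    For instance, a plain three-argument open with a lexical filehandle needs no module at all (a quick sketch using the same file name as above):

    use strict;

    open(my $fh, '<', 'Extracted.c') or die "Cannot open Extracted.c: $!";
    while (my $line = <$fh>) {
        # ... process $line ...
    }
    close $fh;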
Re: Re: Removing duplicates from list
by vek (Prior) on Oct 15, 2003 at 00:32 UTC

    Try to use IO::File to open a file in perl. It's the preferred method. ;-)

    It is? It may be one method, but I don't think it's the preferred method.

    -- vek --