in reply to Removing duplicates
Update: You might want to read perldoc -q duplicate for a general discussion of using a hash to check for duplicates.
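For reference, the idiom that FAQ entry describes comes down to a "seen" hash (a minimal sketch; the data here is made up for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The post-increment returns 0 (false) the first time a key is
# seen, so grep keeps only the first occurrence of each item.
my %seen;
my @unique = grep { !$seen{$_}++ } qw(a b a c b);
print "@unique\n";    # prints "a b c"
```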
I know giving a complete solution is sometimes frowned upon here, but sometimes I just can't help myself. Here is the first script:
#!/usr/bin/perl
use strict;
use warnings;

# Print each net name the first time it is seen; the hash
# %nets remembers names we have already printed.
my %nets;
/^Net '([^']+)'/ and not $nets{$1}++ and print "$1\n" while <STDIN>;
And the second:
#!/usr/bin/perl
use strict;
use warnings;
use File::Slurp;

# Build one alternation pattern from the net names in file2,
# then print only the lines of STDIN that do not match it.
my $file2 = shift or die "Usage: $0 file2 < another-file";
my $nets  = join "|", map { chomp; $_ } read_file($file2);
/$nets/ or print while <STDIN>;
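One caveat: if the net names could ever contain regex metacharacters (a `+` or `.`, say), you would want to escape them with `quotemeta` before joining them into a pattern. A sketch with hypothetical names:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical net names, one containing the metacharacter '+'
my @names = ('CLK+', 'VCC');

# quotemeta escapes metacharacters so each name matches literally
my $nets = join "|", map { quotemeta } @names;
print "matched\n" if 'net CLK+ here' =~ /$nets/;
```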
And you can run the whole chain with something like:
perl script1.pl < file1 > file2
perl script2.pl file2 < another-file
Update: just for fun, here's a version that will do everything in one step:
#!/usr/bin/perl
use strict;
use warnings;
use File::Slurp;

# Extract the net names straight from file1 and filter
# STDIN against them in a single pass.
my $file1 = shift or die "Usage: $0 file1 < another-file";
my $nets  = join "|", map { chomp; /^Net '([^']+)'/ and $1 or () } read_file($file1);
/$nets/ or print while <STDIN>;
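The extract-as-you-read trick in that map block can be seen in isolation (the sample lines here are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Each line either yields its captured net name or an empty
# list, so non-matching lines simply vanish from the output.
my @lines = ("Net 'ABC'\n", "some other line\n", "Net 'XYZ'\n");
my $nets  = join "|", map { chomp; /^Net '([^']+)'/ and $1 or () } @lines;
print "$nets\n";    # prints "ABC|XYZ"
```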
Which would be used like:
perl script1+2.pl file1 < another-file
Replies are listed 'Best First'.

Re: Re: Removing duplicates
by RCP (Acolyte) on Mar 03, 2004 at 11:50 UTC
by revdiablo (Prior) on Mar 03, 2004 at 19:28 UTC
by RCP (Acolyte) on Mar 04, 2004 at 12:33 UTC