I'm still pretty weak on hashes, so I'm hoping someone can help. I'm opening a file of domains and pulling out only the root domain from each line. Then I'm trying to use a hash and grep to remove any duplicates, but all I get back are blank lines.
#!/usr/bin/perl
$upload = "/var/tmp/work/upload";
$work   = "/var/tmp/work/";
$input3 = "$upload/domain.csv";
system ("dos2unix $input3");
open (IN,  "$input3");
open (OUT, ">>$work/local.rules");
while (<IN>) {
    chomp();
    if ($_ =~ /^.+\.([A-Za-z0-9-_]+\.[A-Za-z]{2,})$/) {
        $domain = $1;
        %seen   = ();
        @unique = grep { ! $seen{ $domain }++ } @array;
        print "@unique\n";
    }
}
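For what it's worth, the blank lines most likely come from the grep running over @array, which is never populated, while %seen is also reset on every pass through the loop, so @unique is always empty. Below is a minimal sketch of the hash-and-grep dedupe idiom this seems to be aiming for: collect the root domains first, then dedupe once after the loop. The paths and regex are copied from the post; the rest of the structure is an assumption, not the original script.

#!/usr/bin/perl
use strict;
use warnings;

# Paths taken from the post; adjust as needed.
my $upload = "/var/tmp/work/upload";
my $input3 = "$upload/domain.csv";

open my $in, '<', $input3 or die "Cannot open $input3: $!";

# Collect every root domain first.
my @domains;
while (my $line = <$in>) {
    chomp $line;
    if ($line =~ /^.+\.([A-Za-z0-9_-]+\.[A-Za-z]{2,})$/) {
        push @domains, $1;
    }
}
close $in;

# Hash-and-grep dedupe: %seen is declared once, and the grep runs
# over the collected list rather than an empty array. Each domain
# is kept only the first time it is seen.
my %seen;
my @unique = grep { !$seen{$_}++ } @domains;
print "$_\n" for @unique;

The same %seen test can also be done inside the read loop (print "$1\n" unless $seen{$1}++;) if you'd rather avoid building the intermediate array.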