One of the standard tricks for detecting duplicates is to use a hash:
    # Read passwd files from @ARGV, outputting one record per
    # username
    my %users;
    while (<>) {
        my ($user) = split /:/, $_, 2;
        next if exists $users{$user};
        print;
        $users{$user} = 1;
    }

The first time username foo is found, $users{'foo'} does not exist, so the record is printed and $users{'foo'} is set to 1. The second time username foo is found, $users{'foo'} is now 1, so the record is skipped.
You didn't want to skip records, but rather to rekey them so that the duplicates get different keys; you wanted your output flat file to have unique keys. Doing that with the passwd example:
    # Read passwd files from @ARGV, outputting one record per
    # username.  Duplicate usernames are renamed to be unique
    my %users;
    while (<>) {
        my ($user) = split /:/, $_, 2;
        # next if exists $users{$user};
        $_ = "a$_", redo if exists $users{$user};
        print;
        $users{$user} = 1;
    }

The $_ = "a$_", redo statement that replaces the next prepends an "a" to the incoming line and then reprocesses it. I decided to reprocess it rather than just prepend the "a" once because if I had three users "foo" in the input files, I wouldn't want to get users "foo", "afoo", and "afoo"; I'd want "foo", "afoo", and "aafoo".
I hope this helps.
In reply to Re: Search for dupe && append char if found
by BlaisePascal
in thread Search for dupe && append char if found
by Limo