In reply to "removing duplicates lines plus strings from a file"

If you already have some code, please show it to us. If not, try the following recipe, and if that fails, feel free to come back with the code sample that troubles you.

In case your only problem is to remove duplicate URLs, the following recipe might be helpful:

(1) Read the input file line by line and extract the URL from each line.
(2) Print a line only the first time its URL appears, using a hash to remember the URLs already seen.

(1,2): Open another file for writing and print to that filehandle, unless output to STDOUT is sufficient.
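
A minimal sketch of that recipe (the input file name and the name@url@text record format are taken from the reply below):

    use strict;
    use warnings;

    my %seen;    # URLs printed so far
    open my $in, '<', 'temp.txt' or die "temp.txt: $!";
    while (my $line = <$in>) {
        chomp $line;
        my (undef, $url) = split('@', $line);   # field two is the URL
        next unless defined $url;
        print "$line\n" unless $seen{$url}++;   # first occurrence wins
    }
    close $in;

Output goes to STDOUT, so a shell redirect such as perl dedup.pl > unique.txt (dedup.pl being whatever you call the script) covers the "open another file" note above.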

Update: Or perform a Super Search with this query for more inspiration...

Update: In response to the code presented below:
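
The diagnosis in the first reply below is correct: assigning into the hash unconditionally means the last line wins for each URL. A minimal change, assuming the same variables as in that code, is to store a value only when its key is absent:

    foreach my $line (@lines) {
        chomp($line);
        my ($name, $url, $text) = split('@', $line);
        # Store only the first line seen for each URL;
        # later duplicates no longer overwrite it.
        $file_hash{$url} = $line unless exists $file_hash{$url};
    }

Note that keys %file_hash still comes back in no particular order, which is exactly what the second reply below asks about.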

Re^2: removing duplicates lines plus strings from a file
by kirpy (Initiate) on Sep 18, 2011 at 20:20 UTC
    I tried doing the following ---
    my ($url, $name, $text, @lines, $key, $value, $line);
    my $Docs = "temp.txt";
    # temp.txt is of the format ---
    #name1@url1@text1
    #name1@url1@text1
    #name1@url1@text11
    #name2@url2@text2
    #name2@url2@text21
    #name3#url3@text3,etc...
    my %file_hash;

    open(FILE, $Docs) or die "$Docs: $!";
    @lines = <FILE>;
    close FILE;

    foreach $line (@lines) {
        chomp($line);
        ($name, $url, $text) = split('@', $line);
        chomp($url);
        $key   = $url;
        $value = $line;
        # A later duplicate overwrites the earlier entry here.
        $file_hash{$key} = $value;
    }

    open(OUT, ">$Docs") or die "$Docs: $!";
    for my $key (keys %file_hash) {
        print OUT "$file_hash{$key}\n";
    }
    close OUT;

    My guess as to what is happening here is that, since I am using a hash, I am actually storing only the last match for each $key, which is why I am not getting what I want in the OUT file.
    the OUT file that I need is ---
    #name1@url1@text1
    #name2@url2@text2
    #name3#url3@text3,etc...
    thank you all for your patience........
Re^2: removing duplicates lines plus strings from a file
by kirpy (Initiate) on Sep 19, 2011 at 06:47 UTC

    thank you for your comments (..hints).. I got that, and it works perfectly... just wanted to ask one more thing: is it possible to do the same using regex matching? And will it preserve the original ordering of the lines that is lost by using a hash?
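
The ordering can be preserved without giving up the hash: use it only to remember which URLs have already been printed, and emit each line the first time its URL appears while streaming through the file. A regex can take over the field extraction, but it cannot by itself remember what earlier lines contained, so a %seen hash stays. A minimal sketch, with the output file name only a placeholder:

    use strict;
    use warnings;

    my %seen;
    open my $in,  '<', 'temp.txt'  or die "temp.txt: $!";
    open my $out, '>', 'dedup.txt' or die "dedup.txt: $!";
    while (my $line = <$in>) {
        chomp $line;
        # Capture the URL between the first and second '@' separators.
        next unless $line =~ /^[^\@]*\@([^\@]*)/;
        my $url = $1;
        # %seen only records what was already printed, so the
        # output keeps the original line order.
        print {$out} "$line\n" unless $seen{$url}++;
    }
    close $in;
    close $out;

Here the hash is just a "seen before?" lookup table rather than the container for the output, so nothing about the line order is lost.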