Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi Monks!
I have a simple form that takes some directory paths and saves them to a text file. My goal is to check the user input to make sure he/she doesn't input the same path twice. I am having a little trouble with this; here is the code I have. If someone has a better way for me to do this, that would be great.
Here is some of the code that does that:
    open(INPUTFILE, "$database") or die "cannot open file: $!";
    while (<INPUTFILE>) {
        my $path_check = $_;    # each entry from the db text file
        chomp($path_check);
        # $input{'nfirst'} and $input{'slast'} are parameters from the form
        if ( ($path_check eq $input{'nfirst'}) || ($path_check eq $input{'slast'}) ) {
            print "<b>Don't print entry to db text file and exit the loop here</b>";
        } else {
            print "Print new value(s) to db text file here and exit the loop</b><br>";
        }
    }
    close(INPUTFILE);

db sample:

    D:/myfiles/Test/
    E:/pics/cgi-bin/new/
    C:/temp/
    D:/new/files/
    F:/new/more/
    C:/test/cgi-bin/ac/^ok
    C:/new/files/   # This line shouldn't be allowed here, it is duplicated.
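For reference, one way to restructure this check is to load the existing paths into a hash first and then test each form value for membership. This is only a sketch: the `@db_lines` and `@form_values` arrays below are hypothetical stand-ins for the lines of `$database` and the `%input` form parameters in the real CGI script.

```perl
use strict;
use warnings;

# Hypothetical stand-ins: in the real script these come from the
# $database file and the CGI %input hash respectively.
my @db_lines    = ("D:/myfiles/Test/", "C:/temp/", "D:/new/files/");
my @form_values = ("C:/temp/", "E:/pics/cgi-bin/new/");

# Build a lookup of every path already in the db file.
my %seen;
$seen{$_} = 1 for @db_lines;

my @to_append;
for my $new_path (@form_values) {
    if ($seen{$new_path}) {
        print "Duplicate - skip $new_path\n";
    } else {
        push @to_append, $new_path;   # safe to write to the db file
        $seen{$new_path} = 1;         # also catches dupes between the two fields
    }
}
```

Loading the file once into a hash also avoids re-reading it for every form field, and marking each accepted path in `%seen` means the two form fields can't duplicate each other either.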

Thanks a lot!!!

Replies are listed 'Best First'.
Re: Entry Integrity
by McDarren (Abbot) on Jun 02, 2006 at 14:36 UTC
    "..to make sure he/she doesn't input the same path twice.."

    Whenever you talk about _unique_ _anything_ in Perl, then it usually means that you need a hash.

    my %unique_paths = ();
    while (<DATA>) {
        chomp;
        my ($drive, $path) = split /:/;
        $unique_paths{$path}++;
    }
    for (keys %unique_paths) {
        print "$_\n";
    }
    __DATA__
    D:/myfiles/Test/
    E:/pics/cgi-bin/new/
    C:/temp/
    D:/new/files/
    F:/new/more/
    C:/test/cgi-bin/ac/
    C:/new/files/
Re: Entry Integrity
by CountZero (Bishop) on Jun 02, 2006 at 15:23 UTC
    This is a job for List::MoreUtils:
    uniq LIST
        Returns a new list by stripping duplicate values in LIST. The order of
        elements in the returned list is the same as in LIST. In scalar
        context, returns the number of unique elements in LIST.
    my @x = uniq 1, 1, 2, 2, 3, 5, 3, 4; # returns 1 2 3 5 4
    my $x = uniq 1, 1, 2, 2, 3, 5, 3, 4; # returns 5
    And as you know you can slurp the whole of the input-file at once by assigning directly to an array.

    use strict;
    use List::MoreUtils qw/uniq/;

    my @all_no_dup = uniq <DATA>;
    print @all_no_dup;
    __DATA__
    1
    5
    9
    4
    8
    1
    5
    66
    57
    2
    9
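If installing List::MoreUtils is not an option, the classic core-Perl equivalent of `uniq` is a one-line `grep` over a seen-hash. A small sketch, using a hypothetical `@paths` list in place of the slurped file:

```perl
use strict;
use warnings;

# Core-Perl equivalent of List::MoreUtils::uniq: keep each value the
# first time it is seen, preserving the original order.
my @paths = ("C:/temp/", "D:/new/files/", "C:/temp/", "F:/new/more/");
my %seen;
my @unique = grep { !$seen{$_}++ } @paths;
print "$_\n" for @unique;
```

The post-increment returns the old count, so the `grep` block is true only on a value's first appearance.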

    CountZero

    "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law

Re: Entry Integrity
by jesuashok (Curate) on Jun 02, 2006 at 14:13 UTC
    Hi

    Sample code to avoid duplicates in the way you like:

    my %avoid_dup = ();
    while ( <DATA> ) {
        my ($drive, $name) = split(/:/);
        if ( !$avoid_dup{$name} ) {
            # do some stuff
            $avoid_dup{$name} = 1;
        }
    }
    __DATA__
    D:/myfiles/Test/
    E:/pics/cgi-bin/new/
    C:/temp/
    D:/new/files/
    F:/new/more/
    C:/test/cgi-bin/ac/^ok
    C:/new/files/   # This line shouldn't be allowed here, it is duplicated.

    "Keep pouring your ideas"
Re: Entry Integrity
by girarde (Hermit) on Jun 03, 2006 at 19:58 UTC
    My approach in this case is to take potentially duplicate inputs and make them hash keys, like so:
    while ($i = <STDIN>) {
        chomp $i;
        if ($i) {
            $buffer{$i} = 1;
        } else {
            last;
        }
    }

    and then output with

    sort keys %buffer.

    On systems where paths are not case-sensitive, you should uc or lc the input paths as well.