in reply to Re: Re: Re: Comparing two files
in thread Comparing two files

It's only fair to follow up on my own question. Thanks to "dragonchild" for the insights and tips on how to solve my issue. The good news is that I have solved it, though not the way you showed me (hashes). I know some will say that hashes would be the best way to approach it, but at this point, at least, I try to use the skills I already know to their fullest. Plus, I was in a time crunch, so I could not spend too much time dissecting hashes and making them work. In any case, here is my final code, which works:

foreach (@matched) {
    my @record = split(/,/, $_);
    my $lic = $record[$#record];
    chomp($lic);
    my ($lname, $fname) = split(/\s+/, uc($record[5]));
    my $name = "$lname " . substr($fname, 0, 2);

    # If we find a matching record, then we write it out to a file
    if (grep /$name/, @phoneBook) {
        my @line = grep /$name/, @phoneBook;
        foreach my $rec (0 .. $#line) {
            if ($rec == 0) {
                # In case I had more than one record (like the same user
                # multiple times), I only limit results to 1
                @line = split(/,/, $line[$rec]);
                print RESULTS "$record[0],UNKNOWN,$lic,SLC,$line[0],$line[1],$line[2],$line[3],$line[4],$name\n";
            }
        }
    }
    else {
        chomp(@record);
        print UNMATCHED "$record[0],$record[5],$record[7],$record[2],$record[3],$record[4],UNKNOWN,\\N,\\N\n";
    }
}
A use of Perl's grep function solved my problem in a couple of lines, rather than my having to write functions and use hashes.... :-)
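For comparison, the hash-based lookup dragonchild suggested might have looked roughly like the sketch below. It is untested and makes one loud assumption: that the "LASTNAME FI" key can be built from a known field of each @phoneBook entry (field 0 here, which is a guess about the data format); %phone_index is just an illustrative name.

    # Hedged sketch of the hash approach -- illustrative only.
    # ASSUMPTION: the name lives in field 0 of each @phoneBook entry;
    # adjust the index for the real record format.
    my %phone_index;
    foreach my $entry (@phoneBook) {
        my @fields = split /,/, $entry;
        my ($lname, $fname) = split /\s+/, uc($fields[0]);
        my $key = "$lname " . substr($fname, 0, 2);
        # Keep only the first entry per key, mirroring the "limit to 1" logic
        $phone_index{$key} = $entry unless exists $phone_index{$key};
    }

    # Each lookup is then a single hash access instead of a grep scan:
    if (my $entry = $phone_index{$name}) {
        my @line = split /,/, $entry;
        # ... print to RESULTS as before ...
    }

Building the index is one pass over @phoneBook; after that, each record in @matched costs a constant-time lookup instead of a full scan, which is where the hash version wins on large files.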

But again, thanks for all your help!

Re(5): Comparing two files
by dragonchild (Archbishop) on Sep 13, 2001 at 16:57 UTC
    First off, this iteration is significantly improved over the last version. This is a very good thing!

    Secondly, I'm glad you found my response useful. That you chose not to use hashes is your own business. I just call 'em as I see 'em.

    A few thoughts on your current solution:

    1. Good usage of my. I commend you.
    2. While chomp does work over a list, I had to look that up just now. I suspect that most Perl'ers wouldn't know that, either. I'd recommend commenting that. (Or, I'm just stupid, which is a distinct possibility!)
    3. chomp the line, then split it. It's more intuitive. Or, just do the chomp over @matched. Plus, you chomp @record when it's already been chomped above. That's redundant.
    4. You can find a better variable name than $lic. Don't shorten a variable name. You'll spend more time trying to figure out what that variable was than typing a longer name. Or, since you only use it once, why even create it?
    5. You do a grep twice through @phoneBook. It's better to get the matches, then check to see if you have any. Only one grep, which can be an expensive action.
    6. If you only want to work with the first match, why loop through all the matches? If you only want the first, use $line[0] or, even better, discard all the matches but the first.
    7. If you're doing something across 5 records, use an array slice or a for-loop. It's harder to read if it's all written out. I know that if I see five explicit accesses to an array, I'm looking for a reason. Hopefully, the reason isn't that the writer doesn't know about slicing or for-loops. :)
    chomp @matched;
    foreach my $match (@matched) {
        my @record = split(/,/, $match);
        my ($lname, $fname) = split(/\s+/, uc($record[5]));
        my $name = "$lname " . substr($fname, 0, 2);
        if (my ($first_matched) = grep /$name/, @phoneBook) {
            my @line = split /,/, $first_matched;
            print RESULTS "$record[0],UNKNOWN,$record[$#record],SLC,";
            print RESULTS "$line[$_]," for (0 .. 4);
            print RESULTS "$name\n";
        }
        else {
            print UNMATCHED "$record[$_]," for (0, 5, 7, 2, 3, 4);
            print UNMATCHED "UNKNOWN,\\N,\\N\n";
        }
    }
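    Point 7's array-slice alternative, spelled out, might look like the sketch below. It is illustrative only and assumes the same @record and @line layouts as above; join builds each CSV line in one go instead of printing a field at a time, and produces the same output as the for-loop version.

        # Sketch: the same two outputs built with join and array slices
        # (assumes @record and @line are laid out as in the code above).
        if (my ($first_matched) = grep /$name/, @phoneBook) {
            my @line = split /,/, $first_matched;
            print RESULTS join(',', $record[0], 'UNKNOWN', $record[-1], 'SLC',
                               @line[0 .. 4], $name), "\n";
        }
        else {
            print UNMATCHED join(',', @record[0, 5, 7, 2, 3, 4],
                                 'UNKNOWN', '\N', '\N'), "\n";
        }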

    ------
    We are the carpenters and bricklayers of the Information Age.

    Don't go borrowing trouble. For programmers, this means "Worry only about what you need to implement."

      This group is a gold mine indeed. :-) Some of the solutions, such as the usage of print you provided here as an example, can HARDLY be found in any of the books, and I've been through quite a few of them. I learned something new today again... :-)

      Thanks a lot!