Yes, that is about exactly how I was doing it. I am testing borisz's way now and that looks really fast!
There are two problems with the algorithm I outlined, and the second is bigger than the first.
The reason problem #2 is bigger is that I/O is extremely expensive. The following is an intermediate solution between borisz's excellent answer and your first algorithm. Even though I'm still iterating over the inner loop N times, I'm doing it in memory, which is much faster.
open my $fh1, '<', 'first.csv' or die "first.csv: $!";
my @first = <$fh1>;
close $fh1;

open my $fh2, '<', 'second.csv' or die "second.csv: $!";
my @second = <$fh2>;
close $fh2;

FIRST: foreach my $line (@first) {
    foreach my $otherline (@second) {
        if ( $line eq $otherline ) {
            print "Found a duplicate!\n";
            next FIRST;
        }
    }
}
Note that I've also added short-circuiting to stop the inner loop as soon as I've found a dupe. That will reduce the runtime quite a bit. The hash solution is still faster, though, and will use less memory.
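The hash solution mentioned above could be sketched like this (a rough version, not taken from the thread; `find_dupes` is my name for it): index the first file's lines in a hash in O(N), then each line of the second file becomes a single O(1) lookup, so the total work drops from O(N*M) comparisons to roughly O(N+M).

```perl
use strict;
use warnings;

# Return every element of the second list that also appears in the first.
sub find_dupes {
    my ( $first, $second ) = @_;
    my %seen;
    $seen{$_} = 1 for @$first;          # O(N): build the index once
    return grep { $seen{$_} } @$second; # O(M): one hash probe per line
}

# Same placeholder file names as the loop version above; guarded so the
# sketch still runs when the CSVs are absent.
if ( -e 'first.csv' && -e 'second.csv' ) {
    open my $fh1, '<', 'first.csv'  or die "first.csv: $!";
    my @first = <$fh1>;
    close $fh1;

    open my $fh2, '<', 'second.csv' or die "second.csv: $!";
    my @second = <$fh2>;
    close $fh2;

    print "Found a duplicate!\n" for find_dupes( \@first, \@second );
}
```

Note that this treats whole lines (including their newlines) as keys, just as the `eq` comparison in the loop version does.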
In reply to Re^3: Binary Search
by dragonchild
in thread Binary Search
by dave_pl