in reply to Re^7: compare two text file line by line, how to optimise
in thread compare two text file line by line, how to optimise

File1 : 3874004 lines, 6050371 words, 2413 unique in trans3
File2 : 4305242 lines, 6457863 words, 2313 unique in gh3-3.n
Time  : 96 seconds

I work with an old Core 2 Duo T7100 at 1.8 GHz.

Replies are listed 'Best First'.
Re^9: compare two text file line by line, how to optimise
by poj (Abbot) on Feb 28, 2016 at 16:05 UTC

    Add a print when at least 1 of the words matches, if that's the output you want

    while (my $line = <FIC>) {
      my @words = split /\s+/, lc $line;
      ++$uniq2{$_} for @words;
      $words2 += @words;
      ++$count2;
      my @match = grep $uniq1{$_}, @words;
      print $line if @match >= 1;   # <-- add here
    }
    poj

      Thank you very much, this is really fast.

      Can you please explain the major changes you applied, and the mistakes I must avoid, so I can speed up my code like you did?

      I hope I can modify it later to produce the output I specified at the beginning of the post, especially the line numbers that correspond to the intersection.

      If I have problems I will let you know.

      You were very kind, thank you very much.
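      A minimal sketch of one way to get those line numbers, building on poj's hash approach above (the word lists here are made up for illustration; with a real filehandle the counter below would simply be Perl's built-in $. variable):

      ```perl
      #!/usr/bin/perl
      use strict;
      use warnings;

      # %uniq1 stands in for the unique words of file1, as in poj's code.
      my %uniq1 = map { $_ => 1 } qw(apple cherry);

      # Stand-in for the lines of file2.
      my @file2 = ("apple pie\n", "banana split\n", "cherry tart\n");

      my @hits;
      my $n = 0;
      for my $line (@file2) {
          $n++;   # with a real file, use $. instead of a manual counter
          my @words = split /\s+/, lc $line;
          my @match = grep { $uniq1{$_} } @words;
          push @hits, "$n: $line" if @match;
      }
      print @hits;   # prints "1: apple pie" and "3: cherry tart"
      ```
      
      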

      Regards

        It looks like your code had 2 nested loops, each iterating over 6+ million words

        foreach my $che (@b){
          @aa = split(/\s/,$che);
          foreach my $kh (@a){
            @bb = split(/\s/,$kh);
            for ($l=0; $l<=$#bb; $l++){
              for ($m=0; $m<=$#aa; $m++){  ## this code executes 6 million x 6 million times
                if ( $bb[$l] eq $aa[$m] ){
                  ..
                }
              }
            }
          }
        }

        But within the 6 million words there are only a few thousand distinct ones, so your loops were checking the same word thousands of times more than required. By holding the unique words from file1 in a hash, you don't have to loop through 6 million words every time to find a match for a word from file2.
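        The idea above in a self-contained sketch (the word lists are made up): instead of scanning every word of file1 for each word of file2, store file1's unique words in a hash once, after which each membership test is a single constant-time hash lookup.

        ```perl
        #!/usr/bin/perl
        use strict;
        use warnings;

        # Stand-ins for the words of the two files.
        my @words1 = qw(the quick brown fox the fox);
        my @words2 = qw(a lazy brown dog);

        # One pass over file1's words: the hash keys are the unique words.
        my %uniq1;
        ++$uniq1{$_} for @words1;

        # One pass over file2's words: each check is a hash lookup,
        # not a scan of all of file1.
        my @common = grep { $uniq1{$_} } @words2;
        print "@common\n";   # prints "brown"
        ```
        
        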

        poj