in reply to Re^2: Nested greps w/ Perl
in thread Nested greps w/ Perl

Huh?

I have thousands of search terms in one file to search against a database. I'm simply trying to find out how many times each search term appears in the database.

Does this mean that you're dumping out the content of a SQL database to file and then using grep to search the data?

You realise this defeats the entire purpose of having a database, right? Assuming your database is correctly indexed (and you have sufficient RAM), you should be able to run a query that gives you exactly what you want in a fraction of the time it takes to even dump out the entire table(s) for external processing.
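For instance, a single aggregate query returns every count at once. A minimal sketch, assuming the records were loaded into a hypothetical SQLite table people(name, grade) with one row per record:

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to a hypothetical SQLite database holding the records.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=people.db', '', '',
    { RaiseError => 1 } );

# One aggregate query replaces thousands of external greps.
my $rows = $dbh->selectall_arrayref(
    q{SELECT name, COUNT(*) FROM people WHERE grade = 'Z' GROUP BY name}
);
print "$_->[0] $_->[1]\n" for @$rows;

An index on (grade, name) would let the engine answer that query from the index alone, without touching the table data.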

Re^4: Nested greps w/ Perl
by wackattack (Sexton) on Dec 20, 2016 at 16:21 UTC
    It's a flat file (a text document). Although I'm wondering if I should put the file into an SQLite database, I can't help but think I should be able to do this query rather quickly.

    All I'm doing is asking

    How many Z's does Jake have?
    How many Z's does Lisa have?
    How many Z's does Tommy have?

    And doing that for 8 million people.
      If I were going to do what you say, I might write something like:
      #!/usr/bin/perl
      use strict;
      use warnings;
      use 5.10.0;

      my %count;
      while (<DATA>) {
          if (/^(\S+)\s+([A-Z])$/) {
              $count{$1}{$2}++;
          }
          else {
              warn "Regular expression failed on $_";
          }
      }
      for my $name (sort keys %count) {
          if (exists $count{$name}{Z}) {
              say "$name $count{$name}{Z}";
          }
      }

      __DATA__
      Tommy Z
      Tommy Z
      Chris Z
      Chris B
      Chris Z
      Jake Z
      Jake Y
      Important elements are keeping the line-parsing code tight and minimizing the global memory footprint. Post more realistic data, and we can help refine the regular expressions. Also note that you are optimizing without profiling. In your circumstances, I would usually grab the first 1000 lines and test my code with Devel::NYTProf to figure out if I'm doing something silly.
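      For example (the file and script names here are illustrative), sampling and profiling might look like:

      head -1000 bigfile.txt > sample.txt
      perl -d:NYTProf count_z.pl sample.txt
      nytprofhtml

      nytprofhtml turns the nytprof.out dump into a browsable HTML report showing where the time actually goes.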

      If you run something like the above on your file, the code should take about as long as just running

      #!/usr/bin/perl
      use strict;
      use warnings;
      use 5.10.0;

      my $count;
      while (<DATA>) { $count++ }
      say $count;
      If just counting lines in this way is too slow for your needs, you'll need to use the window technique that LanX and I have mentioned.
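      A minimal sketch of that idea, assuming "window" means reading the file in large fixed-size blocks with partial lines carried between reads (see the earlier posts in the thread for the exact approach):

      #!/usr/bin/perl
      use strict;
      use warnings;

      # Read the file in large fixed-size windows instead of line by line,
      # carrying any partial trailing line over to the next window.
      my %count;
      open my $fh, '<', $ARGV[0] or die "open '$ARGV[0]': $!";
      my $tail = '';
      while ( read( $fh, my $buf, 64 * 1024 * 1024 ) ) {    # 64MB windows
          $buf = $tail . $buf;
          my $last_nl = rindex $buf, "\n";
          if ( $last_nl < 0 ) { $tail = $buf; next }        # no complete line yet
          $tail = substr $buf, $last_nl + 1;
          for my $line ( split /\n/, substr $buf, 0, $last_nl ) {
              my ( $name, $grade ) = split ' ', $line;
              ++$count{$name} if defined $grade && $grade eq 'Z';
          }
      }
      if ( length $tail ) {    # final line with no trailing newline
          my ( $name, $grade ) = split ' ', $tail;
          ++$count{$name} if defined $grade && $grade eq 'Z';
      }
      print "$_ $count{$_}\n" for keys %count;

      The win comes from doing one read per 64MB rather than millions of buffered line reads.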

      #11929 First ask yourself `How would I do this without a computer?' Then have the computer do it the same way.

        I got it. Thank you!!!!!!!!!!!!!!!!
      And doing that for 8 million people.

      Is that 8 million records in the file? Or 8 million people with each having multiple records in the file?

      If the former, (roughly) how many records per person? If the latter, what is the total number of records in the file?

      Update: Given your test file format with 8 million lines, this one-liner does the job in around 35 seconds:

      [19:07:51.13] C:\test>wc -l 1178116.dat
      8000000 1178116.dat

      [19:07:54.92] C:\test>head 1178116.dat
      ihpfgx Z
      fxbkfh Z
      kqektt B
      zxburh Z
      zpzafy Z
      nvamqp Z
      umpeky Z
      hyfldc B
      qdapmk Z
      ynlfhg Z

      [19:08:07.28] C:\test>perl -anle"$F[1] eq 'Z' and ++$h{$F[0]} }{ print join ' ', $_, $h{ $_ } for keys %h" 1178116.dat >null

      [19:08:42.87] C:\test>
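      For anyone decoding the flags: -n wraps the code in a while (<>) loop, -a autosplits each line into @F, -l handles line endings, and the }{ "Eskimo kiss" closes that loop so the final print runs once at end of input. An illustrative long-hand expansion (not exact B::Deparse output):

      #!/usr/bin/perl
      use strict;
      use warnings;

      my %h;
      while (<>) {                          # -n
          chomp;                            # -l strips the newline on input
          my @F = split ' ';                # -a autosplits into @F
          $F[1] eq 'Z' and ++$h{ $F[0] };
      }
      # The }{ trick closes the implicit loop, so this runs once at EOF.
      print join( ' ', $_, $h{$_} ), "\n" for keys %h;   # -l restores the newline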

      With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      "Science is about questioning the status quo. Questioning authority". The enemy of (IT) success is complexity.
      In the absence of evidence, opinion is indistinguishable from prejudice.