Re: Faster grep in a huge file (10 million)
by BrowserUk (Patriarch) on May 10, 2013 at 23:32 UTC
If your 12 million records average less than a couple of kbytes each (ie. if the size of the records file is less than your available memory), I'd just load the entire file into memory as a single string, then read the circuits file one line at a time and use index to see whether each line is in the records:
#! perl -slw
use strict;

my $records;
{   ## Localise @ARGV to just the records file name and undef $/,
    ## so <> slurps the whole file into one string.
    local( @ARGV, $/ ) = $ARGV[0];
    $records = <>;
}

open CIRCUITS, '<', 'circuits' or die $!;
while( <CIRCUITS> ) {
    ## index returns -1 when not found, so 1 + index is false (0)
    ## exactly when the circuit line does not appear in the records.
    unless( 1 + index $records, $_ ) {
        print;
    }
}
__END__
C:\test>1033014 records circuits >notfound
I'd draw your attention to the first word of both of the sentences you quoted, and also to the 'ie.' and the contraction that follows it.
If the OP's circumstances do not meet either of those two criteria, then *I* wouldn't use this approach.
But his records might be only 80 characters each (ie. <1GB of data); and if I were purchasing my next machine right now, I wouldn't consider anything with less than 8GB of RAM, preferably 16; and I'd also be looking at putting in an SSD configured to hold my swap partition, effectively giving me 64GB (or 128GB, or 256GB) of extended memory that is a couple of orders of magnitude faster than disk.
So then you are trading 2x O(N log N) processes + a merge at disk speed against a single O(N²) process at RAM speed. Without the OP clarifying the actual volumes of data involved, there is no way to make a valid assessment of the trade-offs.
Also, if they are free-format text records -- ie. the key is not in a fixed position, or there might be multiple keys (or none) per record -- sorting them may not even be an option.
Equally, the OP mentioned 'patterns'; if they are patterns in the regex sense of the word, that would exclude using a hash. And if you had to search the records to locate the embedded keys in order to build a hash, you've done 90% of the work of the in-memory method before you've even started to use the hash.
The bottom line is that I offered one more alternative that might make sense -- or not -- given the OP's actual data; it is up to them to decide which fits best.
Re: Faster grep in a huge file (10 million)
by educated_foo (Vicar) on May 10, 2013 at 19:52 UTC
Re: Faster grep in a huge file (10 million)
by InfiniteSilence (Curate) on May 10, 2013 at 21:05 UTC
Is there a problem with writing all of the data to a relational database and simply doing something like,
SELECT tbl2.* FROM tbl2 WHERE tbl2.id NOT IN (SELECT tbl1.id FROM tbl1);
?
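If you went that route, something along these lines should do it. A minimal sketch only: it assumes SQLite via DBI, one key per line in each file, and the file names ('records', 'circuits') and database name are placeholders.

#!/usr/bin/perl
# Sketch: load both files into SQLite, then let SQL report the
# circuits that have no matching record.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect( 'dbi:SQLite:dbname=circuits.db', '', '',
                        { RaiseError => 1, AutoCommit => 0 } );

$dbh->do( 'CREATE TABLE tbl1 (id TEXT PRIMARY KEY)' );   # records
$dbh->do( 'CREATE TABLE tbl2 (id TEXT PRIMARY KEY)' );   # circuits

for ( [ tbl1 => 'records' ], [ tbl2 => 'circuits' ] ) {
    my ( $table, $file ) = @$_;
    my $sth = $dbh->prepare( "INSERT OR IGNORE INTO $table (id) VALUES (?)" );
    open my $fh, '<', $file or die "Couldn't open $file: $!";
    while ( my $line = <$fh> ) {
        chomp $line;
        $sth->execute( $line );
    }
    close $fh;
}
$dbh->commit;

# Circuits with no matching record.
my $missing = $dbh->selectcol_arrayref(
    'SELECT tbl2.id FROM tbl2 WHERE tbl2.id NOT IN (SELECT tbl1.id FROM tbl1)'
);
print "$_\n" for @$missing;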
Celebrate Intellectual Diversity
Re: Faster grep in a huge file (10 million)
by Laurent_R (Canon) on May 10, 2013 at 21:56 UTC
5 million circuit names is not that huge (at least not by my standards); it is just big. I think it should fit in a hash in memory. And if it fits in memory, it is a very simple problem.
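For what it's worth, a minimal sketch of that hash approach, assuming one circuit name per line and that record lines match circuit names exactly; the file names are hypothetical:

#!/usr/bin/perl
# Sketch: load the circuit names into a hash, stream the big records
# file once, deleting every circuit we see. Whatever is left in the
# hash had no matching record.
use strict;
use warnings;

my %circuits;
open my $cfh, '<', 'circuits' or die "Couldn't open circuits: $!";
while ( my $line = <$cfh> ) {
    chomp $line;
    $circuits{$line} = 1;
}
close $cfh;

open my $rfh, '<', 'records' or die "Couldn't open records: $!";
while ( my $line = <$rfh> ) {
    chomp $line;
    delete $circuits{$line};
}
close $rfh;

print "$_\n" for sort keys %circuits;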
Re: Faster grep in a huge file (10 million)
by thewebsi (Scribe) on May 10, 2013 at 19:56 UTC
Sort the files first, then it's easy.
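For illustration, a sketch of the sorted-merge idea. It assumes both files have already been sorted byte-wise (e.g. with the system sort under LC_ALL=C, so the ordering agrees with Perl's lt); the file names are placeholders:

#!/usr/bin/perl
# Sketch: one parallel pass over two sorted files prints the circuits
# that have no matching record.
use strict;
use warnings;

open my $circ, '<', 'circuits.sorted' or die $!;
open my $rec,  '<', 'records.sorted'  or die $!;

my $r = <$rec>;
while ( my $c = <$circ> ) {
    # Advance the records file until it catches up with this circuit.
    $r = <$rec> while defined $r and $r lt $c;
    print $c unless defined $r and $r eq $c;
}

On a unix box, comm -23 circuits.sorted records.sorted does the same parallel pass without any Perl at all.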
Thanks for the reply. I tried this, but it doesn't seem to help. :(
#!/usr/bin/perl
use strict;
use warnings;

my %file2;

open my $file2, '<', '/home/match_miss' or die "Couldn't open file2: $!";
while ( my $line = <$file2> ) {
    ++$file2{$line};
}

open my $file1, '<', '/home/BIG_FILE' or die "Couldn't open file1: $!";
while ( my $line = <$file1> ) {
    print $line if defined $file2{$line};
}