Re: search array for closest lower and higher number from another array
by BrowserUk (Patriarch) on Feb 05, 2011 at 13:33 UTC
Your method requires two full passes of the file, and then creates the problem of trying to relate the results of the two passes.
Far simpler would be to do a single pass: remember each section, discard it when you see the start of a new one, and print the remembered section when you see the search term. You get a single pass that on average completes halfway through the file, and no silly games trying to relate two separate searches.
I.e.:
#! perl -sw
use strict;

my $term = shift;
my @section;

while( <DATA> ) {
    @section = () if /Header/;    # a new section starts: discard the old one
    push @section, $_;
    if( /$term/ ) {
        print for @section;       # the section so far, including its header
        print while defined( $_ = <DATA> ) and not /^Header/;  # rest of the section
        last;                     # stop after the first matching section
    }
}
__DATA__
Header 1
just some junk
just some junk
just some junk
just some junk
just some junk
Header 2
just some junk
just some junk
just some junk
just some junk
just some junk
Header 3
just some junk
just some junk
just some junk
this is a term
just some junk
just some junk
Header 4
just some junk
just some junk
just some junk
just some junk
just some junk
Header 5
just some junk
just some junk
a different term
just some junk
just some junk
just some junk
A couple of sample runs:
c:\test>junk34 term
Header 3
just some junk
just some junk
just some junk
this is a term
just some junk
just some junk
c:\test>junk34 different
Header 5
just some junk
just some junk
a different term
just some junk
just some junk
just some junk
Note that this stops looking the first time it matches the term. To display all sections containing the term, it is almost enough to just delete the last statement; beware, though, that the inner print-while loop consumes the next section's header line, so a match in the immediately following section would print without its header. A buffering variant that avoids this is sketched below.
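A minimal all-sections sketch (my variant, not the posted code; it assumes the same __DATA__ block): buffer each section, and flush it at each header boundary, and again at EOF, if it contained the term.

#! perl -sw
use strict;

my $term = shift;
my( @section, $matched );

while( <DATA> ) {
    if( /Header/ ) {                   # section boundary
        print @section if $matched;    # flush the previous section if it hit
        @section = ();
        $matched = 0;
    }
    push @section, $_;
    $matched = 1 if /$term/;
}
print @section if $matched;            # don't forget the final section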
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re: search array for closest lower and higher number from another array
by CountZero (Bishop) on Feb 05, 2011 at 13:22 UTC
use Modern::Perl;
use Search::Binary;

# fake lists of headers and matches
my @headers = map { state $header; $header += int( rand( 10 ) ) + 1 } ( 1 .. 10000 );
my @matches = map { int( rand( $headers[-1] ) + 1 ) } ( 1 .. 5000 );

my $min = 0;
my $max = @headers - 1;

# "read" callback for Search::Binary: compare $val against the header at
# $position; an untrue $position means "read the next record sequentially"
my $read = sub {
    my( $handle, $val, $position ) = @_;
    state $last_position;
    if( $position ) {
        $last_position = $position;
        return $val <=> $headers[$position], $position;
    }
    else {
        $last_position++;
        return $val <=> $headers[$last_position], $last_position;
    }
};

my $start_time = time;
for my $match ( @matches ) {
    my $pos = binary_search( $min, $max, $match, $read, undef, 10 );
    $pos++ if $headers[$pos] == $match;
    say "$match is between $headers[$pos - 1] and $headers[$pos] (position: $pos)";
}
say "That took me ", time - $start_time, " seconds.";
CountZero
"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James
Re: search array for closest lower and higher number from another array
by bigbot (Beadle) on Feb 05, 2011 at 13:53 UTC
What I am trying to do is leverage the speed of grep to complete the task. The data files are huge, and I have found that grep is significantly faster than even running the files through an empty Perl loop.
The fastest implementation I have found so far uses grep with the context option (-C 50) to get 50 lines around the match, then a Perl loop just as you have shown to parse that data. I was wondering whether this line-number idea might be even faster and more efficient, if only I can find a better way to get the line numbers of the headers around the match.
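One possible way to get those line numbers (a sketch; $fileName, $term, and the ^Header pattern are stand-ins for the real values) is grep's -n switch, which prefixes each matching line with its line number:

# grep -n prints "lineno:matched line"; keep just the numbers.
my @header_lines = map { /^(\d+):/ ? $1 : () }
                   `grep -n "^Header" "$fileName"`;
my @match_lines  = map { /^(\d+):/ ? $1 : () }
                   `grep -n "$term" "$fileName"`;

# Each entry of @match_lines falls between two consecutive entries of
# @header_lines; a binary search over @header_lines (like the lower_bound
# sketch above) finds the enclosing pair for each match.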
#!/usr/bin/perl
use strict;
use Benchmark qw(cmpthese);

my $fileName = "data2";
my $string   = "test";

print "case1 finds ", &case1, " matches \n";
print "case2 finds ", &case2, " matches \n";

cmpthese( 10, {
    case0 => sub { &case0 },    # empty loop: read the file, do nothing
    case1 => sub { &case1 },    # pure-Perl match count
    case2 => sub { &case2 },    # shell out to grep -c
    },
);

sub case0 {
    open( my $file, '<', "$fileName" ) or die $!;
    while (<$file>) {
    }
}

sub case1 {
    open( my $file, '<', "$fileName" ) or die $!;
    my $matchCount = 0;
    while (<$file>) {
        $matchCount++ if ( $_ =~ /$string/o );
    }
    return $matchCount;
}

sub case2 {
    my $matchCount = `grep -c -F "$string" $fileName`;
    chomp $matchCount;
    return $matchCount;
}
case1 finds 354 matches
case2 finds 354 matches

        s/iter  case1 emptyLoop  case2
case1     8.08     --      -36%   -92%
case0     5.16    57%        --   -88%
case2    0.632  1178%      716%     --
[15:00:05.53] c:\test>perl -nle"/234/ && ++$c }{ print $c" 1GB.dat
6791
[15:00:17.31]
## 11.78 seconds
[15:01:05.67] c:\test>grep -c -F 234 1GB.dat
6791
[15:01:14.28] c:\test>
## 8.61 seconds
I don't see much wrong with your benchmark, so I wonder if that means your Perl is much slower than mine, or my grep is slower than yours?
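For anyone puzzled by the }{ in that one-liner: under -n, perl wraps the code in an implicit while (<>) { ... } loop, and the }{ trick closes that loop early so the final print runs once at end of file. Roughly the equivalent script (a sketch, not the exact code perl generates):

my $c;
while( <> ) {
    /234/ && ++$c;    # count matching lines
}
print $c, "\n";       # -l supplies this newline in the one-liner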
Some thoughts on your benchmark! Depending on the operating system you're working on, the actual results could be very different. If your benchmark was run on a *nix system, your first call to the subroutine &case1:
print "case1 finds ", &case1, " matches \n";
causes the file to be read and cached by the system. I wasn't sure how Perl would or would not benefit from this, but a *nix grep will be able to do pattern matching on the file in cached memory; grep doesn't even have to do a memory-to-memory copy/move. (I didn't look at the code, so grep may be doing the memory-to-memory copy/move.)
I ran your benchmark on an AIX system, and the results were basically the same as what you saw. I then modified your script to call &case2 first, then &case1, then &case0 (only once; Benchmark complained!) on a new and un-cached file. The result was that &case0 was the fastest, followed by &case1, and the slowest was &case2 (grep). I ran this script on openSUSE with similar results. It does appear that Perl does benefit from the caching; when I ran the test again, grep was the winner! If you used a *nix system, I hope this gives some idea of why grep looked so much faster than Perl. Note: it's faster to work in memory than on disk.
Further note: you may have to restart the system to guarantee the file isn't cached already. I made this mistake the first time by using a large .gz file that I unzipped, which caused both the zipped and unzipped files to be cached.
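On Linux, at least, there is a lighter-weight alternative to rebooting (a sketch; root required, and AIX manages its file cache differently):

# Flush dirty pages, then ask the Linux kernel to drop its page cache so
# the next read really comes from disk.
system( 'sync' );
system( 'sh', '-c', 'echo 3 > /proc/sys/vm/drop_caches' );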
Thank you
"Well done is better than well said." - Benjamin Franklin
Re: search array for closest lower and higher number from another array
by locked_user sundialsvc4 (Abbot) on Feb 07, 2011 at 18:02 UTC
Thinking completely outside of the box here... what about SQLite? This will put your data “in a single file” (no database servers required), while also allowing you to index it and to retrieve the data you’re looking for by means of SQL queries. I can tell you from plenteous experience that this engine is blisteringly fast, it runs on absolutely everything, and it can lift a lot of work completely off your shoulders. If you find yourself trying to cobble up an algorithm to do something (as in this case), perhaps step back and see if a fundamentally different approach like this one might be better. Perchance, much better...
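To make the suggestion concrete, a minimal sketch using DBI and DBD::SQLite (the table layout, database file name, and search term are illustrative assumptions, not anything from this thread):

use strict;
use warnings;
use DBI;

# One row per line of the data file, tagged with its section number.
my $dbh = DBI->connect( 'dbi:SQLite:dbname=sections.db', '', '',
                        { RaiseError => 1 } );
$dbh->do( 'CREATE TABLE IF NOT EXISTS lines ( section INTEGER, line TEXT )' );
$dbh->do( 'CREATE INDEX IF NOT EXISTS idx_line ON lines ( line )' );

# After loading the file once, fetch every line of every section that
# contains the term:
my $rows = $dbh->selectall_arrayref(
    'SELECT line FROM lines
      WHERE section IN ( SELECT DISTINCT section FROM lines
                          WHERE line LIKE ? )',
    undef, '%term%' );
print "$_->[0]\n" for @$rows;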
That sounds really useful. I'm not sure if I can use it at work, but certainly worth looking at. Thanks.