in reply to Re: Using indexing for faster lookup in large file
in thread Using indexing for faster lookup in large file

Thanks for a great reply. This was exactly what I was looking for.

The indexing itself is quite time consuming; it has currently taken > 2 hours (and is still running), but the lookups seem much faster. Testing on a smaller dataset, I see a 3x time reduction vs grep, and I'll have to see how that scales to the full dataset.

I made some modifications to the code, adapting it to the Lucy::Simple module, so right now it looks like this:

indexer.pl:
#!/usr/bin/perl
use 5.014;
use strictures;
use Lucy::Simple;

my $index = $ARGV[0];
system("mkdir -p $index");

my $lucy = Lucy::Simple->new(
    path     => $index,
    language => 'en',
);

# Use a lexical filehandle (DATA is a special handle in Perl) and check for errors
open my $fh, '<', $ARGV[1] or die "Cannot open $ARGV[1]: $!";
while (my $line = <$fh>) {
    my ($id, $taxid, $text) = split /;/, $line, 3;
    $lucy->add_doc( { id => $id, content => $text } );
}
close $fh;
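One detail worth pointing out in the indexer: the limit of 3 on the split means any semicolons inside the text field are left intact, since only the first two separators are split on. A quick standalone illustration (the sample record is made up):

```perl
use strict;
use warnings;

# Hypothetical record whose text field itself contains semicolons
my $line = "42;9606;first clause; second clause; third clause";

# Limit of 3: split on the first two semicolons only
my ($id, $taxid, $text) = split /;/, $line, 3;

print "$id\n";     # 42
print "$taxid\n";  # 9606
print "$text\n";   # first clause; second clause; third clause
```

Without the limit, `$text` would silently end up holding only "first clause" and the rest of the field would be dropped.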

query.pl:
#!/usr/bin/perl
use 5.014;
use strictures;
use Lucy::Simple;

my $index = Lucy::Simple->new(
    path     => $ARGV[0],
    language => 'en',
);

my $query_string = $ARGV[1];
my $total_hits   = $index->search( query => $query_string );
# print "Total hits: $total_hits\n";

while ( my $hit = $index->next ) {
    print "$hit->{id}\t";
    print "$hit->{content}";
}

All the perlmonks XP is your birthday reward :)