PerlMonks
Re^5: Using indexing for faster lookup in large file

by BrowserUk (Patriarch)
on Mar 04, 2015 at 14:10 UTC ( [id://1118747] )


in reply to Re^4: Using indexing for faster lookup in large file
in thread Using indexing for faster lookup in large file

Thanks for doing that erix.

    Lookup averaged 0.012486 seconds/record

Hm. Disappointed with that. I suspect a good deal of that time is down to writing the 1000 found records to the disk.

I suspect that if you commented out the print of the records and reran it, it'd be more in line with the numbers I get here:

for my $i ( 1 .. $N ) {
    my $rndRec = 1 + int rand( 160e6 );
    # printf "Record $rndRec: ";
    my $pos = binsearch( \$idx, $rndRec );
    if( $pos ) {
        seek DATA, $pos, 0;
        # printf "'%s'", scalar <DATA>;
    }
}
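For reference, a minimal sketch of what a binsearch over a packed index like this can look like. This is my reconstruction, not the actual 1118102-searcher code: it assumes the index is a string of fixed-width ( 32-bit record number, 64-bit byte offset ) pairs sorted by record number — 12 bytes per record, which is consistent with the 1,920,000,000-byte .idx for 160e6 records shown below:

```perl
use strict;
use warnings;

# A sketch, assuming the index is a packed string of sorted, fixed-width
# ( record number, byte offset ) pairs in 'N Q>' layout (4 + 8 bytes).
# Returns the byte offset for the key, or undef if the key isn't indexed.
sub binsearch {
    my( $idxRef, $key ) = @_;
    my( $lo, $hi ) = ( 0, length( $$idxRef ) / 12 - 1 );
    while( $lo <= $hi ) {
        my $mid = int( ( $lo + $hi ) / 2 );
        my( $rec, $off ) = unpack 'N Q>', substr( $$idxRef, $mid * 12, 12 );
        return $off if $rec == $key;
        if( $rec < $key ) { $lo = $mid + 1 } else { $hi = $mid - 1 }
    }
    return;    # not found
}
```

One quirk worth noting: a record at the very start of the file has offset 0, so testing the result with `if( $pos )` would misreport it as not found; `if( defined $pos )` is the safe test.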

The first number is the time taken to load the index. The second run is with a warm cache:

E:\>c:\test\1118102-searcher e:30GB.dat e:30GB.idx
16.8919820785522
Lookup averaged 0.009681803 seconds/record

E:\>c:\test\1118102-searcher e:30GB.dat e:30GB.idx
4.17907309532166
Lookup averaged 0.009416031 seconds/record

Of course, if I run it on an SSD, it looks much nicer, especially as the cache warms up:

E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
33.1236040592194
Lookup averaged 0.000902344 seconds/record

E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
3.44389009475708
Lookup averaged 0.000789429 seconds/record

E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
4.35790991783142
Lookup averaged 0.000551061 seconds/record

E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
3.86181402206421
Lookup averaged 0.000482989 seconds/record

E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
4.66845011711121
Lookup averaged 0.000458750 seconds/record

With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked

Re^6: Using indexing for faster lookup in large file
by erix (Prior) on Mar 04, 2015 at 14:16 UTC

    Did you see that I had to fix the indexer? You don't say whether you fixed it, or whether your file format is perhaps different from the OP's.

    When I searched without pointers (i.e. with a pointer file made with the wrong regex) it was very fast too, but I call that cheating ;)

      I saw.

      I generated a data file based upon the information the OP gave me in response to my question: 30GB / 160 million records = avg. 200 bytes/record. So I used:

      perl -E"printf qq[%010u,%0200u\n], $_, $_ for 1..160e6" >30GB.dat
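For completeness, the matching .idx can be built in a single sequential pass over the data file, recording tell() before each line is read. This is a hypothetical sketch — not erix's fixed indexer — and the 'N Q>' ( record number, offset ) pair layout is my assumption:

```perl
use strict;
use warnings;

# Hypothetical one-pass index builder: for each line, pack the leading
# record number together with the byte offset at which the line starts.
# The fixed-width 'N Q>' (4 + 8 byte) pair layout is an assumption.
sub build_index {
    my( $datFile, $idxFile ) = @_;
    open my $in,  '<', $datFile or die "$datFile: $!";
    open my $out, '>', $idxFile or die "$idxFile: $!";
    binmode $_ for $in, $out;
    while( 1 ) {
        my $pos  = tell $in;              # offset BEFORE reading the line
        my $line = <$in>;
        last unless defined $line;
        my( $recno ) = $line =~ /^(\d+),/ # the regex that has to match the format
            or next;
        print $out pack 'N Q>', $recno, $pos;
    }
    close $_ for $in, $out;
}
```

After build_index( '30GB.dat', '30GB.idx' ), the whole 1.92GB index loads into a scalar with a single read, ready for binary searching.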

      Which makes for easy verification that the record found matches the record searched for:

      E:\>head -2 s:30GB.dat
      0000000001,00000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000000
      +000001
      0000000002,00000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000000
      +000002

      E:\>tail -2 s:30GB.dat
      0159999999,00000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000159
      +999999
      0160000000,00000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000000
      +000000000000000000000000000000000000000000000000000000000000000000160
      +000000
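With that format the check itself is trivial — both the key field and the payload of a fetched line are the record number. A sketch of such a check (my own helper, not part of the searcher):

```perl
use strict;
use warnings;

# Sanity check for a fetched record: with the %010u,%0200u format, the
# leading key field and the zero-padded payload both equal the record
# number that was searched for.
sub verify_record {
    my( $wanted, $line ) = @_;
    chomp $line;
    my( $key, $payload ) = split /,/, $line;
    return $key == $wanted && $payload == $wanted;
}
```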

      But I forgot to subtract the size of the record number, delimiter and EOL from the length of the data, so my 30GB.dat is actually 32GB:

      E:\>dir s:30GB.*
      28/02/2015  08:21    34,560,000,000 30GB.dat
      28/02/2015  09:44     1,920,000,000 30GB.idx

      So, whilst my data does not match his, the difference doesn't affect the indexing or the timing.


