in reply to Re^3: Using indexing for faster lookup in large file
in thread Using indexing for faster lookup in large file

The 25GB file that I used isn't sorted. Your indexer program indexes it in 7 minutes, but the searcher then cannot find anything in it (it reports 'Not found' 1000 times).

I also tried it with the file ordered, but initially I could not get it to find anything. It turns out the regex used in your indexer, m[^(\d+),], did not match anything in the file. Once I changed it to m[^(\d+);] to suit the OP's semicolon-delimited lines, the results were as follows (both measurements are very repeatable).
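For concreteness, the indexer's job at this point is just to record each record's leading number together with the byte offset at which the record starts. A minimal sketch of that step (not BrowserUk's actual code; the pack 'NQ' layout of 12 bytes per entry is an assumption, though it is consistent with both index file sizes seen in this thread):

use strict;
use warnings;

my( $datafile, $idxfile ) = @ARGV;
open my $in,  '<:raw', $datafile or die "$datafile: $!";
open my $out, '>:raw', $idxfile  or die "$idxfile: $!";

my $pos = 0;                            # byte offset where this line starts
while( my $line = <$in> ) {
    if( $line =~ m[^(\d+);] ) {         # ';' for the OP's lines, not ','
        print {$out} pack 'NQ', $1, $pos;   # 4-byte key + 8-byte offset ('Q' needs a 64-bit perl)
    }
    $pos = tell $in;                    # start of the next line
}
close $_ for $in, $out;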

Files:

$ ls -lh hominae.txt hominae_renumbered.*
-rw-rw-r--. 1 aardvark aardvark 1.5G Mar  4 13:48 hominae_renumbered.idx
-rw-r--r--. 1 aardvark aardvark  25G Mar  4 13:17 hominae_renumbered.txt
-rw-rw-r--. 1 aardvark aardvark  25G Feb 28 01:41 hominae.txt

hominae.txt is the 25GB file which I made by repeating the OP's 200 lines.

hominae_renumbered.txt is the same file but with the initial numbers replaced by 1 to ~132M (so it is ordered).
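That renumbering is essentially a one-liner; a sketch of the idea (hypothetical, not necessarily the command actually used):

# replace each record's leading number with its line number, $.
perl -pe 's/^\d+/$./' hominae.txt > hominae_renumbered.txt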

Timing, your pointer file:

$ perl browseruk2_searcher.pl \
    hominae_renumbered.txt \
    hominae_renumbered.idx > bukrun; tail -n1 bukrun
'Lookup averaged 0.012486 seconds/record

Timing, database search:

# I took a join to 1000 random numbers as equivalent to 1000 searches:
# table hm is the table with the 25GB data loaded into it
$ echo "select * from
    (select (random()*131899400)::int from generate_series(1,1000)) as r(n)
    join hm on r.n = hm.id;" | psql -q | tail -n 1
Time: 19555.717 ms
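For intuition, that join stands in for a thousand single-row probes of this (hypothetical) form, but done in a single server round trip:

prepare probe (int) as select * from hm where id = $1;
execute probe (12345);  -- one of the 1000 random keys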

So your pointer file is faster, but only by a small margin (small to my mind, anyway; I had expected a much larger difference, though of course with the db as the slower contender). In absolute terms: 1000 lookups at 0.012486 s/record is ~12.5 s, against 19.6 s for the database.

Your indexing was faster too: it took only ~7 minutes to create. I forgot to time the db load, but it was in the region of half an hour (it could have been sped up a bit by doing the import and the index build separately; a sketch of that variant follows the load below).

Just for the record, here is also the db load:

time < hominae.txt perl -ne '
    chomp;
    my @arr = split(/;/, $_, 2);
    print $arr[1], "\n";    # strip the leading "number;" so the serial pk renumbers the lines
' \
| psql -c "
drop table if exists hm;
create table if not exists hm (line text, id serial primary key);
copy hm (line) from stdin with (format csv, delimiter E'\t', header FALSE);
";

testdb=# \dti+ hm*
              List of relations
 Schema |  Name   | Type  |  Owner   | Table |  Size
--------+---------+-------+----------+-------+---------
 public | hm      | table | aardvark |       | 29 GB
 public | hm_pkey | index | aardvark | hm    | 2825 MB
(2 rows)
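And the variant alluded to above, doing the import and the index build separately (a sketch; untimed):

-- bulk load without index maintenance, then build the pk once at the end
drop table if exists hm;
create table hm (line text, id serial);
copy hm (line) from stdin;
alter table hm add primary key (id);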

Re^5: Using indexing for faster lookup in large file
by BrowserUk (Patriarch) on Mar 04, 2015 at 14:10 UTC

    Thanks for doing that, erix.

    'Lookup averaged 0.012486 seconds/record

    Hm. Disappointed with that. I suspect a good deal of that time is down to writing the 1000 found records to the disk.

    I suspect that if you commented out the print of the records and reran it, it'd be more in line with the numbers I get here:

    for my $i ( 1 .. $N ) {
        my $rndRec = 1 + int rand( 160e6 );
    #    printf "Record $rndRec: ";
        my $pos = binsearch( \$idx, $rndRec );
        if( $pos ) {
            seek DATA, $pos, 0;
    #        printf "'%s'", scalar <DATA>;
        }
    }
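    (For context, binsearch does a binary search over the index held in memory as one packed string. A sketch of roughly what such a routine does, with the 12-byte pack 'NQ' entry layout assumed rather than confirmed; it is at least consistent with the 1.92GB index over 160e6 records, i.e. 12 bytes/entry:)

    sub binsearch {
        my( $idxRef, $want ) = @_;
        my( $lo, $hi ) = ( 0, length( $$idxRef ) / 12 - 1 );
        while( $lo <= $hi ) {
            my $mid = int( ( $lo + $hi ) / 2 );
            my( $key, $off ) = unpack 'NQ', substr( $$idxRef, $mid * 12, 12 );
            return $off if $key == $want;   # NB: offset 0 (first record) is falsy to the caller's if( $pos )
            $key < $want ? ( $lo = $mid + 1 ) : ( $hi = $mid - 1 );
        }
        return;    # not found
    }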

    The first number is the time taken to load the index. The second run is with a warm cache:

    E:\>c:\test\1118102-searcher e:30GB.dat e:30GB.idx
    16.8919820785522
    Lookup averaged 0.009681803 seconds/record

    E:\>c:\test\1118102-searcher e:30GB.dat e:30GB.idx
    4.17907309532166
    Lookup averaged 0.009416031 seconds/record

    Of course, if I run it on an SSD, it looks much nicer, especially as the cache warms up:

    E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
    33.1236040592194
    Lookup averaged 0.000902344 seconds/record

    E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
    3.44389009475708
    Lookup averaged 0.000789429 seconds/record

    E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
    4.35790991783142
    Lookup averaged 0.000551061 seconds/record

    E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
    3.86181402206421
    Lookup averaged 0.000482989 seconds/record

    E:\>c:\test\1118102-searcher s:30GB.dat s:30GB.idx
    4.66845011711121
    Lookup averaged 0.000458750 seconds/record

    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked

      Did you see that I had to fix the indexer? You don't say whether you fixed it, or whether your file format is perhaps different from the OP's.

      When I searched without pointers (that is, with a pointer file made with the wrong regex, so every lookup came back 'Not found') it was very fast too, but I call that cheating ;)

        I saw.

        I generated a data file based upon the information the OP gave me in response to my question: 30GB / 160 million records = ~200 bytes/record on average. So I used:

        perl -E"printf qq[%010u,%0200u\n], $_, $_ for 1..160e6" >30GB.dat

        Which makes for easy verification that the record found matches the record searched for:

        E:\>head -2 s:30GB.dat
        0000000001,00000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000000
        +000001
        0000000002,00000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000000
        +000002

        E:\>tail -2 s:30GB.dat
        0159999999,00000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000159
        +999999
        0160000000,00000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000000
        +000000000000000000000000000000000000000000000000000000000000000000160
        +000000

        But I forgot to subtract the size of the record number, delimiter and EOL from the length of the data, so my 30GB.dat is actually 32GB:

        E:\>dir s:30GB.*
        28/02/2015  08:21    34,560,000,000 30GB.dat
        28/02/2015  09:44     1,920,000,000 30GB.idx

        So, whilst my data does not match his, the difference doesn't affect the indexing or the timing.

