This Lucy code is really nice and fast, thanks.
However, as-is it doesn't work so easily for large files: I let it run for 3 days on a 25 GB file (just the OP-provided 200 lines, repeated), on an admittedly slowish AMD 8120 with 8 GB memory. I started it last Sunday; today I'd had enough and broke it off.
    2015.03.01 09:35:49 aardvark@xxxx:~ $ time ./lucy_big.pl
    ^C
    real    4264m3.903s
    user    4205m27.322s
    sys     8m5.160s
    2015.03.04 08:39:58 aardvark@xxxx:~ $
There is probably a way to do this with better settings...
A Postgres variant, loading the same full 25 GB file, was rock-solid and searched reasonably well (~20 ms per search, IIRC; I had to delete it for disk space, but size in db was 29 GB).
Having said that, a pointer-file solution similar to one of the things BrowserUK posted would be my first choice (although I'd likely just use grep -b).
But I'll undoubtedly be able to put your Lucy code to good use (albeit on smaller files), so thanks.
I'd like to hear from the OP how he fared with Lucy and his large file...
In reply to Re^2: Using indexing for faster lookup in large file by erix
in thread Using indexing for faster lookup in large file by anli_