Re: use regular expressions across multiple lines from a very large input file
by LanX (Saint) on Dec 05, 2010 at 18:29 UTC ( [id://875504] )
Hi

I will only sketch an algorithm and leave the programming to you.

I think you should read and process text chunks of size n, e.g. 1024 or 4096 bytes.²

Whenever you process one chunk, you need to append the first m bytes of the next chunk, with m = 200 + l, where l is the number of characters in your keyword string minus 1 (the string "these are my keywords" has 21 characters, so l = 20).

This way your regex will match all occurrences where at least the first character of the keyword string is still in the chunk.

Of course you need to normalize the chunks and keywords by replacing s/\s+/ /g.¹

If your regex is too complicated to be normalized, you can still do it by joining two - reasonably big!³ - successive chunks, but then you either need to memorize the match position to exclude duplicated hits, or change the regex so it only allows matches starting within the first chunk (e.g. by checking pos).

Cheers Rolf

2) Efficiency here depends on the block size of your filesystem. See seek for how to read chunks.

3) A chunk must be bigger than the longest possible match. Quantifiers like \s+ allow potentially unbounded matches - are they really wanted? Either set a reasonable limit like \s{0,20}, or normalize your chunks by replacing s/\s+/ /g.
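Here is a minimal Perl sketch of the chunk-plus-overlap idea above. The file name and keyword are placeholders; substitute your own. Instead of normalizing the whole chunk with s/\s+/ /g, this variant turns the spaces in the keyword into a bounded \s{1,20} (see footnote 3), so the reported byte offsets stay exact; matches that start inside the overlap are left for the next chunk, which avoids reporting the same hit twice. Treat it as a starting point, not the only way to do it.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical input file and keyword -- substitute your own.
    my $file    = 'big_input.txt';
    my $keyword = 'these are my keywords';

    # Build a whitespace-tolerant pattern: any run of whitespace
    # (up to 20 chars, per footnote 3) may separate the words.
    my $pattern = join '\s{1,20}', map { quotemeta } split ' ', $keyword;
    my $re      = qr/$pattern/;

    my $chunk_size = 4096;                        # e.g. one filesystem block
    my $overlap    = 200 + length($keyword) - 1;  # m = 200 + l

    open my $fh, '<', $file or die "Cannot open '$file': $!";
    binmode $fh;    # read raw bytes so offsets are exact

    my $offset = 0;   # byte position of the current chunk within the file
    while (1) {
        seek $fh, $offset, 0 or die "seek failed: $!";
        my $got = read $fh, my $buf, $chunk_size + $overlap;
        defined $got or die "read failed: $!";
        last unless $got;                         # nothing left to read

        while ( $buf =~ /$re/g ) {
            my $start = $-[0];
            # matches starting inside the overlap belong to the next chunk;
            # skipping them here avoids duplicated hits
            next if $start >= $chunk_size;
            print "match at byte ", $offset + $start, "\n";
        }

        last if $got < $chunk_size + $overlap;    # reached end of file
        $offset += $chunk_size;
    }
    close $fh;

The overlap of 200 + l bytes assumes (as in your question) that the keywords never lie more than about 200 characters apart; if a match can be longer, the overlap - and with the two-chunk variant, the chunk size itself - has to grow accordingly.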
In Section: Seekers of Perl Wisdom