You need a sliding buffer--a supersearch for that term will turn up various implementations.
Here's a simple one implemented using an array of lines:
#! perl -slw
use strict;

my @lines;
my %seen;

while( <DATA> ) {
    push @lines, $_;                 ## keep a rolling window of recent lines
    my $buf = join '', @lines;
    if( $buf =~ /(.{30}these\s+are\s+my\s+keywords.{30})/sm ) {
        print "'$1'" unless $seen{ $1 };
        ++$seen{ $1 };               ## report each distinct match only once
    }
    shift @lines if @lines > 5;      ## drop the oldest line once the window exceeds 5
}
__END__
Here is my text file
I want to save a bunch of characters before the keywords
for example the keywords might be the phrase:
these are my keywords
I want to save a bunch of characters after
the keywords too so I have context
The keywords may appear multiple times in
any given file and may span across lines like so: these
are my keywords. This is one reason I was using
slurp instead of reading in line by line
Which produces:
C:\test>junk
'eywords might be the phrase: these are my keywords I want to save a bunch of ch'
'y span across lines like so: these are my keywords. This is one reason I was u'
You would probably want to make the context at either end optional, so that you don't miss matches at the start or end of the file where there isn't enough context for the fixed-width quantifiers to match.
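For example, here is one way it could be done (a sketch only; the report() helper and the sample data below are illustrative, not part of the script above). The leading context becomes .{0,30}, and the trailing context is only relaxed once the file is exhausted:

#! perl -slw
use strict;

## Sketch: same sliding buffer as above, but with ranged context
## quantifiers so matches at the very start or end of the file aren't lost.
my @lines;
my %seen;

sub report {
    my( $buf, $min_after ) = @_;
    ## Leading context is 0..30 chars; trailing context requires at least
    ## $min_after chars (30 mid-file, 0 once no more input can arrive).
    while( $buf =~ /(.{0,30}these\s+are\s+my\s+keywords.{$min_after,30})/sg ) {
        print "'$1'" unless $seen{ $1 }++;
    }
}

while( <DATA> ) {
    push @lines, $_;
    report( join( '', @lines ), 30 );   ## insist on full trailing context mid-file
    shift @lines if @lines > 5;
}
report( join( '', @lines ), 0 );        ## end of file: take whatever trailing context is left

__END__
these are my keywords open the file with no leading context
and the file ends with: these are my keywords

Relaxing the trailing context only after the read loop avoids printing a shorter and then a progressively longer version of the same hit as the buffer grows.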