Yes, I agree completely. This opens up the realm of stream regexps and would greatly facilitate the construction of regexp-based tokenizers (scalar m//gc) that need to process their input in chunks. Currently you have to resort to contorted hacks to do stream tokenizing, which is a pity, as it limits the implementation of generic parser generators in pure Perl.
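To illustrate the kind of "contorted hack" currently needed, here is a minimal sketch of chunked tokenizing with scalar m//gc and \G: chunks are appended to a buffer, and a token that runs up to end-of-buffer is held back (by rewinding pos()) until the next chunk arrives, since it might be split across the chunk boundary. The chunk data and token patterns are made up for the example.

```perl
use strict;
use warnings;

# The stream "foo 123 barbaz " arrives in three arbitrary chunks.
my @chunks = ('foo 12', '3 bar', 'baz ');
my $buf = '';
my @tokens;

for my $i (0 .. $#chunks) {
    $buf .= $chunks[$i];
    my $last = ($i == $#chunks);
    while (1) {
        next if $buf =~ /\G\s+/gc;       # skip whitespace
        if ($buf =~ /\G(\w+)/gc) {
            # A word ending exactly at end-of-buffer may continue in
            # the next chunk: rewind pos() and wait for more input.
            if (pos($buf) == length($buf) && !$last) {
                pos($buf) = pos($buf) - length($1);
                last;
            }
            push @tokens, $1;
            next;
        }
        last;                            # no more matches in this chunk
    }
    # Discard the consumed prefix; any held-back partial token remains.
    # (Modifying $buf resets pos(), so matching restarts at offset 0.)
    substr($buf, 0, pos($buf) // 0, '');
}
print join(',', @tokens), "\n";          # foo,123,barbaz
```

Note that this only works because the rewind logic knows every token pattern; with real engine state kept across buffers, none of this bookkeeping would be needed.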
What is needed is a way to keep the state of the regexp engine at the end of the buffer (the end-of-buffer-match case), so that when you add another chunk, the engine does not start again from the beginning. Considering all the goodies added by demerphq, maybe there is hope ;) of seeing something soon.
Also, I'd like to be able to switch to a smaller but faster regexp implementation just for a block. Or maybe to be able to turn off, locally, parts of the main engine that I know I am not going to use in a given block (assuming that doing so yields extra speed, of course).
cheers --stephan

In reply to Re^2: what would you like to see in perl5.12? by sgt
in thread what would you like to see in perl5.12? by ysth