In reply to: why is text processing fast in Perl?
Everyone has made great points about Perl's true strength being 'getting stuff done.' BUT! The Computer Scientist in me wants to answer the question in a slightly different way. ;-)
Text processing can be thought of as reading, writing and matching data. Why are the reading and writing fast? I'd have to guess it's the native bindings to the standard C I/O libraries, which can reach maximum native throughput with caching, buffering, etc. Why is the pattern matching so fast? ... <fade to black />
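To make the reading/writing part concrete, here's a minimal sketch of the kind of loop people write all day in Perl (the file name and pattern are made up for illustration). Each read is serviced out of PerlIO's C-level buffer, so the per-line cost stays low:

```perl
#!/usr/bin/env perl
# Minimal sketch: line-oriented processing over a buffered file handle.
# Each <$fh> call is satisfied from PerlIO's C-level buffer, so this loop
# costs roughly one read(2) syscall per buffer-full, not one per line.
use strict;
use warnings;

open my $fh, '<', 'access.log' or die "open: $!";   # hypothetical input file
my $count = 0;
while (my $line = <$fh>) {
    $count++ if $line =~ /ERROR/;   # the regex runs over the in-memory line
}
close $fh;
print "matched $count lines\n";
```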
To fully understand that, you'll have to answer the questions 'What is computable?' and 'What are finite state automata (FSA/DFA/NFA)?' Once you 'understand' those, you'll be on track to ask 'What are regular languages, and how are regular expressions implemented in Perl?' The gist of it: at any given time the regular expression engine is in either a matching state or a non-matching state. Keep feeding it more input and that state can toggle. How do you define the 'matching states'? With regular languages, of course....
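If it helps, here's a toy hand-rolled DFA (nothing like Perl's actual engine, just an illustration of that 'one state at a time, matching or not' idea) for the pattern /ab$/, i.e. "the input seen so far ends in 'ab'":

```perl
#!/usr/bin/env perl
# Toy DFA for "the input seen so far ends in 'ab'".  After every character the
# machine sits in exactly one state; "matching right now?" is just "is that
# state the accepting state?", and the answer toggles as more input arrives.
use strict;
use warnings;

my %dfa = (
    start  => { a => 'saw_a',  b => 'start'  },
    saw_a  => { a => 'saw_a',  b => 'saw_ab' },
    saw_ab => { a => 'saw_a',  b => 'start'  },   # 'saw_ab' is the accepting state
);

my $state = 'start';
for my $ch (split //, 'ababb') {
    $state = $dfa{$state}{$ch} // 'start';        # any other character resets
    printf "after '%s': %-6s (%s)\n",
        $ch, $state, $state eq 'saw_ab' ? 'matching' : 'not matching';
}
```

Running it on 'ababb' shows the status flipping: matching after each 'b' that follows an 'a', not matching otherwise.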
Your question is pretty open-ended, and I just wanted to throw some thoughts at you. ;-)
Regards,
Kurt
PS: Just to stir the pot, I'll link to this article too:
This article picks an edge case where a DFA is significantly slower than an NFA (two different ways to implement a regular expression engine). I'm linking to it to point out that knowing what kind of pattern you need to match can help you choose the better tool for the job.
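For flavor, here's my own sketch (not code from the article) of one classic way a DFA can lose: state blowup. The NFA for a pattern like /(a|b)*a(a|b){n}/ needs only about n+2 states, but subset construction, the usual way to turn an NFA into a DFA, produces on the order of 2^(n+1) states for it. You can count them yourself:

```perl
#!/usr/bin/env perl
# Rough sketch: count the DFA states that subset construction produces for the
# NFA of /(a|b)*a(a|b){n}/.  The NFA has n+2 states; the reachable DFA subsets
# grow roughly as 2^(n+1), the kind of edge case where a DFA-based engine pays.
use strict;
use warnings;

sub count_dfa_states {
    my ($n) = @_;
    # NFA states 0 .. n+1: state 0 loops on a/b, and 'a' also moves 0 -> 1;
    # states 1..n advance on either letter; state n+1 accepts (no exits).
    my %seen = ('0' => 1);
    my @queue = ('0');                         # start subset: { 0 }
    while (@queue) {
        my %states = map { $_ => 1 } split /,/, shift @queue;
        for my $ch ('a', 'b') {
            my %next;
            for my $s (keys %states) {
                if ($s == 0) {
                    $next{0} = 1;                  # (a|b)* self-loop
                    $next{1} = 1 if $ch eq 'a';    # the 'a' starting the tail
                }
                elsif ($s <= $n) {
                    $next{ $s + 1 } = 1;           # the (a|b){n} tail
                }
            }
            my $key = join ',', sort { $a <=> $b } keys %next;
            push @queue, $key unless $seen{$key}++;
        }
    }
    return scalar keys %seen;
}

for my $n (2 .. 10) {
    printf "n=%2d  NFA states: %3d  DFA states: %5d\n",
        $n, $n + 2, count_dfa_states($n);
}
```

So for that family of patterns an NFA-style engine keeps a small, cheap machine while a DFA-based one pays exponentially in states (and therefore construction time and memory), which is the kind of trade-off I mean by 'choosing the better tool.'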