> It processes a 50,000 line file in about 70 milliseconds. ... I've heard Perl is a performant scripting language.
With those kinds of execution times, personally I wouldn't worry about it at all. But just to demonstrate that Perl isn't going to be much slower, here's an example benchmark from my system (a fairly simple one: just the average execution time over a couple of runs; a sketch of such a timing harness follows the table). The first row is your code (unchanged), and the following three rows are the three pieces of code I posted:
| Solution | 12-line input | 120,000-line input | 1,200,000-line input |
|---|---|---|---|
| awk | 26 ms | 66 ms | 417 ms |
| awk to Perl | 26 ms | 105 ms | 782 ms |
| First example | 27 ms | 75 ms | 520 ms |
| Second example | 27 ms | 72 ms | 450 ms |
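For reference, a timing harness along those lines could look like the following. This is only a sketch: the command, file name, and run count are placeholders for illustration, not my exact setup.

```perl
use strict;
use warnings;
use Time::HiRes qw/gettimeofday tv_interval/;

# Placeholder command and input file - substitute your own.
my @cmd  = ('perl', 'filter.pl', 'input.txt');
my $runs = 5;

my $total = 0;
for (1 .. $runs) {
    my $t0 = [gettimeofday];           # start timestamp
    system(@cmd) == 0
        or die "command failed: $?";
    $total += tv_interval($t0);        # elapsed wall-clock seconds
}
printf "average over %d runs: %.0f ms\n", $runs, 1000 * $total / $runs;
```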
This is Perl 5.24.1 on Linux. As you can see, although awk might have a minor advantage, I don't think you have anything to worry about in terms of speed for your use case. If it ever became an issue, there are lots of ways to measure and optimize Perl code, e.g. the core Benchmark module and profilers like Devel::NYTProf.
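To show what that might look like, here's a minimal Benchmark sketch comparing two hypothetical ways of doing the same match. The sample data and sub names are made up for illustration; they're not taken from the code above.

```perl
use strict;
use warnings;
use Benchmark qw/cmpthese/;

# Made-up sample data for illustration only.
my @lines = ("foo bar baz\n") x 1000;

# Run each candidate for at least 3 CPU seconds, then print a
# comparison chart (iterations/second and relative speed).
cmpthese( -3, {
    index_based => sub { my $n = grep { index($_, 'bar') >= 0 } @lines },
    regex_based => sub { my $n = grep { /bar/ } @lines },
});
```

And for profiling, running a script under Devel::NYTProf is usually just `perl -d:NYTProf script.pl`, followed by `nytprofhtml` to turn the collected data into an HTML report.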