in reply to Faster file read, text search and replace
Is there a way I can speed up the process in PERL
Without changing the approach, you can speed it up by dropping the unused $line scalar and the pointless chomp. Change the loop to:
while (<$bigfile>) { $lctr++; }
That should buy you a few percent. Beyond that, it would be better not to process the file line by line but block by block, with a variable (i.e. tunable) block size; maybe start with 16 MB or so. Then just count the newlines in each block once it is in memory.
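A minimal sketch of that block-counting idea (the 16 MB block size and the file-name handling are assumptions to adapt to your script):

```perl
use strict;
use warnings;

# Read the file in fixed-size raw blocks and count newlines
# with tr///, instead of iterating line by line.
my $blocksize = 16 * 1024 * 1024;    # 16 MB; tune to taste

my $file = shift @ARGV or die "Usage: $0 FILE\n";
open my $fh, '<:raw', $file or die "Could not open $file: $!";

my $lctr = 0;
my $buf;
while ( read $fh, $buf, $blocksize ) {
    $lctr += ( $buf =~ tr/\n// );    # newlines in this block
}
close $fh;
print "$file: $lctr lines\n";
```

Note that tr/// on a large in-memory buffer is much cheaper than the per-line readline overhead; a file that does not end in a newline will have its last partial line uncounted, same as wc -l.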
BTW, did you spot the bug on this line?
open my $outfile,">",$ARGV[1] or die "Error: Could not open output file $ARGV[0]:$!";
Re^2: Faster file read, text search and replace
by sabas (Acolyte) on Feb 28, 2018 at 20:08 UTC