Is there a way I can speed up the process in Perl?
Without changing the approach, you can speed it up by dropping the unused $line scalar and the pointless chomp. Change that loop to:
while (<$bigfile>) { $lctr++; }
That should buy you a few percent. Beyond that, it would be better not to process the file line by line but rather block by block with a variable (i.e. tunable) block size. Maybe start with 16MB or so, then just count the newlines in each block once it is in memory.
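A minimal sketch of that block-wise approach (the function name and the 16MB default are illustrative choices, not from the original post):

```perl
use strict;
use warnings;

# Count lines by reading the file in large raw blocks and counting
# newlines with tr///, which in scalar context returns the match count.
sub count_lines_blockwise {
    my ($path, $blocksize) = @_;
    $blocksize //= 16 * 1024 * 1024;       # 16MB default; tune to taste
    open my $fh, '<:raw', $path
        or die "Cannot open $path: $!";
    my ($count, $buf) = (0, '');
    while (read $fh, $buf, $blocksize) {
        $count += ($buf =~ tr/\n//);       # newlines in this block
    }
    close $fh;
    return $count;
}
```

Reading with the :raw layer avoids any line-ending translation, and tr/\n// is about as fast as Perl gets at counting characters in a buffer.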
BTW, did you spot the bug on this line?
open my $outfile,">",$ARGV[1] or die "Error: Could not open output file $ARGV[0]:$!";
In reply to Re: Faster file read, text search and replace by hippo
in thread Faster file read, text search and replace by sabas