Use a while loop rather than a foreach loop. foreach will read the entire file into memory before anything happens, because it supplies list context to the diamond operator (<DATASOURCE>). You can also let Perl keep track of the line number for you, since it's in the special variable $.:
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $datasource = "C:\\somelargetextfile.txt";
    open DATASOURCE, '<', $datasource or die "Can't open $datasource: $!";
    while (my $line = <DATASOURCE>) {
        print "$. - $line";
    }
    close(DATASOURCE);
If this is all that your code is doing, then I assume you're running on Windows, because on a Unix system you could just use the cat(1) command to do the same thing:
cat -n my_large_text_file.txt
Update: Ah, I see from your other post that you're inserting info into a database. Other than not reading the entire file into memory, there isn't much else that you're going to be able to do to speed up the loop.
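One thing worth checking on the database side, though: if you're building and executing a fresh INSERT for every line, preparing the statement once outside the loop and committing all the rows in a single transaction usually speeds things up considerably. A sketch of that idea, assuming DBI and a made-up table named "lines" (adapt the schema to yours):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical helper: reads $path line by line and inserts each
    # line into a "lines" table. The table and column names here are
    # assumptions for illustration, not from the original post.
    sub load_file_into_db {
        my ($path, $dbh) = @_;
        open my $fh, '<', $path or die "Can't open $path: $!";

        # Prepare once, outside the loop, instead of rebuilding the
        # SQL for every row.
        my $sth = $dbh->prepare(
            "INSERT INTO lines (line_no, text) VALUES (?, ?)");

        my $count = 0;
        while (my $line = <$fh>) {
            chomp $line;
            $sth->execute($., $line);   # $. is the current line number
            $count++;
        }
        $dbh->commit;                   # one transaction for all rows
        close $fh;
        return $count;
    }

(This assumes AutoCommit is off on the handle; with AutoCommit on, every execute is its own transaction, which is exactly the slow case you want to avoid.)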
Why do you care how fast this program runs? Will it be run often? Must it finish within some time constraint so that it won't hold up a larger process?
In reply to Re: speeding up script that uses filehandles
by duff
in thread speeding up script that uses filehandles
by Anonymous Monk