The solutions others have pointed out are the best you can do if you are reading a Unix-type stream file. (If you are on Unix or Windows, this is the normal kind of file.) Some platforms, however, have other types of files (and yes, other types of text files) that are stored differently at the filesystem level, in such a way that seeking to a specific line is much less expensive. The index solution is an attempt to fake this, but it's less efficient than the real thing. I don't know how well Perl supports those file types, though, and in any case it doesn't help you if you're on Unix or Windows. But if you're developing something that will be deployed as a "solution" and you haven't settled on the platform you're going to run it on yet, it's something to consider.
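For reference, here is a minimal sketch of the index approach in plain Perl, assuming an ordinary text file on a stream filesystem; the file name and target line number are placeholders, not anything from the thread. It pays for one full pass up front, after which any line can be reached with a single seek.

    use strict;
    use warnings;

    my $file   = 'data.txt';   # hypothetical input file
    my $wanted = 1000;         # 1-based line number we want

    open my $fh, '<', $file or die "Cannot open $file: $!";

    # Build an index of byte offsets: $offset[$n] is where line $n+1 starts.
    my @offset = (0);
    while (<$fh>) {
        push @offset, tell $fh;
    }
    die "file has only ", scalar(@offset) - 1, " lines\n" if $wanted >= @offset;

    # Seek straight to the start of the wanted line and read it.
    seek $fh, $offset[$wanted - 1], 0 or die "seek failed: $!";
    my $line = <$fh>;
    print $line;
    close $fh;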
There are also other ways to fake it besides the index method. If you can pin down the maximum line length, for example, and you have control over whatever creates these files, you could use a fixed "record" (line) length, which makes seeking to a specific line very fast (O(1) time). It does make the file take up more space, potentially twice as much or more depending on the ratio of typical to maximum line length, but a bigger disk is often a good deal cheaper than a faster one, so this can be a reasonable tradeoff.
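A fixed-record layout might look something like the following sketch; the record length, file name, and sample data are all assumptions for illustration. Since line n starts at byte (n - 1) * $RECLEN, reaching it is one seek and one fixed-size read.

    use strict;
    use warnings;

    my $RECLEN = 128;           # assumed fixed record length in bytes
    my $file   = 'records.dat'; # hypothetical file name

    # Writing: pad every line out to exactly $RECLEN bytes.
    open my $out, '>', $file or die "Cannot write $file: $!";
    binmode $out;               # keep byte offsets exact on any platform
    for my $text ('first line', 'second line', 'third line') {
        my $rec = sprintf "%-*s\n", $RECLEN - 1, $text;
        die "record longer than $RECLEN bytes\n" if length($rec) > $RECLEN;
        print {$out} $rec;
    }
    close $out;

    # Reading line 3 in O(1): one seek, one fixed-size read.
    my $wanted = 3;
    open my $in, '<', $file or die "Cannot read $file: $!";
    binmode $in;
    seek $in, ($wanted - 1) * $RECLEN, 0 or die "seek failed: $!";
    read $in, my $rec, $RECLEN or die "read failed: $!";
    $rec =~ s/\s+\z//;          # strip the padding and trailing newline
    print "$rec\n";
    close $in;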
$;=sub{$/};@;=map{my($a,$b)=($_,$;);$;=sub{$a.$b->()}} split//,".rekcah lreP rehtona tsuJ";$\=$;->();print$/
In reply to Re: Fast way to read from file
by jonadab
in thread Fast way to read from file
by Hena