You probably don't have to worry about that. While your program may be reading line by line (presumably because that is what your algorithm needs), behind the scenes Perl is actually buffering input and reading larger chunks of data from the disk (possibly 4 or 8 kB at a time, depending on your OS, hardware, etc.). In brief, there is almost no penalty for reading your file line by line. I use Perl programs daily to read gigabytes or even dozens of gigabytes of data; if it makes sense in the functional context to read the data line by line, then just do it.
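For example, a minimal sketch of the usual line-by-line idiom (the file name is just a placeholder for your own data file):

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $file = 'big_data.log';    # hypothetical file name
    open my $fh, '<', $file or die "Cannot open $file: $!";

    my $count = 0;
    while ( my $line = <$fh> ) {
        chomp $line;
        # ... process $line here ...
        $count++;
    }
    close $fh;

    print "Processed $count lines\n";

Even on multi-gigabyte files this loop is disk-bound, not Perl-bound, because the I/O layer underneath is fetching whole blocks at a time.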
No, it won't affect efficiency. Although you appear to read line by line, the underlying library actually reads block by block, and the <> operator parses the buffer and gives you one line at a time.
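Here is a rough sketch of that idea, written in plain Perl. This is not Perl's actual internal implementation, and the file name and 8 kB block size are just illustrative assumptions; it only shows the general pattern of "read a block, then hand out lines":

    use strict;
    use warnings;

    my $file = 'big_data.log';    # hypothetical file name
    open my $fh, '<', $file or die "Cannot open $file: $!";

    my $buffer = '';
    while ( read( $fh, my $block, 8192 ) ) {    # pull 8 kB from disk at a time
        $buffer .= $block;
        while ( $buffer =~ s/^(.*?\n)// ) {     # peel off each complete line
            my $line = $1;
            # ... process $line ...
        }
    }
    # Anything left in $buffer is a final line with no trailing newline.
    close $fh;

In practice you just write while (<$fh>) and let the buffered I/O layer do this for you.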