The special Perl variable $. holds the current line number of the filehandle you're reading. Hence, at the end of the file, it gives you the total number of lines:
    while (<FILE>) {}
    print "length = $.";
As an aside, it's conventional in Perl to reserve all-uppercase names for file handles and the like, rather than using them for ordinary variables as you do here. IMHO following that convention makes your code easier to read.
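For instance, here's a minimal sketch using a lowercase lexical filehandle instead of a bareword ALL-CAPS one (the demo filename and contents are hypothetical, just to make the snippet self-contained):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Create a small hypothetical demo file so this sketch is runnable.
open my $out, '>', 'demo.txt' or die "open: $!";
print {$out} "line $_\n" for 1 .. 3;
close $out;

# A lexical filehandle (lowercase, scoped with "my") rather than a
# bareword handle; the ALL-CAPS style is then reserved for built-ins
# like STDIN, STDOUT, and STDERR.
open my $in, '<', 'demo.txt' or die "open: $!";
while (<$in>) { }           # just read to the end, counting lines
print "read $. lines\n";    # $. holds the last line number read
close $in;                  # note: closing resets $.
unlink 'demo.txt';
```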

Update: In light of gbarr's post below, I did some benchmarking. gbarr is correct that snarfing the file in chunks is much faster. I tested quite a few large files (>1 million lines) and found that in most cases reading 100K chunks and counting linefeeds is several times faster than reading line by line; in a few cases the two approaches were close. So I definitely recommend gbarr's solution. The benchmark code follows:

    #!/usr/bin/perl
    use warnings;
    use strict;
    use Benchmark;

    timethese(100, {
        'line_by_line' => q{
            open(INFILE, 'test.dat') or die "open: $!";
            while (<INFILE>) {}
            close(INFILE);
        },
        'chunk' => q{
            my $count = 0;
            open(INFILE, 'test.dat') or die "open: $!";
            local $/ = \1024000;
            while (<INFILE>) { $count += tr/\n// }
            close(INFILE);
        },
    });
On some sample data, this produces the following:
    Benchmark: timing 100 iterations of chunk, line_by_line...
           chunk:  3 wallclock secs ( 1.58 usr + 0.70 sys =  2.28 CPU) @ 43.80/s (n=100)
    line_by_line: 12 wallclock secs ( 9.54 usr + 0.72 sys = 10.27 CPU) @  9.74/s (n=100)

Update #2: After further playing around and looking at other nodes on the same topic, it seems that sysread is faster still. New benchmarks using the following code

    open INFILE, 'test.dat' or die "open: $!";
    my $count = 0;
    while (sysread(INFILE, $_, 102400)) {
        $count += tr/\n//;
    }
yields
    /home/rhet/misc> ./testfcount.pl
    Benchmark: timing 100 iterations of chunk, line_by_line, sysread...
           chunk:  12 wallclock secs (  8.00 usr + 3.52 sys =  11.53 CPU) @  8.68/s (n=100)
    line_by_line: 154 wallclock secs (149.20 usr + 3.29 sys = 152.50 CPU) @  0.66/s (n=100)
         sysread:   6 wallclock secs (  4.09 usr + 1.62 sys =   5.71 CPU) @ 17.52/s (n=100)
I tried a variety of files and found some that favored one method or another, but in general sysread was usually the fastest, often by a wide margin. On files with very short lines the line_by_line method was extremely slow, though on files with much longer lines it was sometimes faster than the other two. Overall, sysread looks like your best bet. You could probably squeeze out further gains by tuning the block size you pass to sysread, but the optimal size will likely depend on the particular platform and configuration.
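As a starting point for that tuning, here's a minimal sketch that counts lines at a few candidate block sizes. It generates a small stand-in for test.dat so it runs anywhere; on a real file you'd time each pass (e.g. with Benchmark) and keep the size that wins on your system:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Generate a small stand-in for test.dat (1000 short lines).
open my $out, '>', 'test.dat' or die "open: $!";
print {$out} "x" x 20, "\n" for 1 .. 1000;
close $out;

# Count lines with sysread at several block sizes; every size should
# of course report the same count -- the interesting part on a large
# file is how long each pass takes.
for my $blocksize (4096, 65536, 102400) {
    open my $fh, '<', 'test.dat' or die "open: $!";
    my $count = 0;
    while (sysread($fh, my $buf, $blocksize)) {
        $count += ($buf =~ tr/\n//);
    }
    close $fh;
    print "block $blocksize: $count lines\n";
}
unlink 'test.dat';
```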

In reply to (RhetTbull) Re: Looking for a better way to get the number of lines in a file... by RhetTbull
in thread Looking for a better way to get the number of lines in a file... by coec
