You could use the core File::Find module and the stat function to scan a directory tree for the file occupying the most disk blocks (a block is 512 bytes on many file systems, though not all). Running the following code in my "Downloads" directory shows that a large CentOS ISO is the culprit.
$ perl -MFile::Find -Mstrict -Mwarnings -E '
    my %largest = ( name => q{}, size => 0 );
    find(
        sub {
            my $blocks = ( stat )[ 12 ];
            do {
                $largest{ name } = $File::Find::name;
                $largest{ size } = $blocks;
            } if $blocks > $largest{ size };
        },
        q{.} );
    say qq{$largest{ name } - $largest{ size } blocks};'
CentOS-5.10-x86_64-bin-DVD-1of2.iso - 9125976 blocks
$
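If you want the largest file by bytes rather than by blocks, element 7 of the stat list is the size in bytes, which sidesteps the block-size question altogether. Here is a sketch along the same lines (not the one-liner above: it is written as a standalone script, and it also skips anything that isn't a plain file, which the one-liner does not):

    #!/usr/bin/perl
    #
    # A sketch of the same scan keyed on bytes rather than blocks.
    # Assumes the current directory as the starting point, like the
    # one-liner above.
    use strict;
    use warnings;
    use feature q{say};
    use File::Find;

    my %largest = ( name => q{}, size => 0 );

    find(
        sub {
            return unless -f;             # skip directories, links, etc.
            my $bytes = ( stat )[ 7 ];    # element 7 is size in bytes
            if ( $bytes > $largest{ size } )
            {
                $largest{ name } = $File::Find::name;
                $largest{ size } = $bytes;
            }
        },
        q{.} );

    say qq{$largest{ name } - $largest{ size } bytes};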
I hope this is helpful.
Update: Substituted $File::Find::name for $_ to store the full path rather than just the file name.
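For anyone wondering why that mattered: with find()'s default behaviour the wanted routine runs chdir()ed into each directory, so $_ holds only the bare file name while $File::Find::name carries the path from the starting point. A minimal illustration:

    #!/usr/bin/perl
    #
    # Shows the difference between $_ and $File::Find::name inside
    # the wanted routine (default behaviour, without no_chdir).
    use strict;
    use warnings;
    use feature q{say};
    use File::Find;

    find(
        sub {
            return unless -f;
            say qq{\$_ = $_  vs  \$File::Find::name = $File::Find::name};
        },
        q{.} );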
Cheers,
JohnGG