in reply to Re: Sorting based on filesize.
in thread Sorting based on filesize.

a hash of filenames keyed on sizes could be used instead.

Well, when two or more files happen to have the same size, the hash will keep only one file name from that set. If that's okay, then yes, a simple hash like this could be used.

If that's not okay, then you'd need a hash of arrays:

my %hash;
opendir( D, "." ) or die "opendir: $!";
while ( defined( $_ = readdir D ) ) {
    next unless -f;                   # skip non-files; this also stats the file
    push @{ $hash{ -s _ } }, $_;      # -s _ reuses the stat data from -f
}
closedir D;
for my $size ( sort { $a <=> $b } keys %hash ) {
    for my $file ( sort @{ $hash{$size} } ) {
        print "$size\t$file\n";
    }
}
Update: added a comment about how many times stat is actually called on each file (just once, not twice), and wanted to point out that the Schwartzian Transform as demonstrated by pbeckingham is likely to be the best solution overall.
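For reference, the Schwartzian Transform mentioned above can be sketched like this. The helper name sort_by_size is hypothetical (not from pbeckingham's post); the idea is that each file is stat'ed exactly once, no matter how many comparisons the sort performs:

```perl
use strict;
use warnings;

# Schwartzian Transform, smallest file first.
# sort_by_size() is a hypothetical helper name for illustration.
sub sort_by_size {
    my @files = @_;
    return
        map  { $_->[1] }                 # 3. discard the cached size
        sort { $a->[0] <=> $b->[0] }     # 2. compare the cached sizes
        map  { [ -s $_, $_ ] }           # 1. pair each name with its size once
        @files;
}

# Example: all plain files in the current directory, smallest first.
print "$_\n" for sort_by_size( grep { -f } glob '*' );
```

Files of equal size all survive here (unlike a hash keyed on size alone), though their relative order within a size is not specified.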

Re^3: Sorting based on filesize.
by keszler (Priest) on Jul 21, 2004 at 12:02 UTC
    Good point. This fixes it:

    @files = grep { -f } glob '* .*';
    @hash{ map { (-s $_) . ".$_" } @files } = @files;
    @files = @hash{ sort { $a <=> $b } keys %hash };
    print map { $_, $/ } @files;
    but it's rather a moot point: the Schwartzian Transform goes a step further and eliminates the hash entirely.