in reply to Archive::Tar extract_file

I'm pleased you found the module review, and I'm sorry it wasn't clearer. (Let me know if you have any suggestions for changes or additions.)

If you are always extracting the full content of each tar file to some specific place, then this might be a more efficient approach. Note that I'm assuming the tar file name ends in ".tar", ".tar.gz", or ".tgz", and I'm stripping that suffix off when naming the directory under EXTRACTED; personally, I think having directories named foo.tar.gz and so on is a bad idea:

    use Archive::Tar;

    foreach ( @sorted ) {
        my $tar = Archive::Tar->new($_);
        ( my $dest = "./EXTRACTED/$_" ) =~ s/\.t(?:ar(?:\.gz)?|gz)$//;
        mkdir $dest unless -d $dest;
        chdir $dest or die "mkdir/chdir failed on $dest: $!";
        $tar->extract();   # no params: extract full content to cwd
        chdir "../..";     # return to original cwd
    }
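One caveat: plain mkdir only creates a single directory level, so ./EXTRACTED itself must already exist before the loop runs. If it might not, the core File::Path module can create the whole path in one call; a minimal sketch, assuming the same $dest as above:

    use File::Path qw(make_path);   # older versions export mkpath() instead

    make_path($dest) unless -d $dest;   # creates ./EXTRACTED and $dest as needed
    chdir $dest or die "chdir failed on $dest: $!";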
If you really wanted to extract each file individually for some reason (e.g. to divvy them out to different extraction paths depending on some feature), then your loop over files would work better this way:
    for my $file ( $tar->get_files ) {
        my $dataref = $file->get_content_by_ref;
        # open a suitable output file and print $$dataref to it.
        # You can use $file->name to see the tarred path and
        # make subdirs as you see fit.
    }
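If it helps, here is one way to flesh out those comments; a sketch only, assuming you want to mirror each member's tarred path under some $dest directory (the $dest name is a placeholder; File::Basename and File::Path are core modules):

    use File::Basename qw(dirname);
    use File::Path     qw(make_path);

    my $dest = "./EXTRACTED/foo";   # wherever this tar's content should go
    for my $file ( $tar->get_files ) {
        next unless $file->is_file;             # skip dirs, symlinks, etc.
        my $path    = "$dest/" . $file->name;
        my $dataref = $file->get_content_by_ref;
        make_path( dirname($path) );            # create subdirs as needed
        open( my $out, '>', $path ) or die "can't write $path: $!";
        binmode $out;                           # contents may be binary
        print $out $$dataref;
        close $out or die "close failed on $path: $!";
    }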
Of course, if the tar files are small and/or there are few files involved, you probably won't notice a difference relative to the "list_files() ... extract_file()" approach. (I only noticed it on tar files containing thousands of data files.)
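For reference, the approach I mean is the one-call-per-member pattern; a minimal sketch (the destination path is a placeholder):

    foreach my $name ( $tar->list_files ) {
        $tar->extract_file( $name, "./EXTRACTED/foo/$name" );
    }

I suspect the slowdown comes from each extract_file() call having to search the archive's member list again, which only starts to hurt once there are thousands of members.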