(If I were trying to reduce the quantity of files, and/or organize the files by relative size, I'd sort them into a few zip archive files. Maybe combined pdfs are very handy and flexible and quick to open and search through with easy random access - I don't know - but I know that this is true for zip files.)
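To show what the zip-archive route might look like, here's a minimal sketch using the CPAN module Archive::Zip - the file glob and the archive name are placeholders of my own, not anything from your setup:

```perl
use strict;
use warnings;
use Archive::Zip qw( :ERROR_CODES );   # assumes Archive::Zip is installed

my @inp_files = glob('*.txt');         # hypothetical input set
my $zip = Archive::Zip->new();
$zip->addFile($_) for @inp_files;      # each member keeps its original name
die "write error\n"
    unless $zip->writeToFileNamed('group_small.zip') == AZ_OK;
```

One nice property of this over concatenated pdfs is that each member stays a separate, individually extractable file, so you keep random access for free.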
Second, it looks like your approach will do a lot of closing and reopening of those few large output files, and I'd worry that this might lead to a lot of thrashing, especially as the output files get bigger and you still have thousands more input files to append (and sort?). It would make more sense to scan all the inputs first, use a hash of arrays (or hash of hashes) to build an overview of the inventory, and then create each of the outputs with a single, tightly nested loop - something like this:
    my %lists_by_size;

    # get list of file names into @inp_files

    for my $file ( @inp_files ) {
        my $group = get_page_count( $file );
        push @{$lists_by_size{$group}}, $file;
    }

    for my $group ( keys %lists_by_size ) {
        # open output pdf (or output zip file) for this group
        for my $file ( @{$lists_by_size{$group}} ) {  # use sort here if you like
            # append this file to the output
        }
        # close the output
    }
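Here is that skeleton fleshed out into something runnable, with a stub standing in for get_page_count (in real use you'd pull the count from the pdf, e.g. via CAM::PDF or the pdfinfo tool - the canned counts and the small/large cutoff below are just illustration):

```perl
use strict;
use warnings;

# Stub page counter: canned counts keyed by file name.
my %fake_pages = ( 'a.pdf' => 3, 'b.pdf' => 12, 'c.pdf' => 2 );
sub get_page_count { return $fake_pages{ $_[0] } }

# Map a raw page count to a coarse size group (cutoff is arbitrary).
sub size_group { my $n = shift; return $n <= 5 ? 'small' : 'large' }

# Pass 1: scan all inputs, build the hash of arrays.
my %lists_by_size;
for my $file ( sort keys %fake_pages ) {
    push @{ $lists_by_size{ size_group( get_page_count($file) ) } }, $file;
}

# Pass 2: one tight loop per group; each output is opened exactly once.
for my $group ( sort keys %lists_by_size ) {
    print "$group: @{ $lists_by_size{$group} }\n";
}
# prints:
#   large: b.pdf
#   small: a.pdf c.pdf
```

The point of the two-pass shape is that each output file is opened once, filled in one go, and closed - no reopening and re-seeking as the archives grow.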
In reply to Re: Am I on the right track? by graff
in thread Am I on the right track? by Pharazon