To begin with, I have directories with thousands of files, such as etexts. Many of them are duplicates, but IN DIFFERENT FILE FORMATS.
For example, a directory /FOO with 10 thousand files might have:
baz.txt
baz.epub
baz.doc
baz.pdf
bar.epub
boo.epub
boo.txt
Now, my first priority is, say, epub.
I have written a script that:
1. Parses all the files in the current directory (or any specified one) and dumps their file.ext names into an array, say @allfiles.
2. I also parse individual extensions into their own arrays, say @txt, @doc, @pdf, @epub.
3. Then I parse an extension array, say @pdf, against @allfiles, select only the pdf filenames that have a corresponding epub file, and move them to a subdir (a rough sketch of these steps follows).
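Stripped down, what the script does is roughly this (a sketch only; the directory argument, the "dups" subdirectory and the pdf/epub pairing are just for illustration):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use File::Basename qw(fileparse);
    use File::Copy     qw(move);

    my $dir    = shift // '.';     # directory to process
    my $dupdir = "$dir/dups";      # where the duplicate pdfs go
    mkdir $dupdir unless -d $dupdir;

    # 1. collect all file names in the directory
    opendir my $dh, $dir or die "Cannot open $dir: $!";
    my @allfiles = grep { -f "$dir/$_" } readdir $dh;
    closedir $dh;

    # 2. split them out by extension
    my @epub = grep { /\.epub$/i } @allfiles;
    my @pdf  = grep { /\.pdf$/i  } @allfiles;

    # 3. for each pdf, scan the epub list for the same base name and,
    #    if found, move the pdf aside. This nested scan is, I assume,
    #    where most of the time goes.
    for my $pdf (@pdf) {
        my ($base) = fileparse($pdf, qr/\.[^.]+$/);
        if ( grep { (fileparse($_, qr/\.[^.]+$/))[0] eq $base } @epub ) {
            move("$dir/$pdf", "$dupdir/$pdf") or warn "Could not move $pdf: $!";
        }
    }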
The script works, but it raises some questions:
a. It is glacially slow: it takes a few hours to work through 10k+ files. Is there a more efficient method, or a module that can handle this type of operation? I have to humbly admit I do not even know what this type of process is called. I am probably overlooking something painfully obvious here.
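In case it clarifies what I mean, my guess (untested) is that the per-file scan should be replaced by a hash keyed on base names, so each pdf needs only a single lookup. Something like this, reusing @epub, @pdf, $dir and $dupdir from the sketch above:

    # index the epub base names once ...
    my %has_epub;
    $has_epub{ (fileparse($_, qr/\.[^.]+$/))[0] } = 1 for @epub;

    # ... then each pdf is a single hash lookup instead of a scan of @epub
    for my $pdf (@pdf) {
        my ($base) = fileparse($pdf, qr/\.[^.]+$/);
        if ( $has_epub{$base} ) {
            move("$dir/$pdf", "$dupdir/$pdf") or warn "Could not move $pdf: $!";
        }
    }

Is that the "painfully obvious" thing, or is there a better tool for the job?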
b. In the extension parsing process I filter out extraneous characters and spaces so that $a exactly equals $b.
if ($a eq $b) - works, as expected.
But if ($a =~ /$b/) does not. Is there something I am overlooking here? I prefer matching to equality, as some 'dups' might have minor variations in characters.
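My suspicion is that it has to do with special characters in the file names. A small, self-contained example of the kind of failure I think I am hitting (file names invented):

    use strict;
    use warnings;

    # both strings are identical
    my $left  = "notes [draft].txt";
    my $right = "notes [draft].txt";

    print "eq matched\n"    if $left eq $right;    # prints, as expected
    print "regex matched\n" if $left =~ /$right/;  # does NOT print

    # My guess: the brackets in $right are treated as a character class, so
    # the pattern no longer means the literal file name. Maybe
    #     $left =~ /\Q$right\E/
    # (quotemeta) is what I should be using?

Is that the right way to think about it?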
Many thanks in advance :)
K