weini has asked for the wisdom of the Perl Monks concerning the following question:
So I wrote a script to
- walk through the file tree,
- search for files 'newer' than n days, and
- print the results to an HTML file.
The main thing is done using File::Find and the function:

    find(\&wanted, $dir);

    sub wanted {
        if ((-f $_) && (-M $_ < $age)) {
            # get stats and push $_ onto an array
        }
    }

So far, so good. But the script takes some five hours to finish (due to the size of the fileshare and the network traffic). Now I'm asking for your input on how to do this better and faster.
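For reference, the approach described above can be sketched end to end as follows. This is a minimal illustration, not the original script: the `$dir` and `$age` values and the `report.html` filename are assumptions, and "get stats" is reduced to collecting the path name.

```perl
#!/usr/bin/perl
# Walk a tree with File::Find, collect files modified within the
# last $age days, and write the results to an HTML file.
use strict;
use warnings;
use File::Find;

my $dir = '/data/share';   # root of the fileshare (assumption)
my $age = 7;               # "newer than n days"   (assumption)
my @found;

find(\&wanted, $dir);

sub wanted {
    # -M gives the file's age in days relative to script start time;
    # the bare _ reuses the stat buffer from the -f test.
    push @found, $File::Find::name
        if -f $_ && -M _ < $age;
}

open my $fh, '>', 'report.html' or die "report.html: $!";
print {$fh} "<html><body><ul>\n";
print {$fh} "<li>$_</li>\n" for sort @found;
print {$fh} "</ul></body></html>\n";
close $fh;
```

Note that every `stat` here goes over the network for a networked fileshare, which is where the five hours go: the traversal itself is I/O-bound, not CPU-bound.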
I'd like to run the script regularly to keep the info up to date. A possible solution might be to create a database and keep looping through the fileshare, updating the db whenever a new file appears or a file has changed since the last visit.
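That incremental idea can be sketched with a persisted path-to-mtime snapshot; here Storable stands in for the database, and the `$dir` and cache filename are illustrative assumptions. A real database (e.g. DBD::SQLite via DBI) would follow the same compare-and-update pattern.

```perl
#!/usr/bin/perl
# Persist a path => mtime snapshot between runs and report only
# files that are new or have changed since the last visit.
use strict;
use warnings;
use File::Find;
use Storable qw(retrieve nstore);

my $dir   = '/data/share';    # root of the fileshare (assumption)
my $cache = 'mtimes.cache';   # snapshot from the previous run (assumption)
my $old   = -e $cache ? retrieve($cache) : {};
my (%seen, @changed);

find(sub {
    return unless -f $_;
    my $mtime = (stat _)[9];          # reuse the stat from -f
    $seen{$File::Find::name} = $mtime;
    push @changed, $File::Find::name
        if !exists $old->{$File::Find::name}
           || $old->{$File::Find::name} != $mtime;
}, $dir);

nstore(\%seen, $cache);               # snapshot for the next run
print "$_\n" for sort @changed;       # new or modified since last run
```

This still has to stat every file on each pass, so a single run is no faster; the gain is that the HTML report only needs regenerating from the small changed set rather than from a full re-scan of the results.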
Thanks for any other suggestions!
BTW: Yes, I've read maintain control over very many files.
Replies are listed 'Best First'.
Re: Find new files in tree
  by grinder (Bishop) on Apr 29, 2002 at 10:42 UTC
Re: Find new files in tree
  by belg4mit (Prior) on Apr 29, 2002 at 12:26 UTC
Re: Find new files in tree
  by particle (Vicar) on Apr 29, 2002 at 13:23 UTC
  by belg4mit (Prior) on Apr 29, 2002 at 14:02 UTC
  by particle (Vicar) on Apr 29, 2002 at 15:02 UTC
  by belg4mit (Prior) on Apr 29, 2002 at 19:34 UTC
Re: Find new files in tree
  by Rich36 (Chaplain) on Apr 29, 2002 at 13:58 UTC