Pearte has asked for the wisdom of the Perl Monks concerning the following question:
... and I build the hash like so:

    %files = Recurse([$drive], {match => '\.'});

and then I can shuffle through it like so:

    foreach $dir (sort keys %files) {
        foreach $file (@{ $files{$dir} }) {
            # Do work
        }
    }

The trouble is that there are literally millions of files in the hash and, as you can imagine, that's taking up a lot of memory ... don't ask. The process is able to hash the entire structure without any trouble, but somewhere in the program's loop I get "Out of memory" errors. Undoubtedly this is the result of the aforementioned hash. I don't need to keep any element of that hash around after I have used it, so my question is: "How do I delete the entries from the hash one by one?" In other words, how do I deallocate the memory of an entry in the hash individually, or is it even possible? Obviously, I hope this would allow me to avoid the memory errors, though I fear that might come at the expense of a great deal of memory-management overhead.
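For what the question is asking, Perl's delete operator removes a single entry from a hash. Here is a minimal sketch, assuming %files maps directory names to array references of file names, as the loop above implies (the sample hash contents are made up for illustration):

    use strict;
    use warnings;

    # Stand-in for the real hash built by Recurse() above.
    my %files = (
        'C:/logs'  => [ 'a.log', 'b.log' ],
        'C:/stuff' => [ 'c.txt' ],
    );

    # "sort keys" builds the complete key list up front, so deleting
    # entries from inside the loop is safe.
    foreach my $dir (sort keys %files) {
        foreach my $file ( @{ $files{$dir} } ) {
            # Do work with "$dir/$file" here.
        }
        delete $files{$dir};    # drop this entry once it's processed
    }

One caveat: delete makes the entry's memory available for reuse within the perl process, but perl rarely hands freed memory back to the operating system, so the process footprint still peaks at the size of the fully built hash. If the peak itself is the problem, the usual fix is to avoid building the giant hash at all, for example by doing the per-file work inside a File::Find wanted callback as each file is encountered.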
Replies are listed 'Best First'.
•Re: Hash Entry Deallocation
by merlyn (Sage) on Jun 30, 2003 at 17:48 UTC

Re: Hash Entry Deallocation
by chromatic (Archbishop) on Jun 30, 2003 at 17:49 UTC

Re: Hash Entry Deallocation
by Elian (Parson) on Jun 30, 2003 at 17:51 UTC

Re: Hash Entry Deallocation
by gjb (Vicar) on Jun 30, 2003 at 17:48 UTC

Re: Hash Entry Deallocation
by halley (Prior) on Jun 30, 2003 at 17:54 UTC
    by Elian (Parson) on Jun 30, 2003 at 18:25 UTC
    by halley (Prior) on Jun 30, 2003 at 20:14 UTC
    by Elian (Parson) on Jun 30, 2003 at 21:08 UTC
    by bart (Canon) on Sep 02, 2003 at 14:14 UTC