Depending on your array sizes, you may get better efficiency from a splice than from your grep, since grep has to create a new array and copy nearly all of the old file names. That code might look something like:
sub remove_from_dtgs {
    my ($dtg, $file) = @_;
    # Iterate in reverse so splicing out an element doesn't shift
    # the indices we haven't visited yet.
    for my $i (reverse 0 .. $#{$dtgs{$dtg}}) {
        splice @{$dtgs{$dtg}}, $i, 1 if $dtgs{$dtg}[$i] eq $file;
    }
    delete $dtgs{$dtg} unless @{$dtgs{$dtg}};
}
If you know the lists are unique (no repeats) and that this is the only routine that modifies the arrays, you can add some Loop Control and do a little better:
sub remove_from_dtgs {
    my ($dtg, $file) = @_;
    for my $i (reverse 0 .. $#{$dtgs{$dtg}}) {
        if ($dtgs{$dtg}[$i] eq $file) {
            splice @{$dtgs{$dtg}}, $i, 1;
            delete $dtgs{$dtg} unless @{$dtgs{$dtg}};
            last;   # entries are unique, so we can stop at the first match
        }
    }
}
Note that in your original you were missing parentheses in your test, and that all logical tests impose scalar context, so the scalar is unnecessary.
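As a minimal sketch (the array name here is made up, since the original test isn't quoted above), these two conditions are equivalent because if already evaluates its argument in scalar context:

#!/usr/bin/perl
use strict;
use warnings;

my @files = ();   # hypothetical array standing in for the original's

print "empty (explicit)\n"  if scalar(@files) == 0;  # redundant scalar()
print "empty (idiomatic)\n" if !@files;              # same result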
Of course, this is an optimization, so make sure to actually profile (perhaps with Devel::NYTProf) rather than guess at what's slow.
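If you do profile, a typical invocation (process_dtgs.pl is a hypothetical script name here) looks like:

perl -d:NYTProf process_dtgs.pl    # writes nytprof.out
nytprofhtml                        # renders an HTML report under ./nytprof/

Then open nytprof/index.html and look at the per-line timings before deciding which version to keep.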
Update: Or, of course, given a uniqueness constraint, you could just use a hash:
open(my $dtg_file, "<", $infile) or die "Unable to open $infile: $!\n";
while (<$dtg_file>) {
    chomp;
    my ($dtg, @files) = split /:/;
    $dtgs{$dtg}{$_}++ for @files;   # file names become keys of the inner hash
}
close $dtg_file;

sub remove_from_dtgs {
    my ($dtg, $file) = @_;
    delete $dtgs{$dtg}{$file};
    delete $dtgs{$dtg} if !keys %{$dtgs{$dtg}};
}
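One consequence of the hash-of-hashes layout is that anywhere the rest of your code consumed the arrays, it should now read the inner keys instead. A minimal sketch (the DTG value and file names are made up for illustration):

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical data in the same shape the reading loop above produces.
my %dtgs = ( '2012010100' => { 'a.dat' => 1, 'b.dat' => 1 } );

my $dtg  = '2012010100';
my $file = 'a.dat';

my @files_for_dtg = sort keys %{ $dtgs{$dtg} };     # where you once had @{$dtgs{$dtg}}
print "files for $dtg: @files_for_dtg\n";
print "have $file\n" if exists $dtgs{$dtg}{$file};  # O(1) membership test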
First ask yourself "How would I do this without a computer?" Then have the computer do it the same way.