I have to iterate over huge text files that hold information about many different files, specifically phonetic information about human speech.
The data is ordered, so I iterate over it until the file name changes, then I want to deal with the information collected about that one file.
Does my use of splice on the array of hashes free up the memory allocated for "one" file, so that I can reuse it for the next one? Or will the hashrefs be orphaned?
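A minimal probe of what I mean, using the core module Scalar::Util: if splice really drops the last strong reference to the hash, a weakened copy of the hashref should become undef afterwards:

use strict;
use warnings;
use Scalar::Util qw(weaken);

my @cvs = ( { file => 1, text => 'v' } );

my $probe = $cvs[0];   # second reference to the same anonymous hash
weaken($probe);        # made weak, so it does not keep the hash alive

splice @cvs, 0, 1;     # removes the array's (only strong) reference

# if Perl freed the hash, the weak reference is now undef
print defined $probe ? "hashref still alive\n" : "hash was freed\n";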
Here's some sample code that uses the same technique:
use strict;
use warnings;

my @words;
my @cvs;

# push some hash data onto the arrays; in real life these would be
# huge files with phonetic transcription, not this toy CV data
push @cvs, { file => 1, text => 'v' };
push @cvs, { file => 1, text => 'c' };
push @cvs, { file => 1, text => 'c' };
push @cvs, { file => 2, text => 'c' };
push @cvs, { file => 2, text => 'v' };
push @cvs, { file => 2, text => 'c' };
push @cvs, { file => 2, text => 'c' };

push @words, { file => 1, text => 'üks' };
push @words, { file => 2, text => 'kaks' };

# for each word we have to find its constituents
for my $w_i (0 .. $#words) {
    my %word          = %{ $words[$w_i] };
    my $cv_as_text    = '';
    my @cvs_to_delete = ();
    print "$w_i: $word{text}\n";

    # now loop through the constituents and collect the matches
    for my $c_i (0 .. $#cvs) {
        my %cv = %{ $cvs[$c_i] };
        # in real life this is matched on microseconds, not file number
        if ($cv{file} == $word{file}) {
            $cv_as_text .= $cv{text};
            push @cvs_to_delete, $c_i;
            print $cv{text};
        }
        # we don't want to search to the end (data is ordered)
        elsif ($cv{file} > $word{file}) {
            last;
        }
    }
    print "\n";

    # now delete all cvs we extracted, highest index first so the
    # remaining indices stay valid (numeric sort, not the string
    # sort that a bare "reverse sort" would do)
    for my $del (sort { $b <=> $a } @cvs_to_delete) {
        splice @cvs, $del, 1;
    }
}
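Since both arrays are ordered by file, I'm also wondering whether shifting matched records off the front of @cvs would sidestep the index bookkeeping entirely; shift likewise removes the array's reference to the hash, so it should be freed once nothing else points at it. A sketch under that assumption, reusing the toy data from above:

# assumes @words and @cvs are populated as in the sample above,
# both ordered by their file field
for my $word (@words) {
    my $cv_as_text = '';
    # consume every matching record from the front of @cvs;
    # shift removes the hashref from the array, so the hash can
    # be reclaimed once $cv goes out of scope
    while (@cvs and $cvs[0]{file} == $word->{file}) {
        my $cv = shift @cvs;
        $cv_as_text .= $cv->{text};
    }
    print "$word->{text}: $cv_as_text\n";
}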