This is basically the code I'm talking about. It loops through a number of files, reads each one (that part is quick), and then loops through all parameter names in the file, checking whether each name already exists in the global @names array and taking action depending on the result. It is this check (find_element) that I want to speed up. Any ideas? If a hash is a good alternative, could you please explain the difference compared to an array? I'm new to Perl, so I'm not familiar with all the expressions... Thank you!
foreach $filename (@input_names)    # Go through input files
{
    ($names_ref, $data_ref) = &read_file($filename);
    @tmp_names = @$names_ref;
    @tmp_data  = @$data_ref;
    $tmp_index = 0;                 # Reset per file (was missing in the original)
    foreach $name (@tmp_names)      # Go through lines of current input file
    {
        # Check if variable is already present in name array (time consuming!)
        $index = &find_element($name, @names);
        if ($index == -1)           # Name not present, put both name and data in array
        {
            push(@names, $name);
            push(@data, $tmp_data[$tmp_index]);
        }
        else                        # Name present, only replace data for the name
        {
            $data[$index] = $tmp_data[$tmp_index];
        }
        $tmp_index++;
    }
}
The main difference between a hash and an array is that an array is indexed by number while a hash is indexed by name. So where a lookup in an array is fast if you know the position of the value, a lookup in a hash is fast if you know the "name" of the value, that is, the (string) key the value is associated with.
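For example, here is a rough, untested sketch of how your loop might look with a hash, keeping the read_file routine and variable names from your posted code. A single hash assignment replaces both the find_element scan and the push/replace branches, because assigning to a new key adds it and assigning to an existing key overwrites it:

my %data;    # parameter name => data value

foreach my $filename (@input_names) {
    my ($names_ref, $data_ref) = read_file($filename);
    for my $i (0 .. $#$names_ref) {
        # New names are added; repeated names get their data replaced
        # by the value from the later file.
        $data{ $names_ref->[$i] } = $data_ref->[$i];
    }
}

A hash lookup or assignment takes roughly constant time no matter how many names are already stored, whereas find_element presumably scans the whole @names array on every call, which is why your version slows down as the array grows. If you also need the names in the order they were first seen, keep a separate @names array and push a name onto it only when exists $data{$name} is false.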
Yeah, you probably want something more like this:
my %hash;
while (my $line = <>) {
    # Split each line into a parameter name and its value
    # (replace the pattern with whatever separates them in your files)
    my ($name, $value) = split m/something goes here/, $line;
    die "error: value already defined!" if exists $hash{$name};
    $hash{$name} = $value;
}
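Note that the die treats a repeated name as an error. If you want the behaviour from your original loop, where a later file simply replaces the data for a name that already exists, drop the die line and let the assignment overwrite the old value.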