in reply to Speeding up perl script

You don't give any indication of the format of the individual files. IMO, other than jettero's earlier suggestion (of using a hash), the easiest solution - subject to you having any say and a suitable file format - would be to do each file, thereby satisfying both of the principal requirements since...

A user level that continues to overstate my experience :-))

Replies are listed 'Best First'.
Re^2: Speeding up perl script
by samuelalfred (Sexton) on Jan 28, 2009 at 09:13 UTC

    Thank you for your answer, and sorry for the lack of detail in my description. The files I am reading are text files in which each row contains a parameter name and its value, separated by spaces. However, I have no difficulty reading the files and creating an array of names and an array of the corresponding values.

    What does "do" mean in this context? As I'm sure you understand, I'm quite a beginner at Perl :)
      For a description of the do operator, see perlfunc.
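      To make the "do each file" suggestion concrete: if the parameter files can be written in Perl's own syntax, do compiles and runs each file and returns its last expression. A minimal sketch - the file contents and parameter names here are invented for illustration, and a temporary file stands in for one of your parameter files:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

# Hypothetical parameter file written as Perl code: its last
# expression is a list of name => value pairs.
my ($fh, $file) = tempfile();
print $fh "( speed => 42, mode => 'fast' )\n";
close $fh;

# do compiles and runs the file and returns its last expression;
# in list context that list initialises the hash directly.
my %vars = do $file;
print "$vars{speed} $vars{mode}\n";    # prints "42 fast"
```

      This only applies if you have a say in the file format; for plain name/value text files, a split-based loop is the usual approach.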

      In the light of your reply, I'd probably do something along the lines of the following (in a script)...

      use warnings;
      use strict;

      my %vars;
      while (<>) {
          my ($name, $value) = split;
          $vars{$name} = $value;
      }
      which takes each line of each file given on the command line and updates the %vars hash, keyed on the first field with the value taken from the second field - as per postings here and elsewhere in this thread.

      After creating the hash, using the parameter names as the keys (see above), you can check whether a parameter is present with:
      if (exists $vars{'parameter'}) {
          # It is there!
      }
      else {
          # It is not there
      }
      This avoids an iterative search through the array.
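      To illustrate the difference, here is a sketch (with made-up parameter names) contrasting the two approaches: the linear scan of parallel arrays must compare names one by one, while the hash answers each query in a single lookup.

```perl
use strict;
use warnings;

# Hypothetical parallel arrays of parameter names and values.
my @names  = ('param_a' .. 'param_z');
my @values = (1 .. 26);

my $wanted = 'param_q';

# Linear search: walks the array until the name matches.
my $found;
for my $i (0 .. $#names) {
    if ($names[$i] eq $wanted) {
        $found = $values[$i];
        last;
    }
}

# Hash lookup: build the hash once (a hash slice pairs the two
# arrays up), then every query is a single lookup.
my %vars;
@vars{@names} = @values;
my $quick = exists $vars{$wanted} ? $vars{$wanted} : undef;

print "$found $quick\n";    # prints "17 17" - both find the same value
```

      For many parameters and many queries, building the hash once and looking up by key is far cheaper than rescanning the arrays each time.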