in reply to Perl Script performance issue

This is a job for a database. In fact, generating a database on the fly from your existing data and then querying it will very likely be both quicker and easier to maintain than trying to juggle a plethora of files.

See Databases made easy to get your eye in with databases. Databases aren't as hard to use for this sort of job as you probably imagine, and they are much easier than trying to roll your own equivalent.
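A minimal sketch of the generate-and-query idea, assuming DBI and DBD::SQLite are installed; the table layout and rows are hypothetical stand-ins for whatever your files actually contain:

```perl
use strict;
use warnings;
use DBI;

# Build a throwaway database in memory (use a filename instead of
# :memory: if you want to keep it between runs).
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
    { RaiseError => 1, AutoCommit => 1 });

$dbh->do('CREATE TABLE records (id INTEGER, name TEXT, value REAL)');

# In a real script these rows would be parsed from the existing data
# files; they are hard-coded here to keep the sketch self-contained.
my $ins = $dbh->prepare(
    'INSERT INTO records (id, name, value) VALUES (?, ?, ?)');
$ins->execute(@$_) for ([1, 'alpha', 3.5], [2, 'beta', 1.2], [3, 'alpha', 0.7]);

# Let the database do the lookup and aggregation work.
my ($total) = $dbh->selectrow_array(
    'SELECT SUM(value) FROM records WHERE name = ?', undef, 'alpha');
print "alpha total: $total\n";
```

An index (`CREATE INDEX idx_name ON records (name)`) pays off once the row count gets large, which is exactly the situation where scanning flat files falls over.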

Premature optimization is the root of all job security

Replies are listed 'Best First'.
Re^2: Perl Script performance issue
by dasgar (Priest) on Dec 16, 2015 at 05:36 UTC

    The problem description sounds like it calls for a database-type structure. I haven't used it myself, but I believe DBD::CSV could be used to run SQL queries against the original files without first importing them into a database.
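    A sketch of that approach, assuming DBD::CSV is installed and the files have header rows; the scratch directory and one tiny file are created here only so the example is self-contained — in practice `f_dir` would point at the existing data files:

```perl
use strict;
use warnings;
use DBI;
use File::Temp qw(tempdir);

# Create a scratch directory with one small CSV file so the sketch
# runs on its own; normally f_dir would be your real data directory.
my $dir = tempdir(CLEANUP => 1);
open my $fh, '>', "$dir/orders.csv" or die $!;
print $fh "customer,amount\nalice,10\nbob,5\nalice,2\n";
close $fh;

my $dbh = DBI->connect('dbi:CSV:', undef, undef, {
    f_dir      => $dir,   # each CSV file in this directory becomes a table
    RaiseError => 1,
});

# Query the file as a table -- no import step required.
my ($total) = $dbh->selectrow_array(
    "SELECT SUM(amount) FROM orders WHERE customer = 'alice'");
print "alice total: $total\n";
```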

      This makes things fairly easy to code, at least for relatively simple cases, but I very much doubt it would solve the performance problem.

      For me, the real solution is either a hash structure in memory (super fast if feasible) or an actual database such as MySQL or MariaDB (or possibly even SQLite, though that might be a bit more complicated with several files), with full support for indexed data access enabling fast processing.
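      The in-memory hash approach needs nothing beyond core Perl. A sketch, with hard-coded rows standing in for the file contents (the field names are hypothetical):

```perl
use strict;
use warnings;

# Rows as they might be read from the input files.
my @rows = (
    [ 'alpha', 3 ],
    [ 'beta',  1 ],
    [ 'alpha', 4 ],
);

# One pass over the data builds a hash keyed on the lookup field,
# so every later lookup is O(1) instead of a fresh scan of the files.
my %by_key;
for my $row (@rows) {
    my ($key, $value) = @$row;
    push @{ $by_key{$key} }, $value;
}

# Constant-time lookup, however many rows there are.
my $sum = 0;
$sum += $_ for @{ $by_key{alpha} };
print "alpha: $sum\n";
```

This is essentially what a database index does for you; the hash wins when the data fits in memory, and the database wins once it does not.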