PoorLuzer has asked for the wisdom of the Perl Monks concerning the following question:

The requirements are :

Fact 1 : We have some data files produced by a legacy system

Fact 2 : We have some data files produced by a new system that should eventually replace the legacy one

Fact 3 :

  1. Both files are text/ASCII files, with records composed of multiple lines.
  2. Each line within a record consists of a fieldname and a fieldvalue.
  3. The format in which the lines are presented differs between 1 and 2, but fieldname and fieldvalue can be extracted from each line with a regex.
  4. Field names can change between 1 and 2, but we have a mapping that relates them.
  5. Each record has a unique identifier that lets us relate a legacy record to a new record, since the ordering of records in the output file need not be the same across both systems.
  6. Each file to compare is at least 10 MB, with an average case of 30 - 35 MB.
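The extraction step described in points 2 - 4 might be sketched as follows. The line format, the regex, and the mapping table here are purely illustrative placeholders, not the real formats:

```perl
use strict;
use warnings;

# Hypothetical mapping from legacy field names to the new system's names.
my %field_map = (
    LEG_CUST_ID => 'customer_id',
    LEG_AMT     => 'amount',
);

# Extract (fieldname, fieldvalue) from one line. The regex is a stand-in
# for whatever pattern matches the real legacy line format.
sub parse_legacy_line {
    my ($line) = @_;
    return unless $line =~ /^(\w+)\s*:\s*(.*)$/;
    my ($name, $value) = ($1, $2);
    $name = $field_map{$name} // $name;   # normalize to the common name
    return ($name, $value);
}

my ($name, $value) = parse_legacy_line("LEG_AMT: 42.50");
# $name is now 'amount', $value is '42.50'
```

The same routine, with a different regex and an identity mapping, would serve for the new system's files.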

Fact 4 : As we iterate through building the new system, we need to compare the files produced by both systems under exactly the same conditions and reconcile the differences.

Fact 5 : This comparison is being done manually with an expensive visual diff tool. To help with this, I wrote a tool that maps the two different fieldnames to a common name and then sorts the field names in each record, in each file, so that they line up in order (new files can have extra fields, which are ignored in the visual diff).

Fact 6 : Because the comparison is done manually by humans, and humans make mistakes, we are getting false positives AND false negatives that are significantly impacting our timelines.

Obviously the question is, what should 'ALG' and 'DS' be?

The scenario I have to address :

I want to build a Perl program that will

  1. read the relevant info from both files into a data structure 'DS'
  2. process the records in the DS and find the differences using algorithm 'ALG'
  3. display/report statistics to the end user: how many lines (values) differed between the records, where they differ, whether the values are completely different, and whether lines are missing (files from the new system can have extra fields, but they MUST contain all lines that are present in the files produced by the legacy system)
My suggestions for:

DS : A nested hash, tied to disk.

Looks like:

$namedHash{ 'unique field value across both records' } = {
    legacy_system => {
        goodField   => 'I am good!',
        firstField  => 1,
        secondField => 3,
    },
    new_system => {
        goodField   => 'I am good!',
        firstField  => 11,
        secondField => 33,
    },
};

ALG : A custom key-by-key comparison between the anonymous hashes pointed to by the legacy_system and new_system keys. Any differences are noted by inserting a new key, 'differences', whose value is an array of the field names that differ between the legacy and new systems.

Hence, for this example, the output of my ALG will be:

$namedHash{ 'unique field value across both records' } = {
    legacy_system => {
        goodField   => 'I am good!',
        firstField  => 1,
        secondField => 3,
    },
    new_system => {
        goodField   => 'I am good!',
        firstField  => 11,
        secondField => 33,
    },
    differences => [ 'firstField', 'secondField' ],
};
What would you have done/suggest in this given scenario?

Replies are listed 'Best First'.
Re: Comparing records in file and reporting stats - Scenario 2
by jorgegv (Novice) on May 21, 2009 at 16:08 UTC

    First of all: do you really need the structure to be tied to disk? 35 MB for a worst case does not seem too huge a file for the common amount of RAM nowadays. Why don't you slurp the whole input files into memory, process them, then write your report?

    My DS would be almost the same as yours:

    $legacy_data{ 'unique field value across both records' } = {
        goodField   => 'I am good!',
        firstField  => 1,
        secondField => 3,
    };
    $new_data{ 'unique field value across both records' } = {
        firstField  => 11,
        secondField => 33,
        goodField   => 'I am good!',
    };
    $differences = [ 'firstField', 'secondField' ];

    That is, I don't strictly see the need for all data to be in ONE data structure.

    Regarding ALG, just populate both data structures by reading all files in memory at once, then process the records with something like

    foreach my $record_id (keys %legacy_data) {
        my $legacy_record = $legacy_data{$record_id};
        my $new_record    = $new_data{$record_id};
        ...   # here go all the tests you said
    }
    And then emit your report.
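    One possible version of those per-record tests, separating value mismatches from legacy fields missing in the new record (per the requirement that the new files must contain every legacy line) - the function name and return shape are just one way to slice it:

```perl
use strict;
use warnings;

# Given one legacy record and its new-system counterpart, return the
# lists of missing and differing field names.
sub compare_records {
    my ($legacy, $new) = @_;
    my (@missing, @changed);
    for my $field (sort keys %$legacy) {
        if (!exists $new->{$field}) {
            push @missing, $field;    # MUST be present in the new output
        }
        elsif ($new->{$field} ne $legacy->{$field}) {
            push @changed, $field;
        }
    }
    return (\@missing, \@changed);
}

my ($missing, $changed) = compare_records(
    { a => 1, b => 2, c => 3 },
    { a => 1, b => 9 },               # 'c' dropped, 'b' changed
);
# @$missing is ('c'), @$changed is ('b')
```

    Extra fields that exist only in the new record are deliberately ignored here, matching the stated requirements.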
Re: Comparing records in file and reporting stats - Scenario 2
by ig (Vicar) on May 22, 2009 at 03:20 UTC