in reply to Merge Purge

Hello

What you need is a hash of hashes, where one of the values is an array of records. The keys of the hash are the matchkeys, and each value is a reference to a hash holding the shared field values plus an array of the per-record fields (ID and name).

#!/usr/bin/perl -w
use strict;
use Data::Dumper;

my @FLDS  = qw(ID name address city state zip phone matchkey);
my @RFLDS = qw(address city state zip phone);
my @UFLDS = qw(ID name);

my %records;
for (<DATA>) {
    chomp;
    my %rec;
    @rec{@FLDS} = split /\|/, $_, -1;   # -1 keeps trailing empty fields
    my %urec;
    @urec{@UFLDS} = @rec{@UFLDS};
    my $key = $rec{matchkey};
    defined($key) or die "This record has no key";
    push @{ $records{$key}{records} }, \%urec;

    # Remember the first non-empty value seen for each shared field,
    # regardless of the order the records appear in.
    for (@RFLDS) {
        $records{$key}{$_} = $rec{$_}
            unless defined $records{$key}{$_} and length $records{$key}{$_};
    }
}

for my $key (sort keys %records) {
    my %master = %{ $records{$key} };
    for (@{ $master{records} }) {
        my %rec = (%master, %$_);
        $rec{matchkey} = $key;
        print join('|', @rec{@FLDS}) . "\n";
    }
}

print Dumper(\%records);

__DATA__
1|krazken|123 Main|BFE|AR|72210|555-2345|1
2|kraken||||||1
3|krayken|||||555-2345|1
I believe this is as efficient a data structure as one could come up with.

Here is a dump of the data structure to show you what the code is doing:

1|krazken|123 Main|BFE|AR|72210|555-2345|1
2|kraken|123 Main|BFE|AR|72210|555-2345|1
3|krayken|123 Main|BFE|AR|72210|555-2345|1
$VAR1 = {
          '1' => {
                   'state' => 'AR',
                   'zip' => '72210',
                   'address' => '123 Main',
                   'city' => 'BFE',
                   'phone' => '555-2345',
                   'records' => [
                                  {
                                    'ID' => '1',
                                    'name' => 'krazken'
                                  },
                                  {
                                    'ID' => '2',
                                    'name' => 'kraken'
                                  },
                                  {
                                    'ID' => '3',
                                    'name' => 'krayken'
                                  }
                                ]
                 }
        };
Update: As others have already suggested, if you have millions of records, you really should consider an SQL database. Dumping the flat file into the database should be simple, and it avoids all the redundant information you're trying to add to your flat file.
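A minimal sketch of that approach, assuming DBD::SQLite is available (the in-memory database, table layout, and the MAX-per-group merge query are illustrative, not from the original post):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to a throwaway in-memory SQLite database for illustration;
# a real run would use a file such as dbi:SQLite:dbname=records.db.
my $dbh = DBI->connect('dbi:SQLite:dbname=:memory:', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

$dbh->do('CREATE TABLE records
          (id, name, address, city, state, zip, phone, matchkey)');

my $ins = $dbh->prepare('INSERT INTO records VALUES (?,?,?,?,?,?,?,?)');

# Sample rows in the same pipe-delimited layout as the post.
my @lines = (
    '1|krazken|123 Main|BFE|AR|72210|555-2345|1',
    '2|kraken||||||1',
    '3|krayken|||||555-2345|1',
);
for my $line (@lines) {
    $ins->execute(split /\|/, $line, -1);   # -1 keeps trailing empty fields
}
$dbh->commit;

# The "merge" then becomes a single query: MAX picks the non-empty
# string over the empty ones within each matchkey group.
my ($addr) = $dbh->selectrow_array(
    'SELECT MAX(address) FROM records GROUP BY matchkey');
print "$addr\n";
```

Once the data is in the database, the purge step is a DELETE or a SELECT DISTINCT rather than more flat-file bookkeeping.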

Update2: The code presented assumes nothing about the order in which records appear in the file. The runtime and memory requirements can be reduced dramatically IF some assumptions hold about the records (e.g. records with data always appear before records with missing data, or records with the same matchkey always appear grouped together). If you sort your data beforehand, you can achieve better runtime.
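For instance, if the file has been pre-sorted by matchkey, only one group ever needs to be held in memory at a time. A sketch under that assumption (the in-memory sample data stands in for a real sorted file):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my @FLDS  = qw(ID name address city state zip phone matchkey);
my @RFLDS = qw(address city state zip phone);

# Sample input, already sorted by matchkey (the assumption being made).
my $data = <<'END';
1|krazken|123 Main|BFE|AR|72210|555-2345|1
2|kraken||||||1
3|krayken|||||555-2345|1
END

my (%master, @group, $curkey, @out);

# Emit the current group, filling each record's empty fields from the
# master values accumulated for this matchkey, then reset the buffers.
sub flush_group {
    for my $rec (@group) {
        for (@RFLDS) {
            $rec->{$_} = $master{$_}
                unless defined $rec->{$_} and length $rec->{$_};
        }
        push @out, join '|', @{$rec}{@FLDS};
    }
    (%master, @group) = ();
}

open my $fh, '<', \$data or die $!;
while (my $line = <$fh>) {
    chomp $line;
    my %rec;
    @rec{@FLDS} = split /\|/, $line, -1;   # -1 keeps trailing empty fields
    flush_group() if defined $curkey and $rec{matchkey} ne $curkey;
    $curkey = $rec{matchkey};
    push @group, {%rec};
    for (@RFLDS) {
        $master{$_} = $rec{$_}
            unless defined $master{$_} and length $master{$_};
    }
}
flush_group();

print "$_\n" for @out;
```

Memory use is now proportional to the largest group, not the whole file, at the cost of an external sort beforehand.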

Hope this helps,,,

Aziz,,,

Replies are listed 'Best First'.
Re: Re: Merge Purge
by Fletch (Bishop) on Mar 22, 2002 at 15:16 UTC

    A compromise might be to make one pass over the file and build a DB_File database of the canonical information. Then run over the file again, and when there are missing fields, consult the db for the information. That way you reduce the amount of info you have to keep in memory, at the cost of reading the file twice.
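    A hedged sketch of that two-pass idea (the db filename, composite key format, and in-memory sample data are all illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DB_File;
use Fcntl;

my @FLDS  = qw(ID name address city state zip phone matchkey);
my @RFLDS = qw(address city state zip phone);

# Tie an on-disk hash; only the canonical values live on disk,
# so memory stays small no matter how big the input file is.
my $dbfile = 'canon.db';
tie my %canon, 'DB_File', $dbfile, O_RDWR | O_CREAT, 0666, $DB_HASH
    or die "Cannot tie $dbfile: $!";

my $data = <<'END';
1|krazken|123 Main|BFE|AR|72210|555-2345|1
2|kraken||||||1
3|krayken|||||555-2345|1
END

# Pass 1: record the first non-empty value for each (matchkey, field).
open my $fh, '<', \$data or die $!;
while (my $line = <$fh>) {
    chomp $line;
    my %rec;
    @rec{@FLDS} = split /\|/, $line, -1;
    for (@RFLDS) {
        my $k = "$rec{matchkey}\0$_";        # composite key in the DBM
        $canon{$k} = $rec{$_}
            if length $rec{$_} and not length($canon{$k} // '');
    }
}

# Pass 2: re-read the file and fill missing fields from the db.
my @out;
open $fh, '<', \$data or die $!;
while (my $line = <$fh>) {
    chomp $line;
    my %rec;
    @rec{@FLDS} = split /\|/, $line, -1;
    for (@RFLDS) {
        $rec{$_} = $canon{"$rec{matchkey}\0$_"} // ''
            unless length $rec{$_};
    }
    push @out, join '|', @rec{@FLDS};
}
print "$_\n" for @out;

untie %canon;
unlink $dbfile;
```

    The same shape works with any tied DBM (SDBM_File ships with the core perl distribution) if DB_File is not built on your system.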