Building on the advice from kcott and hdb, I would suggest combining the relevant data from both files into a single %accounts hash along the following lines:

#! perl
use strict;
use warnings;

use Scalar::Util qw(looks_like_number);
use Text::CSV;

my %accounts;

read_accounts ('a.csv');
merge_balances('b.csv');

print "\nOverdue accounts:\n\n";

my $count = 0;

for (keys %accounts)
{
    if ($accounts{$_}{overdue})
    {
        print 'Number: ',  $_,                      "\n";
        print 'Name: ',    $accounts{$_}{name},     "\n";
        print 'Created: ', $accounts{$_}{cdate},    "\n";
        print 'Balance: ', $accounts{$_}{balance},  "\n";
        print 'Due: ',     $accounts{$_}{due_date}, "\n\n";
        ++$count;
    }
}

print "Total overdue accounts: $count\n";

sub read_accounts
{
    my $file = shift;
    my $csv  = Text::CSV->new({ binary => 1 })
        or die 'Cannot use CSV: ' . Text::CSV->error_diag();

    open(my $fh, '<', $file) or die "Cannot open file '$file' for reading: $!";

    $csv->getline($fh);    # Discard first line (headings)

    while (my $row = $csv->getline($fh))
    {
        my ($number, $name, $created) = @$row[1 .. 3];
        die "Found duplicate accounts numbered '$number'" if exists $accounts{$number};
        $accounts{$number} = { cdate => $created, name => $name };
    }

    close $fh or die "Cannot close file '$file': $!";
    $csv->eof or $csv->error_diag();
}

sub merge_balances
{
    my $file = shift;
    my $csv  = Text::CSV->new({ binary => 1 })
        or die 'Cannot use CSV: ' . Text::CSV->error_diag();

    open(my $fh, '<', $file) or die "Cannot open file '$file' for reading: $!";

    $csv->getline($fh);    # Discard first line (headings)

    while (my $row = $csv->getline($fh))
    {
        my ($due_date, $balance, $number) = @$row[1 .. 3];
        $balance = 0 unless looks_like_number($balance);

        if (exists $accounts{$number})
        {
            $accounts{$number}{balance}  = $balance;
            $accounts{$number}{due_date} = $due_date;
            $accounts{$number}{overdue}  = $balance > 0
                                           && is_overdue($due_date, 10, 23, 2007);
        }
        else
        {
            warn "Account '$number' not found\n";
        }
    }

    close $fh or die "Cannot close file '$file': $!";
    $csv->eof or $csv->error_diag();
}

sub is_overdue
{
    my ($date, $cutoff_month, $cutoff_day, $cutoff_year) = @_;
    my ($month, $day, $year) = $date =~ m! ^ (\d{1,2}) / (\d{1,2}) / (\d{4}) !x;

    return 1 if $year  < $cutoff_year;
    return 0 if $year  > $cutoff_year;
    return 1 if $month < $cutoff_month;
    return 0 if $month > $cutoff_month;
    return 1 if $day   < $cutoff_day;
    return 0;
}

Output:

20:13 >perl 601_SoPW.pl

Overdue accounts:

Number: ar182364
Name: 12/1/2006 15:36
Created: Mayson Gettemy
Balance: 266.93
Due: 10/9/2007 15:54

Total overdue accounts: 1

20:16 >
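For reference, once merge_balances has run, each entry in %accounts has the following shape (a sketch only, with the field values elided):

my %accounts = (
    'ar182364' => {        # keyed on account number
        name     => '...', # from a.csv
        cdate    => '...', # creation date, from a.csv
        balance  => '...', # from b.csv (0 if not a number)
        due_date => '...', # from b.csv
        overdue  => 1,     # true/false, computed in merge_balances
    },
);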

Note that there is no point in reading in the odue field from file a.csv only to overwrite it.
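As an aside, if you would rather not compare the month, day, and year fields by hand, the same cutoff test can be written with the core Time::Piece module. Here is a minimal sketch, assuming the m/d/yyyy format shown in the sample data:

use Time::Piece;

sub is_overdue
{
    my ($date, $cutoff) = @_;    # both in m/d/yyyy format

    # Discard any trailing time of day, e.g. '10/9/2007 15:54' -> '10/9/2007'
    ($date) = split ' ', $date;

    # Time::Piece overloads the comparison operators, so this returns
    # true when $date falls strictly before $cutoff
    return Time::Piece->strptime($date,   '%m/%d/%Y')
         < Time::Piece->strptime($cutoff, '%m/%d/%Y');
}

For example, is_overdue('10/9/2007 15:54', '10/23/2007') returns true, matching the hand-rolled version above.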

Hope that helps,

Athanasius <°(((>< contra mundum


