in reply to File Handling for Duplicate Records

That's highly inefficient. For each line read via INFILE1, you iterate over all the lines of the previously read filehandle INFILE2, and you redo all the substring mumbo jumbo again and again.

It seems from your if condition that you are only interested in ($date, $number_dialed, $connect_time).

A better approach seems to be: read INFILE2 once up front, build a hash keyed on those three fields, then run through INFILE1 and check each line against that hash.
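Something along these lines (untested sketch; the file names, the comma-split and the field positions are guesses - adjust the parsing to however you actually pull out $date, $number_dialed and $connect_time):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Pass 1: read INFILE2 once and remember every
    # (date, number_dialed, connect_time) triple as a single hash key.
    my %seen;
    open my $infile2, '<', 'file2.txt' or die "Cannot open file2.txt: $!";
    while ( my $line = <$infile2> ) {
        chomp $line;
        my ( $date, $number_dialed, $connect_time )
            = ( split /,/, $line )[ 0, 1, 2 ];
        $seen{"$date\0$number_dialed\0$connect_time"} = 1;
    }
    close $infile2;

    # Pass 2: read INFILE1 once; spotting a duplicate is now a single
    # hash lookup instead of a full re-scan of the second file per line.
    open my $infile1, '<', 'file1.txt' or die "Cannot open file1.txt: $!";
    while ( my $line = <$infile1> ) {
        chomp $line;
        my ( $date, $number_dialed, $connect_time )
            = ( split /,/, $line )[ 0, 1, 2 ];
        print "duplicate: $line\n"
            if $seen{"$date\0$number_dialed\0$connect_time"};
    }
    close $infile1;

That turns the quadratic nested loop into two linear passes.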

If the storage needed for building the hash in the first place (reading INFILE2) surpasses your workstation's memory resources, store them in e.g. a DB_File or DBD::SQLite database.
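For the DB_File route that's roughly a two-line change: tie the same %seen hash to an on-disk file (the filename here is only a placeholder) and leave the two loops above untouched:

    use DB_File;
    use Fcntl qw(O_RDWR O_CREAT);

    # Same %seen hash as above, but backed by a disk file instead of RAM.
    tie my %seen, 'DB_File', 'seen.db', O_RDWR|O_CREAT, 0644, $DB_HASH
        or die "Cannot tie seen.db: $!";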

--shmem
