http://qs1969.pair.com?node_id=148997


in reply to Save 2 files' diff as 3rd file

The file is small because of the many duplicate keys, it would seem.

The output is out of order because a hash does not preserve the order of key creation. You will need to keep a separate index of timestamps and sort by that if order is what you want. But even so, for a given log entry line with several duplicates, the hash will only pick up the last one. So the ideas of "all unique keys" and "chronological order" conflict. You could sort alphabetically easily enough, though.
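As a minimal sketch of the "separate index" idea (mine, not from the node, and assuming @all holds the log lines): keep an array alongside the hash so you get unique lines in first-seen order:

    my (%seen, @order);
    for my $line (@all) {
        push @order, $line unless $seen{$line}++;   # record each line the first time it appears
    }
    print "$_\n" for @order;    # unique lines, in first-seen (chronological) order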

Algorithm::Diff is going to do a diff, which tells you what changes are needed to turn one array into another. It does not compute an intersection of unique keys.
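To illustrate (a sketch, not the poster's code): diff() returns hunks of edits, not common keys:

    use Algorithm::Diff qw(diff);

    my @old = qw(a b c d);
    my @new = qw(a c d e);
    # each hunk is a group of edits; each edit is [ '+' or '-', index, element ]
    for my $hunk ( diff(\@old, \@new) ) {
        print "@$_\n" for @$hunk;   # prints "- 1 b" then "+ 3 e"
    }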

It is hard to see exactly what you want, since when you mention a log file I expect a timestamp on each line, which would make every line unique. So I'll suppose you have no timestamps to worry about, but that you do care about chronological order in each log file, and, I assume, that you want to subtract the elements of @err from @all.

You can dump @err into a hash (call it %errhash) just to speed up lookups, but you still need to step through each array, because even if an entry in %errhash matches one in @all, the matching occurrence might really be chronologically much later. So use the hash only for an exists test, and step through the array to maintain order. It is still a difficult problem, because you cannot easily know which segment of @err matches which part of @all. That is the problem addressed by the LCS (longest common subsequence) test in Algorithm::Diff.
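A sketch of that approach using Algorithm::Diff's object interface (untested, and assuming @all and @err hold the raw lines): keep only the hunks of @all that the LCS does not match against @err:

    use Algorithm::Diff;

    my $diff = Algorithm::Diff->new( \@all, \@err );
    my @result;
    while ( $diff->Next ) {
        # Items(1) are the current hunk's elements from @all;
        # skip the hunks the LCS matched to @err
        push @result, $diff->Items(1) unless $diff->Same;
    }
    print "$_\n" for @result;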

Of course, if you want to erase every appearance in @all of each element in @err, that is easier. One way to do it (untested) would be:

    my %allhash;
    my @result;
    $allhash{$_} = 1 foreach (@all);
    delete @allhash{@err};    # hash-slice delete: remove elements whose key is in @err
    foreach (@all) {
        push (@result, $_) if exists $allhash{$_};
    }
    print "$_\n" foreach (sort @result);
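And if you want chronological rather than alphabetical output, a compact variant (also untested) keeps @all's order by skipping the sort:

    my %errhash = map { $_ => 1 } @err;
    print "$_\n" for grep { !exists $errhash{$_} } @all;   # preserves @all's order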