in reply to Array of Hashes of Arrays with Counts of Unique Elements

jcrush:

Assuming you're parsing log files (naturally sorted by time), I'd suggest something like this:

    my ($prevDateHour, %curHour);
    while (<>) {
        my ($dateHour, $usrMac, $apMac) = parsit();
        if ($dateHour ne $prevDateHour) {
            for my $ap (sort keys %curHour) {
                print $prevDateHour, $ap, scalar(keys %{$curHour{$ap}}), "\n";
            }
            %curHour = ();
            $prevDateHour = $dateHour;
        }
        $curHour{$apMac}{$usrMac} = 1;
    }

(If they're not sorted, or you have multiple files, merge/sort them into a single file first.) This solution will work no matter *how* many hours your days have. ;^D

Update: Tweaked code a little (reset curHour, prevDateHour so reporting could work correctly).
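For reference, here is a self-contained sketch of the same streaming technique: a hash-of-hashes deduplicates user MACs per AP, and the outer hash is flushed and reset whenever the hour changes. The `parsit()` body and the "dateHour usrMac apMac" line format are hypothetical stand-ins (the original post leaves `parsit()` undefined); note this sketch also flushes once after the loop so the final hour gets reported.

```perl
use strict;
use warnings;

# Hypothetical parser: assumes whitespace-separated "dateHour usrMac apMac".
# Adjust to match your real log format.
sub parsit {
    return split ' ', $_[0];
}

# Stream through time-sorted lines; return "dateHour apMac count" report
# lines, where count is the number of unique user MACs seen on that AP
# during that hour.
sub hourly_counts {
    my @report;
    my $prevDateHour = '';
    my %curHour;                       # $curHour{$apMac}{$usrMac} = 1
    my $flush = sub {
        push @report, "$prevDateHour $_ " . scalar keys %{ $curHour{$_} }
            for sort keys %curHour;
    };
    for my $line (@_) {
        my ($dateHour, $usrMac, $apMac) = parsit($line);
        if ($dateHour ne $prevDateHour) {
            $flush->();                # report the hour that just ended
            %curHour     = ();
            $prevDateHour = $dateHour;
        }
        $curHour{$apMac}{$usrMac} = 1; # dedupe users per AP
    }
    $flush->();                        # don't forget the final hour
    return @report;
}

print "$_\n" for hourly_counts(
    '2014-04-25-09 aa:01 ap:1',
    '2014-04-25-09 aa:02 ap:1',
    '2014-04-25-09 aa:01 ap:1',   # duplicate user, counted once
    '2014-04-25-10 aa:03 ap:2',
);
```

Because only one hour's worth of data is held in memory at a time, this scales to arbitrarily large logs, as long as the input really is sorted.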

...roboticus

When your only tool is a hammer, all problems look like your thumb.

Replies are listed 'Best First'.
Re^2: Array of Hashes of Arrays with Counts of Unique Elements
by jcrush (Acolyte) on Apr 25, 2014 at 20:11 UTC

    Thank you, roboticus.

    Yes, I will be merging the files from 6 servers into either one large file, or reading them as multiple consecutive files via the diamond operator (<> over @ARGV).

    Thanks, jcrush