I'm splitting the line and using data2 as the key for a hash…
You can do this because each log record's <data2> is guaranteed to be unique and is therefore a viable key, right?
…and then pushing date, time, and data1 into an array that is the value…
So, to amplify Laurent_R's fine suggestion: you're already including in the hash values (i.e., the stored data) the timestamps that will serve as proper sort keys, and you can therefore use them to sort the records later with a Guttman Rosler Transform. You just need to ensure the sort-key timestamps are in ISO 8601 format (YYYY-MM-DD hh:mm:ss) instead of the format used in the logs. This ensures that when you sort the timestamps lexicographically (ASCIIbetically), they are ordered chronologically as well.
    # Parse the log record...
    m{^(\d\d)/(\d\d)/(\d\d\d\d) (\d\d:\d\d:\d\d) (\S+) (\S+)} or die;

    # Reassemble the timestamp in ISO 8601 order so it sorts chronologically
    my $timestamp = "$3-$1-$2 $4";
    my $data1     = $5;
    my $data2     = $6;

    my %myHash;    # in real code, declare this once, outside the per-record loop

    push @{ $myHash{$data2}{'info'} }, "$timestamp,$data1";
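To see why the ISO 8601 reordering matters, here's a tiny self-contained demonstration (the timestamps are made up purely for illustration):

    use strict;
    use warnings;

    my @raw = ('12/31/2010 23:59:59', '02/01/2011 00:00:01');

    # Rearrange each timestamp into YYYY-MM-DD hh:mm:ss
    my @iso = map { m{^(\d\d)/(\d\d)/(\d\d\d\d) (.*)} ? "$3-$1-$2 $4" : $_ } @raw;

    print "MM/DD/YYYY sort: ", join(' | ', sort @raw), "\n";  # Feb 2011 sorts before Dec 2010 -- wrong
    print "ISO 8601 sort:   ", join(' | ', sort @iso), "\n";  # chronological -- right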
And since it appears you intend to keep the sort key timestamps as data, you don't have to lop them off as you normally would in a Guttman Rosler Transform. So you won't really need to use a transform, per se, at all. You can just sort the records by their hash values.
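Putting it all together, here's a rough sketch of the whole flow. The file name, the exact record layout, and the report format are placeholders I've assumed, not anything from your post:

    use strict;
    use warnings;

    my %myHash;

    open my $log_fh, '<', 'server.log' or die "Can't open server.log: $!";
    while (my $record = <$log_fh>) {
        # Assumes records look like: MM/DD/YYYY hh:mm:ss data1 data2
        $record =~ m{^(\d\d)/(\d\d)/(\d\d\d\d) (\d\d:\d\d:\d\d) (\S+) (\S+)}
            or die "Unrecognized log record: $record";

        my $timestamp = "$3-$1-$2 $4";    # ISO 8601 order sorts chronologically
        my ($data1, $data2) = ($5, $6);

        push @{ $myHash{$data2}{'info'} }, "$timestamp,$data1";
    }
    close $log_fh;

    # No transform step needed: a plain lexicographic sort of the stored
    # "$timestamp,$data1" strings is already a chronological sort.
    for my $data2 (sort keys %myHash) {
        print "$data2\n";
        print "  $_\n" for sort @{ $myHash{$data2}{'info'} };
    }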
Jim