in reply to Assistance with Comma parse

A lot of the time, a good data structure is the key. In this case, use a hash of arrays (HoA) to group the records by node.

    use strict;
    use warnings;
    use Data::Dumper;

    open my $fh, '<', 'foo.dat' or die "Can't open foo.dat: $!";

    my $data = {};
    while (my $line = <$fh>) {
        chomp $line;
        my @columns = split /,/, $line;
        # group each record under its first column (the node name)
        push @{ $data->{ $columns[0] } }, [ $columns[1], $columns[2] ];
    }
    close $fh;

    print Dumper($data);

With the sample data you gave, here is the output:

    $VAR1 = {
      'AH2S21003' => [
        [ '2004-01-16 02:23:05.000000', 'ANE4987E Error processing ' ],
        [ '2004-01-16 02:24:05.000000', 'ANE4987E Error processing ' ],
        [ '2004-01-16 02:24:05.000000', 'ANE4987E Error processing ' ]
      ],
      'AH2D21001' => [
        [ '2004-01-15 22:57:32.000000', 'ANE4987E Error processing ' ],
        [ '2004-01-15 22:57:33.000000', 'ANE4987E Error processing ' ],
        [ '2004-01-15 22:57:34.000000', 'ANE4987E Error processing ' ]
      ],
      'ESI2A55P' => [
        [ '2004-01-16 04:21:43.000000', 'ANE4037E File Skipped ' ],
        [ '2004-01-16 04:25:43.000000', 'ANE4037E File Skipped ' ],
        [ '2004-01-16 04:27:43.000000', 'ANE4037E File Skipped' ]
      ],
      'ABHS00001' => [
        [ '2004-01-16 01:43:24.000000', 'ANE4987E Error processing ' ],
        [ '2004-01-16 01:46:24.000000', 'ANE4987E Error processing ' ],
        [ '2004-01-16 01:49:24.000000', 'ANE4987E Error processing ' ]
      ]
    };
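
Once the records are grouped like this, walking the structure node by node is straightforward. A quick sketch (the $data variable comes from the code above; the report layout is just my own illustration):

    for my $node (sort keys %$data) {
        my $entries = $data->{$node};
        print "$node (", scalar @$entries, " entries)\n";
        for my $entry (@$entries) {
            my ($timestamp, $message) = @$entry;
            print "    $timestamp  $message\n";
        }
    }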

Re: Re: Assistance with Comma parse
by Anonymous Monk on Jan 22, 2004 at 12:19 UTC

    That's a good way to store the data (I was going to reply and suggest a hashref of arrayrefs myself). I'd just like to add a caveat: be careful, because pulling an entire file's worth of data into memory can be a bad idea if the file is too big, especially if you're going to manipulate the data afterwards.

    A better bet may be to open a set of filehandles (one per key), append each record to its key's file, and then reread those files one at a time to reduce the total amount of memory you need.
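
    A minimal sketch of that approach, assuming the same foo.dat layout as above; the %fh hash and the tmp.<key> file names are only illustrative, not from the original post:

        use strict;
        use warnings;

        open my $in, '<', 'foo.dat' or die "Can't open foo.dat: $!";

        my %fh;    # one output filehandle per key
        while (my $line = <$in>) {
            my ($key) = split /,/, $line, 2;
            # open a temp file for this key on first sight
            # (tmp.$key is an assumed naming scheme)
            unless ($fh{$key}) {
                open $fh{$key}, '>', "tmp.$key"
                    or die "Can't open tmp.$key: $!";
            }
            print { $fh{$key} } $line;    # append the record to that key's file
        }
        close $_ for $in, values %fh;

        # Each tmp.* file can now be reread and processed one key at
        # a time, so only a single key's records are in memory at once.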

    -Dan