Re: framework for data parsing

by pc88mxer (Vicar)
on Jun 20, 2008 at 08:14 UTC


in reply to framework for data parsing

It's not too hard to create your own framework for this purpose. Basically you have an input source which produces data records and an output formatter which prints a data record to a file. A typical data-munging program will look like:
my $input  = ...create input source object...
my $output = ...create output formatter object...

while (my $data = $input->next) {
    # do something to $data
    $output->print($data);
}

The key decision will be what to use for your data record object. A good generic choice is to use a hash-ref.
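For instance, a record might be nothing more than this (field names invented for illustration):

# a data record is just a hash-ref keyed by field name
my $data = {
    account => '12345',
    name    => 'Example Customer',
    amount  => 42.50,
};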

Next, for each input file format (i.e. parse specification) you need to be able to create an input source object whose ->next method will produce the next data record (or undef if there aren't any more). Typically this will be done by reading a line from a file handle and parsing it according to some template or set of rules.
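As a sketch only (the class name and the colon-delimited format are made up for illustration), an input source might look like:

package RecordParser::Delimited;

# constructor: takes a file name and an array-ref of field names
sub new {
    my ($class, $source, $fields_ref) = @_;
    open my $reader, '<', $source or die "can't open $source: $!";
    return bless { IO => $reader, fields_ref => $fields_ref }, $class;
}

# next: read one line, split it, return a hash-ref (or undef at end of file)
sub next {
    my ($self) = @_;
    my $line = readline $self->{IO};
    return unless defined $line;
    chomp $line;
    my %value_of;
    @value_of{ @{ $self->{fields_ref} } } = split /:/, $line;
    return \%value_of;
}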

Finally, for each output format you need to be able to create an object which can 'print' a data record to a file.
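A matching sketch for the output side (again, the class name and the tab-separated layout are just one possible choice):

package RecordWriter::TSV;

# constructor: takes an output file name and the field order to print
sub new {
    my ($class, $dest, $fields_ref) = @_;
    open my $writer, '>', $dest or die "can't open $dest: $!";
    return bless { IO => $writer, fields_ref => $fields_ref }, $class;
}

# print: write one data record (a hash-ref) as a tab-separated line
sub print {
    my ($self, $data) = @_;
    print { $self->{IO} } join("\t", @{$data}{ @{ $self->{fields_ref} } }), "\n";
}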

And that's basically it. Once you have created all of the constructors for your input source and output formatter objects, you can freely mix and match them, enabling any algorithm to read from any input source and write in any format.
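Using the hypothetical classes sketched above, a conversion script is just a matter of picking one of each (file names are placeholders):

my @fields = qw(account name amount);
my $input  = RecordParser::Delimited->new('accounts.txt', \@fields);
my $output = RecordWriter::TSV->new('accounts.tsv', \@fields);

while (my $data = $input->next) {
    $output->print($data);
}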

Re^2: framework for data parsing
by Narveson (Chaplain) on Jun 20, 2008 at 21:34 UTC

    I concur wholeheartedly. Use objects and encapsulate the gory details.

    My earlier post left the details out in the open. Here's how those details fit into pc88mxer's framework.

    For each input file format (i.e. parse specification) you need to be able to create an input source object ...
    package RecordParser::FixedWidth;

    # constructor takes the name of the source file
    sub new {
        my ( $class, $source ) = @_;

        # ... open $source into $reader and build $template and @fields
        #     from the parse specification (those details are in my earlier post) ...

        my $obj = {
            IO         => $reader,
            template   => $template,
            fields_ref => \@fields,
        };
        bless $obj => $class;
    }
    ...whose ->next method will produce the next data record (or undef if there aren't any more).
    sub next {
        my ( $self ) = @_;
        my ($reader, $template, $fields_ref) = @$self{qw(IO template fields_ref)};
        my @fields = @$fields_ref;

        my $record = <$reader>;
        return unless defined $record;
    The key decision will be what to use for your data record object. A good generic choice is to use a hash-ref.
        # assumption: unpack the fixed-width line with the stored template
        # and map the resulting values onto the field names
        my %value_of;
        @value_of{@fields} = unpack $template, $record;

        return \%value_of;
    }
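    With those pieces in place, client code only ever sees the constructor and the next loop; roughly (the file name is just a placeholder):

    my $parser = RecordParser::FixedWidth->new('transactions.dat');
    while (my $record = $parser->next) {
        # $record is a hash-ref of field name => value,
        # ready to hand to any output formatter
    }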
      Thanks for the valuable suggestions. Another challenge with this problem is that the data files contain multi-line records: if a line (row) is a parent, it is followed by a number of component rows. I am thinking of extending the basic line parser to a multi-row parser that returns the complete parent record (with its component records attached) to the client on each next() call, along the lines of the sketch below. What do you think of this approach?

      Another issue is validation. I noticed that unpack fails silently if there is a problem parsing a line, so I would like to set up some sort of validation for the lines as well. That should probably go in the spec for the record and be specified as a regex pattern.
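      Roughly, I'm imagining something like this (the class name, the is_parent callback, and the 'components' key are placeholders, not settled design):

      package RecordParser::MultiRow;

      # wraps a line-level parser (e.g. RecordParser::FixedWidth);
      # the caller supplies a code-ref that decides whether a row is a parent
      sub new {
          my ($class, $line_parser, $is_parent_cb) = @_;
          return bless { parser => $line_parser, is_parent => $is_parent_cb }, $class;
      }

      # return one parent record with its component rows attached,
      # or undef when the input is exhausted
      sub next {
          my ($self) = @_;
          my $parent = delete $self->{pending};
          $parent = $self->{parser}->next unless defined $parent;
          return unless defined $parent;

          $parent->{components} = [];
          while (my $row = $self->{parser}->next) {
              if ( $self->{is_parent}->($row) ) {
                  $self->{pending} = $row;   # save the next parent for the next call
                  last;
              }
              push @{ $parent->{components} }, $row;
          }
          return $parent;
      }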
