http://qs1969.pair.com?node_id=693052

sam700 has asked for the wisdom of the Perl Monks concerning the following question:

Hi, I need to parse data from many data sources which provide their data files in different formats, along with a specification for parsing the data. The data gets parsed into different formats as required by our systems. So there is:
- Input data
- Parse specification
- Output format
- Output data
The specification looks something like: fieldA is from character 9 to 14, fieldB is ... The files are all text files. Since this seems to be a common problem, I am wondering if there is a small framework out there I could use for this purpose. I can extend it to adapt to my own problems. But the idea is to make this as extensible and manageable as possible, since there can be new data sources, changes in specification or new specifications, and changes in output format. Any suggestions are most welcome. Thank you, Sam

Replies are listed 'Best First'.
Re: framework for data parsing
by ady (Deacon) on Jun 20, 2008 at 05:00 UTC
    Some good design patterns for tasks of this kind can be found in : Data Munging with Perl
    Best Regards --
    Allan Dystrup
Re: framework for data parsing
by Narveson (Chaplain) on Jun 20, 2008 at 04:26 UTC

    To begin with your concrete example ...

    The specification looks something like -- fieldA is from character 9 to 14
    my $TEMPLATE = '@8A6';    # oops, originally posted without the quotes
    while (<DATA>) {
        my @fields = unpack $TEMPLATE, $_;
        # for output try pack or printf
    }

    This generalizes. The template for unpack should be machine-generated, to avoid off-by-one errors and other typos.

    In what follows, let's suppose you have collected a list of column specifications. Each specification tells you

    • a field name,
    • an offset, and
    • the width of the field in your fixed-width extract.
    You might get this from a config file of some sort, or as the result set from a database query, if you happen to have saved your parse specifications in a database table.
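    A minimal sketch of the config-file route (the three-column spec format here is made up for illustration; a real config might be CSV, INI, or anything else):

    use strict;
    use warnings;

    # Hypothetical spec format: one "field offset width" triple per line.
    my @column_specs = map { my @f = split; { field => $f[0], offset => $f[1], width => $f[2] } }
                       split /\n/, <<'SPEC';
    name   0 10
    city  10  8
    zip   18  5
    SPEC

    # Machine-generate the unpack template -- no hand-counted offsets.
    my ($template, @fields) = ('');
    for my $spec (@column_specs) {
        $template .= "\@$spec->{offset}A$spec->{width}";
        push @fields, $spec->{field};
    }
    # $template is now '@0A10@10A8@18A5'

    my $line = 'sam       Perlton 12345';
    my %rec;
    @rec{@fields} = unpack $template, $line;
    # the A format strips trailing spaces, so $rec{city} is 'Perlton'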

    use DBI;
    my $dbh = ...
    my $sth = $dbh->prepare(
        'SELECT field, offset, width'
        . ' FROM Source_Field'
        . ' WHERE source = ?;'
    );
    my $source = 'input_file.txt';
    $sth->execute($source);

    my $template;
    my @fields;
    while (my $column_spec = $sth->fetchrow_hashref() ) {
        my ($field, $offset, $width) = @$column_spec{qw(field offset width)};
        $template .= "\@${offset}A$width";
        push @fields, $field;
    }

    open my $reader, '<', $source;
    while (<$reader>) {
        my %value_of;
        my @values = unpack($template, $_);
        @value_of{@fields} = @values;
        # you've got your current record in a hash
        # print it or save it somewhere
    }
      Bareword found where operator expected at 693061.pl line 1, near "@8A6" (Missing operator before A6?)
      syntax error at 693061.pl line 1, near "@8A6"
      Execution of 693061.pl aborted due to compilation errors.

      ( I'm paraphrasing your earlier reply. Normally, I would just have sent a message for such a small oversight. )

Re: framework for data parsing
by pc88mxer (Vicar) on Jun 20, 2008 at 08:14 UTC
    It's not too hard to create your own framework for this purpose. Basically you have an input source which produces data records and an output formatter which prints a data record to a file. A typical data-munging program will look like:
    my $input  = ...create input source object...
    my $output = ...create output formatter object...
    while (my $data = $input->next) {
        # do something to $data
        $output->print($data);
    }

    The key decision will be what to use for your data record object. A good generic choice is to use a hash-ref.

    Next, for each input file format (i.e. parse specification) you need to be able to create an input source object whose ->next method will produce the next data record (or undef if there aren't any more). Typically this will be done by reading a line from a file handle and parsing it according to some template or set of rules.

    Finally, for each output format you need to be able to create an object which can 'print' a data record to a file.

    And that's basically it. Once you have created all of the constructors for your input source and output formatter objects, you can freely mix and match them, enabling any algorithm to read from any input source and write in any format.
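    The output side is never spelled out in this thread, so here is one possible sketch of a formatter object with a ->print method (the package name RecordWriter::Delimited and its delimiter convention are made up for illustration):

    use strict;
    use warnings;

    package RecordWriter::Delimited;

    # constructor takes an open filehandle, the field order, and a separator
    sub new {
        my ($class, $fh, $fields_ref, $sep) = @_;
        return bless { fh => $fh, fields => $fields_ref, sep => $sep // "\t" }, $class;
    }

    # print one hash-ref data record as a delimited line
    sub print {
        my ($self, $data) = @_;
        my $fh = $self->{fh};
        print {$fh} join($self->{sep}, map { $data->{$_} // '' } @{ $self->{fields} }), "\n";
    }

    package main;

    # usage: write one record to an in-memory "file"
    open my $out, '>', \my $buffer or die $!;
    my $writer = RecordWriter::Delimited->new($out, [qw(name city)], ',');
    $writer->print({ name => 'sam', city => 'Perlville' });
    close $out;
    # $buffer now holds "sam,Perlville\n"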

      I concur wholeheartedly. Use objects and encapsulate the gory details.

      My earlier post left the details out in the open. Here's how those details fit into pc88mxer's framework.

      For each input file format (i.e. parse specification) you need to be able to create an input source object ...
      package RecordParser::FixedWidth;

      # constructor takes the name of the source file
      sub new {
          my ( $class, $source ) = @_;

          # ... build $template and @fields, and open $reader, as in my earlier example ...

          my $obj = {
              IO         => $reader,
              template   => $template,
              fields_ref => \@fields,
          };
          bless $obj => $class;
      }
      ...whose ->next method will produce the next data record (or undef it there isn't any more.)
      sub next {
          my ( $self ) = @_;
          my ($reader, $template, $fields_ref) = @$self{qw(IO template fields_ref)};
          my @fields = @$fields_ref;

          my $record = <$reader>;
          return unless defined $record;

      The key decision will be what to use for your data record object. A good generic choice is to use a hash-ref.

          my %value_of;
          @value_of{@fields} = unpack $template, $record;
          return \%value_of;
      }
        Thanks for the valuable suggestions. Another challenge in this problem is that the data files contain multi-line records. For example, if a line or row is a parent, it is followed by a number of component rows. I am thinking of extending the basic data parser to a multi-row parser that returns the complete parent record (containing its component records) to the client on each next() call. What do you think about this approach? Another issue is validation. I noticed that unpack fails silently if there is a problem in parsing, so I would like to set up some sort of validation for the lines too. That should probably go in the spec for the record and be specified as a regex pattern.
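        One way the multi-row next() and the per-line validation regexes could fit together, sketched with a made-up record layout (parent rows start with 'P', component rows with 'C'; the %valid patterns are invented for illustration):

        use strict;
        use warnings;

        my %valid = (
            P => qr/^P\w{6}/,    # a parent row: 'P' plus a 6-character key
            C => qr/^C\d{4}/,    # a component row: 'C' plus a 4-digit sequence
        );

        sub next_parent {
            my ($parser) = @_;              # $parser->{IO} is the filehandle;
            my $fh = $parser->{IO};         # $parser->{pending} buffers one lookahead row
            my $line = delete $parser->{pending} // <$fh>;
            return unless defined $line;
            chomp $line;
            die "invalid parent row: $line" unless $line =~ $valid{P};

            my $record = { parent => $line, components => [] };
            while (defined(my $next = <$fh>)) {
                chomp $next;
                if ($next =~ /^C/) {
                    die "invalid component row: $next" unless $next =~ $valid{C};
                    push @{ $record->{components} }, $next;
                }
                else {
                    $parser->{pending} = $next;    # first row of the next record
                    last;
                }
            }
            return $record;
        }

        # usage with an in-memory file
        open my $fh, '<', \<<'ROWS' or die $!;
        PABCDEF
        C0001
        C0002
        PGHIJKL
        C0003
        ROWS
        my $parser = { IO => $fh };
        my $first = next_parent($parser);    # parent PABCDEF with two components

        The one-line lookahead buffer is the key design choice: the parser has to read one row too far to know the parent record is complete, and stashing that row in $parser->{pending} lets the next call pick it up.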
Re: framework for data parsing
by metaperl (Curate) on Jun 23, 2008 at 14:53 UTC
    But the idea is to make this as extensible and manageable as possible since there can be new data sources, change in specification or new specification, and change in output format. Any suggestions are most welcome. thank you, Sam
    My comments are this:
    1. I think it is best to abstract from something that is working, rather than create something abstractly satisfying that you then need to get practical results from. In other words, I would just get some sources parsed, allow the patterns to show themselves, and then abstract that into a framework as a framework is needed. Also, I hope that a friend of mine can dig up the link to a CPAN data processing framework that I once saw... the only one I found via google was Data::Consumer
    2. There are several modules for parsing fixed-width (fixed-length) data:
      1. Parse::FixedLength
      2. Text::FixedLength
      3. DataExtract::FixedWidth