in reply to Parsing a Formatted Text File

Pharazon:

I'd suggest that rather than relying on a fixed number of lines, you find features in the file that you can detect and verify. Then you can parse the file without depending on the line count. That way, if someone gets a letter with an unexpectedly large number of line-item details, you won't lose the data.

For example, if the line that looks like an account number is the first interesting line in each record, you could write a routine that gathers lines into a record, splitting wherever it sees an account-number line, something like this:

    sub get_record {
        my $FH = shift;
        state $previous_account;    # requires: use feature 'state';
        my @record;
        while (my $line = <$FH>) {
            if ($line =~ /^\s{1,10}\d{8}-\d{14}\s*$/) {
                # Found an account number
                if (!defined $previous_account) {
                    # It's the first one, so just remember it and continue
                    $previous_account = $line;
                }
                else {
                    # Add the account number to the start of the record
                    unshift @record, $previous_account;
                    # Save the current account number for the next record
                    $previous_account = $line;
                    return @record;
                }
            }
            else {
                push @record, $line;
            }
        }
        # Be sure to return the final record when the file ends, too
        return unless defined $previous_account;
        unshift @record, $previous_account;
        undef $previous_account;    # so the next call returns an empty list
        return @record;
    }

If the file is from a mainframe, then there may also be page-control characters (such as form feeds) that you could use to split the file apart.

...roboticus

When your only tool is a hammer, all problems look like your thumb.