This is a very common situation when "parsing" text files that contain multi-line "records". In general, there are three basic methods of attack:
- Read the file one line at a time, use branching regex logic (if (/this/) {...} elsif (/that/) {...} ...), and maintain a "state" variable if necessary to manage what should be done with each line as you read it (there's a short sketch of this after the list). Sometimes this can get very tricky, and you'll hear some monks say they don't like this approach on general principles.
- If the records are consistently and unambiguously separated by some constant string pattern (e.g. blank lines, lines of "----" or "====", etc.), set Perl's input-record-separator variable ($/) to that string (e.g. "\n\n", "----\n" or "====\n"); then use the normal while (<>) loop to go through the file one record at a time (see the second sketch below).
- Set $/ = undef, read the entire content of the file as a single scalar string, and use regex matches and/or split() to break it up or play with record contents. (Of course, this may not work when the input file is too big to fit in available memory.)
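Here is a minimal, untested sketch of the first approach. It borrows the tokens from the snippet further down, so it assumes each record starts with a line matching /^header/ and continuation lines start with "+"; adjust the patterns and the process() placeholder to fit your real data:

my @record;
while (my $line = <>) {
    chomp $line;
    if ($line =~ /^header/) {                 # a new record begins
        process(\@record) if @record;         # finish the previous one
        @record = ($line);
    }
    elsif ($line =~ /^\+/ and @record) {      # continuation line: fold it in
        $record[-1] .= ' ' . substr($line, 1);
    }
    else {
        push @record, $line;
    }
}
process(\@record) if @record;                 # don't forget the last record

sub process {
    my ($rec) = @_;
    print join("\n", @$rec), "\n\n";          # placeholder: just print it
}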
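And a sketch of the second approach, assuming (purely for illustration) that records are separated by lines of "====":

{
    local $/ = "====\n";                      # one record per read
    while (my $record = <>) {
        chomp $record;                        # chomp strips $/, i.e. the separator
        next unless $record =~ /\S/;          # skip empty records
        # ... work on one complete multi-line record here ...
        print "RECORD:\n$record\n";
    }
}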
Abigail's suggestion would work best using the third approach:
$/ = undef;
$_ = <>;
s/(\n)(header)/$1.$2/g; # (updated: made sure not to lose \n)
s/\n\+/ /g; # (updated: forgot to escape the "+")
print;
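(If you'd rather not leave $/ undefined for the rest of the program, localize it in a do-block instead: my $text = do { local $/; <> };)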
The list above is ranked from hardest to easiest, and your choice of method should simply be a matter of picking the easiest one that can do what needs to be done.