DAVERN has asked for the wisdom of the Perl Monks concerning the following question:
I'm a beginner in Perl and can't find a way to skip some lines (the headers and the table name) and do something else with the rest of the lines (print them with some modifications). Could someone help me?
Data as follows:

```
TABLE NAME

HEAD0   HEAD1   HEAD2

DATA00  DATA10  DATA20
DATA01  DATA11  DATA21
END
```
I need the following result:
xxx=DATA00, xxx=DATA10, xxx=DATA20;
Re: text processing
by Limbic~Region (Chancellor) on Apr 22, 2014 at 17:23 UTC
There are many ways to do this, because one of Perl's mottos is "there is more than one way to do it." The first method is probably the most common and the easiest to think of: at the top of the loop, skip the lines that you don't want.
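A minimal sketch of that approach, assuming the input layout shown in the original post (the file name and the skip patterns are illustrative, not Limbic~Region's original code):

```perl
use strict;
use warnings;

open my $fh, '<', 'file1.log' or die "Can't read file1.log: $!";
while (my $line = <$fh>) {
    # Skip the lines we don't want.
    next if $line =~ /^TABLE NAME/;
    next if $line =~ /^HEAD/;
    next if $line =~ /^END/;
    next if $line =~ /^\s*$/;      # blank lines
    # ... process the remaining DATA lines here ...
}
close $fh;
```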
Another common first step might be to throw away the first N lines:
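Again a hedged sketch, assuming the first four lines (table name, header line, and the blank lines between them) are the ones to discard:

```perl
use strict;
use warnings;

open my $fh, '<', 'file1.log' or die "Can't read file1.log: $!";
my $skip = 4;                      # assumed: table-name line, header line, and blanks
<$fh> for 1 .. $skip;              # read and throw away the first N lines
while (my $line = <$fh>) {
    last if $line =~ /^END/;
    # ... process $line ...
}
close $fh;
```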
Sometimes it gets more complicated and you need to check a state variable across multiple lines. I won't give you an example of that abstract case, but I will show you what some people do; see the first sketch below. It has a micro-optimization which you should avoid unless you need it: essentially, it avoids paying the penalty of checking on every line whether we are in the good data, and starts a new loop that does only the processing we care about.

The final method I will share is to extract or eliminate what you don't want; see the second sketch below.

As you can see, there are many ways to do what you are looking to accomplish. If you don't understand something, please ask.

Cheers - L~R
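A hedged sketch of the state-variable approach with the micro-optimization described above (the patterns and the process_line helper are assumptions based on the sample input, not Limbic~Region's original code):

```perl
use strict;
use warnings;

open my $fh, '<', 'file1.log' or die "Can't read file1.log: $!";
while (my $line = <$fh>) {
    next unless $line =~ /^DATA/;   # wait until we reach the good data
    process_line($line);
    # Micro-optimization: switch to an inner loop that no longer re-checks
    # on every line whether we are inside the good data.
    while (my $inner = <$fh>) {
        last if $inner =~ /^END/;
        next if $inner =~ /^\s*$/;
        process_line($inner);
    }
    last;
}
close $fh;

sub process_line {
    my ($line) = @_;
    my @fields = split ' ', $line;
    print join(', ', map { "xxx=$_" } @fields), ";\n";
}
```

And a hedged sketch of extracting only the part you want, here by slurping the file and capturing everything between the header line and END:

```perl
use strict;
use warnings;

open my $fh, '<', 'file1.log' or die "Can't read file1.log: $!";
my $text = do { local $/; <$fh> };   # slurp the whole file
close $fh;

# Keep only the block between the header line and the END marker.
my ($body) = $text =~ /^HEAD0[^\n]*\n(.*?)^END/ms;
for my $line (split /\n/, $body // '') {
    next if $line =~ /^\s*$/;
    my @fields = split ' ', $line;
    print join(', ', map { "xxx=$_" } @fields), ";\n";
}
```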
by DAVERN (Initiate) on Apr 22, 2014 at 17:49 UTC
Hi Limbic~Region, I did it in two separate programs. In the first one I delete the lines I don't use from the file and generate a new file; in the second program I process the rest of the text. I want to join them, but I can't find the way.

First program:

```perl
my $output = 'output.txt';
open my $outfile, '>', $output or die "Can't write to $output: $!";
my @array = read_file('file1.log');    # read_file() presumably from File::Slurp
for (@array) {
    next if ($_ =~ /^\TABLE NAME|HEAD0|END|^\s+$/);
    print $outfile $_;
}
```

Second program:

```perl
open my $IN, '<', 'output.txt' or die $!;
my @lines = <$IN>;
close $IN;

open my $OUT, '>', 'file2.txt' or die $!;
for my $line (@lines) {
    chomp $line;
    my @data = split /\s+/, $line;
    print {$OUT} "xxxxx", $data[0], "yyy", $data2, ";", "\n";
}
close $OUT;
```

I have no idea how to do it all in only one program.

BR
by InfiniteSilence (Curate) on Apr 22, 2014 at 18:27 UTC
Your focus appears to be all wrong. If you are looking for something specific in a file, why not just select that thing? One way to do that is sketched below, along with the output it produces.
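A hedged sketch of that idea (not necessarily InfiniteSilence's original code): pull the DATA tokens out of each line with a match and ignore everything else. The __DATA__ section mirrors the sample from the original post.

```perl
#!/usr/bin/perl
use strict;
use warnings;

while (my $line = <DATA>) {
    my @hits = $line =~ /(DATA\d+)/g;   # select only the tokens we care about
    next unless @hits;
    print join(', ', map { "xxx=$_" } @hits), ";\n";
}

__DATA__
TABLE NAME

HEAD0   HEAD1   HEAD2

DATA00  DATA10  DATA20
DATA01  DATA11  DATA21
END
```

With that sample data it produces:

```
xxx=DATA00, xxx=DATA10, xxx=DATA20;
xxx=DATA01, xxx=DATA11, xxx=DATA21;
```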
Celebrate Intellectual Diversity
Re: text processing
by davido (Cardinal) on Apr 22, 2014 at 19:34 UTC
The problem seems simple enough, but it isn't specified completely enough for a complete solution that doesn't involve a bit of lucky guessing. Is there another record that comes after END, for example? Is the number of fields the same for each row? Are there always two rows per record? At minimum, it does appear that you're dealing with fixed-width fields, and that you want to skip the first four lines. It's not clear to me what you want to have happen after "END" (continue on to a new record, or stop?). And will that next record have its own headers? Will it have the same format as the first record?

For fixed-width fields, you might want to use unpack, as in my @fields = unpack '(a7x)2a7', $_;, for example. This will have to come after whatever logic you use to disqualify some lines. That logic might look like this:
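A hedged sketch of such disqualification logic, assuming a single record in the layout shown in the original post (the file name is an assumption):

```perl
use strict;
use warnings;

open my $fh, '<', 'file1.log' or die "Can't read file1.log: $!";
while (my $line = <$fh>) {
    next if $. <= 4;               # skip the leading table-name/header lines ($. is the input line number)
    last if $line =~ /^END/;       # stop at the END marker
    my @fields = unpack '(a7x)2a7', $line;
    # ... do something with @fields ...
}
close $fh;
```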
This would change a bit if there is more than one record you're interested in. You might incorporate the flip-flop operator, like this:
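A hedged sketch of that flip-flop approach (the patterns and the @recs handling are assumptions, not davido's original code). The range is true from each HEAD0 line through the matching END line:

```perl
use strict;
use warnings;

open my $fh, '<', 'file1.log' or die "Can't read file1.log: $!";

my @recs;
while (my $line = <$fh>) {
    # Flip-flop: true from a HEAD0 line through the next END line.
    if (my $range = ($line =~ /^HEAD0/ .. $line =~ /^END/)) {
        next if $range =~ /E0$/;        # the END line that closes the range
        next if $range == 1;            # the header line that opens it
        next if $line =~ /^\s*$/;       # blank lines inside the record
        push @recs, [ unpack '(a7x)2a7', $line ];
    }
}
close $fh;
```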
(Updated to demonstrate pushing records onto a "@recs" array.)

Dave
by sundialsvc4 (Abbot) on Apr 23, 2014 at 02:15 UTC
Actually, David, in this case I do believe that there’s enough information here to point to a classic, awk-inspired solution. The “set of records of-interest” is clearly bounded by an identifiable “start” and “end” record, and, within that space, the set of records which contain information-of-interest is readily identifiable. Thus, logic could be written, I think, based only on the file-example presented in the original post. And this logic would basically be in keeping with the metaphor that the awk tool already employs. (Which means, of course, that a very short Perl program could also do the same; see the one-liner sketch below.)
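As a hedged illustration of how short such a Perl program could be (the patterns and the file name are assumptions based on the sample input, not something sundialsvc4 posted), a command-line one-liner that prints the DATA lines between "TABLE NAME" and "END" in the requested format:

```perl
perl -ne 'print join(", ", map { "xxx=$_" } split), ";\n" if /^TABLE NAME/ .. /^END/ and /^DATA/' file1.log
```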
Re: text processing
by kcott (Archbishop) on Apr 23, 2014 at 13:10 UTC | |
G'day DAVERN,

Welcome to the monastery.

You originally wrote: "i need the next result ... xxx=DATA00, xxx=DATA10, xxx=DATA20;". This is easy:
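A hedged sketch of such a solution (not necessarily kcott's original code; the __DATA__ section mirrors the sample from the original post):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

while (<DATA>) {
    next unless /^DATA/;                       # only the data rows matter here
    print join(', ', map { "xxx=$_" } split), ";\n";
}

__DATA__
TABLE NAME

HEAD0   HEAD1   HEAD2

DATA00  DATA10  DATA20
DATA01  DATA11  DATA21
END
```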
Output:
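With the assumed sample data, the sketch above prints:

```
xxx=DATA00, xxx=DATA10, xxx=DATA20;
xxx=DATA01, xxx=DATA11, xxx=DATA21;
```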
You then showed some unformatted code: "print {$OUT} "xxxxx", $data[0], "yyy", $data2,";","\n";". Guessing that's supposed to be:
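A guess at the corrected line, reading $data2 as a typo for the array element $data[2] (that index is an assumption; adjust it to whichever field you actually want):

```perl
print {$OUT} "xxxxx", $data[0], "yyy", $data[2], ";", "\n";
```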
This is only slightly less easy. Just change the print line to:
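For instance (again assuming $data[2] is the field you want), replace the print line in the sketch above with:

```perl
my @data = split;
print "xxxxx$data[0]yyy$data[2];\n";
```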
Output:
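With the same assumed sample data, that variant prints:

```
xxxxxDATA00yyyDATA20;
xxxxxDATA01yyyDATA21;
```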
Please give careful consideration to what you are attempting to achieve before posting. You'll find monks may be less inclined to help if you keep changing what you want.

Your code for opening files seems absolutely fine. Change <DATA> to <$your_input_filehandle> and print ... to print {$your_output_filehandle} ... and that should do what you want.

-- Ken
Re: text processing (csv)
by Anonymous Monk on Apr 22, 2014 at 17:24 UTC
Re: text processing
by sundialsvc4 (Abbot) on Apr 22, 2014 at 18:22 UTC
This sort of problem is, actually, extremely common. It is, in fact, the inspiration for the awk tool that was one of the original inspirations for Perl. In general, problems like this one can be solved “text line by text line,” and can be reduced, algorithmically speaking, to four cases, all of which can (somehow) be recognized by the contents of the line (and/or by “beginning of file” and/or “end of file”), as sketched below.

Your immediate requirement could actually be addressed entirely by awk, and nothing else, and perhaps for this very reason you might elect to do so. In any case, the awk man-page should now be read carefully and thoroughly.
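A hedged Perl sketch of that line-by-line structure; the four cases shown here (start marker, end marker, data line, everything else) are an illustration based on the sample input from the original post, not code from this reply:

```perl
use strict;
use warnings;

my $in_table = 0;
while (my $line = <>) {
    if ($line =~ /^TABLE NAME/) {             # case 1: a line that starts the region of interest
        $in_table = 1;
    }
    elsif ($line =~ /^END/) {                 # case 2: a line that ends it
        $in_table = 0;
    }
    elsif ($in_table and $line =~ /^DATA/) {  # case 3: a data line inside the region
        my @fields = split ' ', $line;
        print join(', ', map { "xxx=$_" } @fields), ";\n";
    }
    # case 4: anything else (header line, blank lines) is ignored
}
```

Run it as, for example, perl script.pl file1.log.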