http://qs1969.pair.com?node_id=109693


in reply to Perl embedded in Kornshell

You can't just stick perl in front of a shell script. Why not just make the whole thing a Perl script? The only thing you're doing in shell is passing the contents of two files to be acted upon, and then redirecting the output to another file. Pick a language. Do it with either perl or ksh.

In Perl (since this is Perl Monks) just open each file:
open(FILE1, "/home/spjal/files/bprcf.parms") || die "Dead on file open. $!\n";

I don't know how the contents are arranged in your files, but suck out its tasty fillin':

my ($scheck, $echeck);
while (<FILE1>) {
    chomp;
    ($scheck, $echeck) = split;   # or whatever.
}
close(FILE1);   # Be tidy.
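Here is the same idea as a self-contained sketch you can run. The file contents are an assumption (two whitespace-separated numbers on one line), so the sketch reads from an in-memory string instead of the real /home/spjal/files/bprcf.parms:

```perl
#!/usr/local/bin/perl -w
use strict;

# Stand-in for the real parms file; the two-field layout is an
# assumption for illustration.
my $parms = "100 250\n";

open(FILE1, "<", \$parms) || die "Dead on file open. $!\n";
my ($scheck, $echeck);
while (<FILE1>) {
    chomp;
    ($scheck, $echeck) = split;   # splits on whitespace by default
}
close(FILE1);   # Be tidy.
print "start=$scheck end=$echeck\n";
```

Swap the scalar reference for the real filename and the rest of the loop stays the same.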

Do the same to the other input file, twist and spindle the data to your nefarious ends, and then open an output file:

open(OUTFILE, ">/home/spjal/files/bprcf.tmp") || die "Dead on file open. $!\n";
print OUTFILE "Important results.\n";

You have many more issues to deal with regarding your Perl syntax. You're omitting semi-colons at the end of each statement. Arguments are passed via the special @ARGV array, not the "numbered" variables. $1, $2, etc. do something else in Perl. Instead of else if, you use elsif. Try the handy Tutorials section.
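To make those syntax points concrete, here is a small sketch contrasting the shell habits with the Perl way. The argument names and defaults are invented for illustration:

```perl
#!/usr/local/bin/perl -w
use strict;

# Command-line arguments live in @ARGV, not in $1 and $2
# ($1, $2, etc. are regex capture variables in Perl).
# Note also `elsif` rather than `else if`, and the semicolon
# ending each statement.
my ($first, $second) = @ARGV;          # e.g. myscript.pl 5 10
$first  = 5  unless defined $first;    # defaults so the sketch runs standalone
$second = 10 unless defined $second;

if ($first > $second) {
    print "$first is larger\n";
}
elsif ($first < $second) {
    print "$first is smaller\n";
}
else {
    print "they are equal\n";
}
```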

Replies are listed 'Best First'.
Re: Re: Perl embedded in Kornshell
by JALandry (Initiate) on Sep 03, 2001 at 22:07 UTC
    The example was awk, not perl. I have since converted the awk to perl, but I am getting a syntax message I am stuck on. Something about a right bracket missing, even though all my brackets match/balance. The chunk of code shown is just part of a much, much larger interactive script written in kornshell with some embedded awk. In this (sub)procedure, I had to actually "cut" the input file down to run through the awk code shown in order for it to process it, because awk was telling me the record length was too long. Subsequently, I had to run the output of the awk procedure through a 'grep' to put the records selected back together again. The file is about 5000 records, and each record is a little over 7000 bytes long. The grep was taking forever when the range passed to the awk procedure was long. So, I am trying to convert the awk to perl to overcome the record length problem, and get rid of the 'grep' to speed up the process. The process writes out selected records from the input file, based on whether an identifier is within the range passed on the first record as a set of 'parms'. I am not a perl programmer (yet!) so if you care to help me out, I will be glad to e-mail you the code snippet, since I do not know how to format code in this site's text boxes. Thanks again, Jim

      OK.

      First, you seemed to have confused everyone in your original post. We can proceed now that we all have a bit more information. Second, you should check out the Site How To for formatting tips. Just put your code in <code>...</code> tags. Third, my awk is really rusty so let me know if I misread your code.

      Your file is only 33M so perl should have no problem provided you have some semblance of memory available to you. So can we assume that you do not need to do any pre-cutting?
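To see that the record length itself is a non-issue, here is a tiny sketch: Perl strings and input lines have no fixed record-length limit, so a 7000-byte record is handled without complaint (the record content is fabricated for illustration):

```perl
#!/usr/local/bin/perl -w
use strict;

# Build one 7000-byte record in memory; reading lines this long
# from a file works the same way, with no awk-style length limit.
my $record = "x" x 7000;
print length($record), "\n";
```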

      In awk the input file is automatically read, parsed and acted upon. In Perl you have to explicitly tell perl how you'd like your file served. So when you say:

      cat /home/spjal/files/bprcf.parms /home/spjal/files/temp | perl '
      BEGIN { linectr=0 }
      { linectr++

      we would say:

      #!/usr/local/bin/perl -w
      use strict;

      my $infile  = "/path/to/infile";
      my $outfile = "/path/to/outfile";

      open(INFILE, $infile)      || die "Died opening $infile. $!\n";
      open(OUTFILE, ">$outfile") || die "Died opening $outfile. $!\n";

      my $linectr = 0;
      my $fline;
      my $lline;
      while (<INFILE>) {
          chomp;   # Remove the newline.

      The while magic will now pull in the file line by line. I hope I understand correctly that the first line contains the four parms. Then:

          if (++$linectr == 1) {
              my ($scheck, $echeck, $fcheck, $lcheck) = split;   # Splits on whitespace by default.

      Since I don't know the data you're processing I can't second guess your logic, so I'll Perl-ize the next bit:

              if ($fcheck == $scheck) { $fline = 2 }
              else { $fline = ($fcheck - $scheck) + 3 }
              if ($lcheck == $echeck) { $lline = 999999999 }
              else { $lline = ($lcheck - $scheck) + 3 }
          }
          elsif ($linectr == 2) {
              print OUTFILE "$_\n";   # $_ holds the line.
          }
          elsif ($linectr <= $lline && $linectr >= $fline) {
              print OUTFILE "$_\n";
          }
      }
      close(INFILE);
      close(OUTFILE);
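      Pieced together, the fragments above form a stand-alone script along these lines. This sketch runs against in-memory sample data so it is self-contained; swap the scalar references for the real filenames in practice. The parms layout (four whitespace-separated numbers on the first record) is an assumption taken from this thread:

```perl
#!/usr/local/bin/perl -w
use strict;

# Sample input: line 1 carries the four parms (scheck echeck fcheck lcheck),
# line 2 is a header that is always kept, and the rest are data records.
my $data = "100 105 101 103\nHEADER\nrec3\nrec4\nrec5\nrec6\nrec7\n";
my $out  = "";

open(INFILE,  "<", \$data) || die "Died opening input. $!\n";
open(OUTFILE, ">", \$out)  || die "Died opening output. $!\n";

my $linectr = 0;
my ($fline, $lline);
while (<INFILE>) {
    chomp;
    if (++$linectr == 1) {
        my ($scheck, $echeck, $fcheck, $lcheck) = split;
        $fline = ($fcheck == $scheck) ? 2 : ($fcheck - $scheck) + 3;
        $lline = ($lcheck == $echeck) ? 999999999 : ($lcheck - $scheck) + 3;
    }
    elsif ($linectr == 2 || ($linectr >= $fline && $linectr <= $lline)) {
        print OUTFILE "$_\n";   # keep the header and the in-range records
    }
}
close(INFILE);
close(OUTFILE);
print $out;
```

With these parms, fline works out to 4 and lline to 6, so the header plus records 4 through 6 are written out.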

      That's what I understand from your response. I've written this as a stand-alone script that can be called from your shell script. You can also run it inline by replacing the hash-bang line with | perl -e '.... Please feel free to post more information or questions.