Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Currently a ported version of our C-Shell system has been made using Perl. The problem is that our C-Shell scripts would call each other and pass along the initial STDIN without anything special being done. Example:

    callingcshell.csh < myval1 myval2 myval3

These myvals would then be consumed, one each, by whatever read was waiting on STDIN, namely the menus. Under csh this would work:

    in callingcshell.csh:
        myshell1.csh      ( 1 STDIN request )
        myshell2.csh      ( 1 STDIN request )
    in myshell1.csh:
        myshell1_1.csh    ( 1 STDIN request )

Each of these, depending on the order in which it was called, would get one value; i.e., myshell1.csh got myval1, myshell1_1.csh got myval2, and myshell2.csh got myval3. Under Perl, however (.prl is the extension we use), only myshell1.prl (the equivalent of myshell1.csh) gets a value; the rest get nothing. Is there anything I can set or do to get Perl to behave like this?

Replies are listed 'Best First'.
Re: Perl Cascading Calls, Losing STDIN?
by bbfu (Curate) on Jun 14, 2004 at 18:42 UTC

    I'm not entirely sure I know what you're asking. I've included some test code and output below to hopefully clarify.

        [johnsca@cory johnsca]# cat tst.pl
        #!/usr/bin/perl
        my $run = $ARGV[0] + 1;
        chomp(my $val = <STDIN>);
        print "Test #$run: '$val'\n";
        system('./tst.pl', $run) unless $run >= 3;
        [johnsca@cory johnsca]# ./tst.pl
        foo
        Test #1: 'foo'
        bar
        Test #2: 'bar'
        baz
        Test #3: 'baz'
        [johnsca@cory johnsca]# cat tst.inp
        foo
        bar
        baz
        quux
        [johnsca@cory johnsca]# ./tst.pl < tst.inp
        Test #1: 'foo'
        Test #2: ''
        Test #3: ''
        [johnsca@cory johnsca]# cat tst.inp | ./tst.pl
        Test #1: 'foo'
        Test #2: ''
        Test #3: ''

    Basically, the issue I'm seeing is that the subprocesses read from STDIN normally as long as it is not redirected or piped in any way. This seems strange, and I don't really have any idea why it would be handled differently. I don't know if it's a Perl issue, a shell issue, or an OS issue.

    I get the same behavior on the following perls:

    • v5.8.2 built for cygwin-thread-multi-64int
    • v5.8.4 built for i686-linux

    I tried it under bash on both systems, and under ash on Linux.

    Anyway, sorry I can't really give any answers but hopefully this will prompt someone a little more knowledgeable in this area than myself to chip in.

    P.S. In the future, it is best to include code samples (the minimum required to duplicate the behavior) to demonstrate exactly what your problem is.

    bbfu
    Black flowers blossom
    Fearless on my breath

Re: Perl Cascading Calls, Losing STDIN?
by Somni (Friar) on Jun 15, 2004 at 06:26 UTC

    Given the code:

        for (1..3) {
            print "$_: ";
            system(qw( perl -wle ), "print scalar <STDIN>");
        }

    You will have some luck reading from a terminal, i.e. the keyboard. Each script will read as much as it can, which usually happens to be a single line, because that's what the user types. If you read from a file or a pipe, you will more than likely run into problems. The reason is buffering: each program reads more than it needs (some buffer size, probably 1024 bytes or more) in order to find the newline. The first program reads a full buffer, keeps what it didn't use for later, and exits; any subsequent program starts that far into the input, finds nothing there, and reads EOF.
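    A minimal shell sketch of that buffering effect (the filename demo.inp is made up for the demo): two perl children share one redirected STDIN, and the first child's buffered read slurps the whole (small) file, so the second sees only EOF.

    ```shell
    # First child's buffered <STDIN> read consumes the entire small file;
    # the second child inherits a file offset already at EOF.
    printf 'one\ntwo\nthree\n' > demo.inp
    (
      perl -e 'print "first: ", scalar <STDIN>'
      perl -e 'my $l = <STDIN>; print "second: ", defined $l ? $l : "EOF\n"'
    ) < demo.inp
    ```

    On a terminal the second read would simply wait for more typing; with a file or pipe it hits EOF immediately, which is exactly the behavior shown in the tst.pl session above.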

    Why a C shell program works doing this is a mystery; perhaps csh isn't using buffered calls, and instead reads a character at a time until it gets a newline. Something like this:

        for (1..3) {
            print "$_: ";
            system(
                qw( perl -we ),
                q{
                    while (1) {
                        my $bytes = sysread(STDIN, my $c, 1);
                        last if $bytes <= 0 || $c eq "\n";
                        print $c;
                    }
                },
            );
            print "\n";
        }

    Notice this is going to be terribly slow on any sizeable input: for every character there is at least one syscall. Frankly, relying on all of the programs in a chain to read from the same filehandle is flaky and poorly designed; not only do you have issues with buffering, you're relying on an implicit interface rather than explicitly passing around what you need. You should be calling each program with the arguments it needs, or handing it the input you want from the previous program in the chain.

    For example, use a piped open (see perlipc) or IPC::Open2 to open pipes to and/or from each program, and print to them as necessary. POE may be a help, in that it allows you to spawn processes simultaneously and monitor them. It's hard to say, you didn't mention why your system is designed this way, but it sounds to me like it would benefit greatly from a redesign.
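    A rough sketch of the piped-open approach, assuming the parent knows how much input each child should get (the one-line-per-child split and the "child got:" label here are made up for illustration):

    ```shell
    # The parent reads all of its own STDIN up front, then explicitly pipes
    # one line to each child, instead of letting the children race over a
    # shared, buffered STDIN.
    printf 'one\ntwo\n' | perl -e '
        chomp(my @lines = <STDIN>);      # parent consumes its whole STDIN
        for my $line (@lines) {
            # list-form piped open (see perlipc): a write pipe to the child
            open(my $child, "|-", "perl", "-ne", "print \"child got: \$_\"")
                or die "open: $!";
            print $child "$line\n";      # this child gets exactly one line
            close($child);               # waits for the child to finish
        }
    '
    ```

    The explicit pipe replaces the implicit shared-STDIN interface: each child's input is now decided by the parent, so buffering in one child can no longer starve the next.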

      I would love to redesign it, but too many other apps depend on this design, and the change would affect about 1000 scripts, so a rewrite is not in the cards.
        I think I may have found a workaround specific to our needs. Thanks for all the info, guys.