in reply to Can't spawn "cmd.exe": No error at

I second the earlier requests for more information, but have an additional idea: is it possible that you have exceeded Windows' "process table" or whatever it's called? That is, might you have too many processes running? Also, what version of Windows are you using?

--traveler

Replies are listed 'Best First'.
Re: Re: Can't spawn "cmd.exe": No error at
by mgibian (Acolyte) on Jul 17, 2003 at 21:53 UTC
    Time for a follow-up to my original posting...

    First, and more immediately important, I am no longer encountering the problem. I do not say that I have fixed or resolved it, as I am really not sure what was wrong and why it is no longer happening. The only thing I can say is that I copied the system statement from my example, inserted it next to the one that was failing, and then edited it to reflect the few differences between the one that was working and what I needed the failing statement to do. At this point it started working. I do not see WHY it should have started working, nor why it was failing. But the bottom line is that it is now working.

    There was a question as to why I was using the @cmd array variable. I am embarrassed to say that it was just sloppiness. I had copied sections of code from various examples around the Internet and apparently included this extraneous step when I copied the system usage code. I have removed that extra bit of logic and tested things successfully, so thank you for the observation.

    I am running this on both Windows 2000 Pro and Windows XP Pro. The machines themselves are quite big (System 1 - P4 3 GHz with hyperthreading, 1GB, 800MHz FSB, Serial ATA disks in RAID configuration, 875-chipset ASUS motherboard; System 2 - dual P4 2.2 GHz, 1GB, IDE RAID), so there is no way I should be running into a process table size limitation. Certainly not with the small number of processes I'm using for this build vs. the large number that run when I'm performing some other tasks.

    There are a few things I still need to add to this script set:

    1 - Concurrency - There are steps that can run concurrently, particularly when I'm building bits on other machines (a set of Unix/Linux systems, for example). I haven't taken the leap into multi-threading my Perl, though I've done plenty of multi-threaded C/C++ code, so I'm not entirely clueless. I'd appreciate some advice and/or pointers to good example code that I can copy from rather than having to spin my own from scratch.

    2 - Improved logging - Right now I have a mishmash of logging going on. Some things get written directly to a "trace" log; builds on remote systems don't report results until the entire remote build has finished, when I'd really like some interactive feedback on what is happening; and some system commands get redirected to their own log files (primarily so their results can be checked), which also makes useful progress reports far more difficult. I'd love some advice and/or pointers on how I might clean this up.

    3 - I need to automate some tasks that actually occur on an MVS system (VSAM+CICS, VSE+CICS, "Started Task"+CICS, with some VM sprinkled in). Doing this stuff manually with a "green screen" emulator takes far too long and is much too error-prone. But my local MVS system analyst/software developer/administrator keeps telling me it can't be done. I'm sure it CAN be done, though I'm not sure how, and it certainly could still take a LOT of effort. Any advice and/or pointers would be quite appreciated.
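    On item 1 (concurrency): since Win32 Perl emulates fork() with interpreter threads, one way to get concurrent build steps without writing explicit threading code is a plain fork/wait loop. This is only a sketch under my own assumptions; the perl one-liner "steps" are placeholders for real build commands.

```perl
use strict;
use warnings;

# Placeholder build steps; each would really be an nmake/compile command.
my @steps = (
    qq{$^X -e "exit 0"},
    qq{$^X -e "exit 0"},
);

my %kids;    # pid => command, so we can report per-step results
for my $cmd (@steps) {
    defined( my $pid = fork() ) or die "fork failed: $!";
    if ( $pid == 0 ) {                 # child: run one step, report status
        my $rc = system($cmd);
        exit( $rc == 0 ? 0 : 1 );
    }
    $kids{$pid} = $cmd;
}

my $failures = 0;
while (%kids) {
    my $pid = wait();                  # reap children as they finish
    last if $pid < 0;
    $failures++ if $? != 0;
    print "finished (status $?): $kids{$pid}\n";
    delete $kids{$pid};
}
print $failures ? "FAILED: $failures step(s)\n" : "all steps OK\n";
```

    Running the steps as detached "batch" jobs and checking their logs afterwards, as discussed further down, is an equally valid design; the fork approach just keeps everything in one script.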
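    On item 2 (logging): the simplest unification I know of is a single logmsg() helper that timestamps each line and tees it to both the screen and one trace file, so progress is visible live while a permanent record accumulates. The file name and message format below are just illustrative choices, not anything from this thread.

```perl
use strict;
use warnings;

my $logfile = 'build_trace.log';                 # assumed trace file name
open my $LOG, '>>', $logfile or die "open $logfile: $!";
select( ( select($LOG), $| = 1 )[0] );           # autoflush the log file

sub logmsg {
    my ($msg) = @_;
    my $stamp = scalar localtime;
    print      "[$stamp] $msg\n";                # live feedback on screen
    print $LOG "[$stamp] $msg\n";                # permanent trace record
}

logmsg('starting build step foo');               # 'foo' is a placeholder
```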

    So, thank you everyone for your help. It is invaluable as always, even if in this specific case the initial problem seems rather elusive and has vanished of its own accord.

    -Marc

      I do concurrent builds on many different Unix boxes from Windows, not by using threads, but by using remote shell (an rsh client comes standard with Windows). The remote shell fires off a nohup'd Unix build script that writes its output to a Unix log file. After starting the Unix build script, the local rsh exits, so this puts no strain on my Windows box at all; by firing off builds like this on 10 different Unix boxes, I get concurrent Unix builds. I can check the status of the builds simply by running a remote shell that greps/tails the Unix log files. This whole process is easily automated via a Perl script.
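      The fan-out described above might be sketched in Perl roughly like this; the host names and the remote script/log paths are all invented for illustration.

```perl
use strict;
use warnings;

my @hosts = qw(unix01 unix02 unix03);   # hypothetical Unix build hosts

# Kick off a detached build on each host; rsh returns once nohup starts it.
for my $host (@hosts) {
    my $rc = system(
        qq{rsh $host "nohup /build/run_build.sh </dev/null >/dev/null 2>&1 &"});
    warn "rsh to $host failed (rc=$rc)\n" if $rc != 0;
}

# Later, poll progress by tailing each remote log file.
for my $host (@hosts) {
    print "=== $host ===\n";
    system(qq{rsh $host "tail -5 /build/build.log"});
}
```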

      On a system lacking remote shell (such as MVS, perhaps) you might try using the CPAN Net::Telnet module to automate everything you currently do by hand via telnet. This is not as nice as rsh because you need to "delay here" and "expect this", but it can be done (I have used this technique to control remote builds and regression tests on remote systems lacking rsh).
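      The Net::Telnet pattern looks roughly like this sketch; the host, credentials, prompt regex, and commands are placeholders, and the module must be installed from CPAN.

```perl
use strict;
use warnings;
use Net::Telnet;                      # CPAN module, not core

my $t = Net::Telnet->new(
    Host    => 'unixhost',            # placeholder host
    Timeout => 30,
    Prompt  => '/\$ $/',              # assumed shell prompt pattern
);
$t->login( 'builduser', 'secret' );   # placeholder credentials

# "Expect this" instead of "delay here": wait for the prompt to return.
$t->print('nohup ./run_build.sh > build.log 2>&1 &');
$t->waitfor( $t->prompt );

my @tail = $t->cmd('tail -3 build.log');   # peek at build progress
print @tail;
$t->close;
```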

        I am already using Net::Telnet for running my Unix builds (currently synchronously) and my VMS builds, where I just submit a VMS batch job. The Unix side is only a small problem, as those are very small bits and build very quickly; there is only modest improvement to be had by running them concurrently. I had thought using Perl threading might be a simple approach, but of course running them as "batch" jobs and going back to look at their results is probably the simplest way to go. The VMS builds take a great deal of time, due both to the amount of code being built and to the slowness of the machines I am using. I do need to add code to go back and check log files for completion and success; right now that is a manual process with the VMS builds. I am thinking Net::FTP might be the simplest way to check the log files from batch jobs.
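        A Net::FTP check of a batch job's log might look like the sketch below; the host, login, file names, and success marker are all assumptions for illustration.

```perl
use strict;
use warnings;
use Net::FTP;

my $ftp = Net::FTP->new( 'vmshost', Timeout => 60 )
    or die "connect failed: $@";
$ftp->login( 'builduser', 'secret' ) or die 'login failed: ', $ftp->message;
$ftp->ascii;                                     # log files are text
$ftp->get( 'BUILD.LOG', 'build_vms.log' )
    or die 'get failed: ', $ftp->message;
$ftp->quit;

# Scan the fetched log for an assumed completion marker.
open my $fh, '<', 'build_vms.log' or die "open: $!";
my $ok = grep { /BUILD SUCCEEDED/ } <$fh>;
print $ok ? "VMS build finished OK\n" : "no success marker yet\n";
```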

        MVS poses the greatest challenge. Note that these old beasts still use block mode terminal emulation for interactive use (the infamous "green screen"). While it appears possible to telnet to the various MVS environments, the server is not a standard telnet server. It would not make sense for it to be one, since there is no "command line" environment for this beast, only the block mode full screen environment. And thus my question about automating things: it is not clear to me how one goes about this, yet I am sure it is being done. One strategy that makes sense is to submit batch jobs that do the work needed, and then use FTP to fetch the results. But it's not even clear to me how to do that, nor that batch jobs can perform all the necessary tasks (such as shutting down a CICS partition, as one example).
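        On the batch-job strategy: one approach I believe works (an assumption on my part, not confirmed in this thread) is that the MVS FTP server can act as a gateway to JES, so switching the session with SITE FILETYPE=JES makes a put of a JCL file submit it as a job rather than store it. A hedged Net::FTP sketch, with host, credentials, and file names invented:

```perl
use strict;
use warnings;
use Net::FTP;

my $ftp = Net::FTP->new( 'mvshost', Timeout => 120 )
    or die "connect failed: $@";
$ftp->login( 'builduser', 'secret' ) or die 'login failed: ', $ftp->message;

$ftp->quot( 'SITE', 'FILETYPE=JES' );   # ask the server to route files to JES
$ftp->ascii;

# With FILETYPE=JES in effect, put() submits the JCL as a batch job; the
# server's reply text normally carries the assigned job id (e.g. JOBnnnnn).
$ftp->put('build.jcl') or die 'submit failed: ', $ftp->message;
print 'submitted: ', $ftp->message, "\n";

$ftp->quit;
```

        Whether batch jobs can cover everything (such as the CICS shutdown) is a separate question for the MVS folks; this only addresses getting jobs in and results out without the green screen.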

      2&>error_file is not DOS/Windows CMD/WhateverSoftCommandPrompt syntax; it is 2>error_file.
      2&>error_file is (I luv) Unix.
      You should always be careful about pasting code that uses system calls, as it is very different between OSes.

        This "advice" is wrong. Windows CMD does support 2>&1, and has since at least NT 3.51 if not before.


        Examine what is said, not who speaks.
        "Efficiency is intelligent laziness." -David Dunham
        "When I'm working on a problem, I never think about beauty. I think only how to solve the problem. But when I have finished, if the solution is not beautiful, I know it is wrong." -Richard Buckminster Fuller

        To clarify, I am looking for the best way to capture all output from an arbitrary command run on Windows via Perl, with the maximum amount of error handling. My understanding is that system coupled with output redirection is the best way to do this, but I'd happily stand corrected if there's a better way. Otherwise, is:
        system("foo > stdoutfile 2> stdoutfile");
        the best way to capture all output from the command foo? Are there any buffer issues by having to specify the file twice rather than the Unix technique of copying the descriptor with 2>&1?
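        Since CMD does understand 2>&1 (as noted above), the usual form duplicates the stdout handle rather than opening the same file by name twice; two separate redirections to one file risk the two handles overwriting each other's output. A sketch, where foo stands in for the real command and the return-code handling follows the system() notes in perlfunc:

```perl
use strict;
use warnings;

# 'foo' is the placeholder command from the question above.
# Order matters: redirect stdout first, then duplicate it for stderr.
my $rc = system('foo > outfile.txt 2>&1');

if    ( $rc == -1 )  { warn "failed to start: $!\n" }
elsif ( $rc & 127 )  { warn 'died on signal ', ( $rc & 127 ), "\n" }
elsif ( $rc != 0 )   { warn 'exited with status ', ( $rc >> 8 ), "\n" }
```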