in reply to vanishing system call

Aha! Got something, finally.

I tried using the unless (system(...) == 0) {die...} construct suggested by sauoq above, and finally got an error message, albeit one that doesn't make much sense to me (at least initially). If I use $!, I get a message that says "not enough space"... which suggests that "sort" doesn't have enough room to create its temporary files. But I monitored /var/tmp (which is where it normally happens on this server) as I was running the script, and it never got above 33% full.

So maybe something or someone changed "sort" so that it doesn't use /var/tmp anymore? That is, of course, a question for the Unix admins. I then tried the script with "sort -T" to direct the temporary files to a place where I know there is room, and it still failed with the same message. And then I tried a separate script that does nothing but the system call with the "sort -T", and it worked. Hmmmmm.... so the error message points to a space problem, but the behavior of the script points to a memory problem. I'm still confused, but it's a starting point, at least.

Thanks to all for the help, especially sauoq for the method of catching errors from a system("...") call. This has been kicking me in the head for two days now. I'm pretty sure the problem lies in the server environment, and now at least I've got something more substantial to take to the Unix admins.

Thanks....

Replies are listed 'Best First'.
Re: A clue!
by ChemBoy (Priest) on Sep 20, 2002 at 22:42 UTC

    If I use $!, I get a message that says "not enough space"...

    This is because $! is not the right variable to use, and probably has never been set in your program (and so contains some random and unrelated value). See the above post by virtualsue for the correct debugging method for system.



    If God had meant us to fly, he would *never* have given us the railroads.
        --Michael Flanders

      This is because $! is not the right variable to use

      The only time $! is meaningful with regard to system() is when the return value in $? (the right variable to use) is -1. A -1 indicates that the program didn't start. In that case, $! should tell you why.
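To sketch that in code: a fuller check of a system() call distinguishes all three cases. The command line here is made up for illustration; substitute your own sort invocation.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical command; stands in for whatever you pass to system().
my $cmd = 'sort -T /big/tmp -o outfile infile';

my $rc = system($cmd);

if ($rc == -1) {
    # The command never started at all; only here is $! meaningful.
    die "system() failed to start '$cmd': $!\n";
}
elsif ($? & 127) {
    # The child was killed by a signal (a SIGSEGV would show up here).
    die sprintf "'%s' died with signal %d\n", $cmd, $? & 127;
}
elsif ($? >> 8) {
    # The child ran but exited non-zero; $? holds the status, $! is noise.
    die sprintf "'%s' exited with status %d\n", $cmd, $? >> 8;
}
```

The high byte of $? is the child's exit status and the low seven bits are the signal number, which is why both shifts appear above.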

      -sauoq
      "My two cents aren't worth a dime.";
      
Re: A clue!
by jarich (Curate) on Sep 23, 2002 at 06:55 UTC
    Talking about quotas... Have you ensured that you are allowed to use that much memory?
    Make sure that you're not exceeding your segment size. To determine this, type "ulimit -a" at your Solaris prompt. You'll get something like:

        core file size (blocks)     unlimited
        data seg size (kbytes)      131072
        file size (blocks)          unlimited
        max memory size (kbytes)    1019872
        open files                  4096
        pipe size (512 bytes)       8
        stack size (kbytes)         2048
        cpu time (seconds)          unlimited
        max user processes          64
        virtual memory (kbytes)     1048576

    Notice the "data seg size" there. If it's less than the total size of your files, you may be hitting your limit. For example, if it's set at 100MB for you, then 3 files at 30MB each will process easily, but a fourth file bringing the total up to 120MB will hit your segment size (and probably cause a "segmentation fault") and it'll be as if the sort never occurred.

    If you're very lucky (assuming your system administrators like you, etc.) you'll be able to raise the size of your data segments with "ulimit -d <larger number here>". You probably won't be able to raise it above the "max memory size" that you've been given, though.
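    As a sanity check, the script itself could compare the limit against the input before calling sort. This is only a sketch: ulimit is a shell builtin (not an external program), so we ask /bin/sh for it, and the *.dat glob is a made-up stand-in for your real input files.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# ulimit is a shell builtin, so run it through /bin/sh.
chomp(my $dseg_kb = `/bin/sh -c 'ulimit -d'`);

# Hypothetical input set; replace with your actual file list.
my @files = glob "*.dat";
my $total_kb = 0;
$total_kb += (-s $_) / 1024 for @files;

if ($dseg_kb ne 'unlimited' && $total_kb > $dseg_kb) {
    warn sprintf "input is %.0f KB but data seg size is only %s KB\n",
        $total_kb, $dseg_kb;
}
```

    Note this only checks one of several limits; "max memory size" and "virtual memory" from the ulimit output above can bite the same way.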

    Hope that helps.

    jarich

Re: A clue!
by joe++ (Friar) on Sep 23, 2002 at 11:33 UTC
    "Hmmmmm.... so the error message points to a space problem, but the behavior of the script points to a memory problem."

    That reminds me of a Solaris 2.6 box where /var/tmp appeared to be memory-mapped (tmpfs, backed by swap). That was very awkward when a script ran amok and logged the same old error message every few seconds: we ran out of swap rather than just ordinary disk space, all kinds of regular daemon processes died, and we weren't even able to log in anymore.

    You may want to talk to your sysadmins about this one...

    --
    Cheers, Joe