in reply to Re: Re: Re: 3 weeks wasted? - will threads help?
in thread 3 weeks wasted? - will threads help?
I admit that I thought the enormous SZ that ps reports for each child process I forked came from its own instance of the Perl interpreter, but I never claimed that HP-UX didn't use copy-on-write.
My problem is that I have no way of profiling it: how can I tell how much memory is really being used and how much is shared?
I have thought of a few more ways to optimize the speed and memory allocation of the original code, but that won't get rid of the baseline overhead I demonstrated with a trivial script:
#!/usr/bin/perl -w
use strict;

while (1) {
    print "I am only printing and sleeping\n";
    sleep 1;
}
That tiny program shows up in ps with an SZ comparable to my full-blown script.
If I can't tell how much of that is shared when I fork another process, I have no idea whether the project is viable or whether it should be scrapped.
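The best I have come up with so far is a crude experiment rather than a real profile. This is only a minimal sketch (the child count and the sleep window are arbitrary): fork a batch of children that do nothing, then watch system-wide free memory with vmstat or swapinfo from another terminal. If copy-on-write is working, free memory should drop by far less than the sum of the children's SZ values.

#!/usr/bin/perl -w
use strict;

# Fork a batch of do-nothing children so their memory use can be
# observed from outside. If each child really cost its full ps SZ,
# free memory would drop by roughly $kids * SZ; with copy-on-write
# it should drop far less.
my $kids = 20;    # arbitrary number of children for the experiment

for ( 1 .. $kids ) {
    defined( my $pid = fork ) or die "fork failed: $!";
    if ( $pid == 0 ) {    # child: touch nothing, just sleep
        sleep 300;
        exit 0;
    }
}
print "Forked $kids sleeping children - check vmstat/swapinfo now\n";
sleep 300;    # parent waits too, so the children stay alive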
Now your proposal is a tad different from the others: your forked children die after each iteration, returning all of their memory to the system, and a new one is spawned for the next pass. That means the memory is MORE available to the system during the sleep, and since the variables will be pretty stagnant once the child is forked, its pages won't start getting dirty before it dies. This is food for thought.
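For my own notes, here is a minimal sketch of that fork-per-iteration pattern as I understand it. do_one_pass() is a hypothetical stand-in for whatever one iteration actually does, and the 60-second sleep is an arbitrary idle period:

#!/usr/bin/perl -w
use strict;

# Stand-in for the real work done on each iteration (hypothetical).
sub do_one_pass {
    print "doing one pass of work\n";
}

while (1) {
    defined( my $pid = fork ) or die "fork failed: $!";
    if ( $pid == 0 ) {
        do_one_pass();    # child does the work...
        exit 0;           # ...and exits, returning all its memory
    }
    waitpid( $pid, 0 );   # parent reaps the child
    sleep 60;             # memory stays free during the idle period
}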
Thanks and cheers - L~R