in reply to Re: 3 weeks wasted? - will threads help?
in thread 3 weeks wasted? - will threads help?
In my readmore tags I explained that each copy works on a different directory. Each directory is very transient, and that is a race condition beyond my control.
The extra memory overhead is coming from the Perl interpreter, not the code itself (or at least that is my belief). See below:
#!/usr/bin/perl -w
use strict;

while (1) {
    print "I am only printing and sleeping\n";
    sleep 1;
}
Forking will not buy me anything, as I understand it, since I would be making an exact duplicate (memory and all). I was thinking threads might help, but as I understand them, each thread gets its own copy of the interpreter, so there are no memory savings there either.
So my question, stated more clearly, is:
Given a piece of code to parse a single directory, how can I parse multiple directories concurrently (or very nearly so) without the memory overhead of each piece requiring its own interpreter?
Concatenating the files in each directory into one long list isn't feasible either.
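To make the question concrete, here is the rough shape of what I am hoping for: one Perl process holding a directory handle per directory and servicing them round-robin. This is only a sketch; the @dirs list and parse_entry() are made-up placeholders, not the real code, and it ignores the file-level parsing entirely.

#!/usr/bin/perl -w
use strict;

# Sketch only: the directory list and parse_entry() are placeholders.
my @dirs = ('/some/transient/dir1', '/some/transient/dir2');

# Open a handle per directory so one interpreter can service all of them.
my %handle;
for my $dir (@dirs) {
    opendir(my $dh, $dir) or next;    # a directory may have vanished already
    $handle{$dir} = $dh;
}

# Round-robin: read one entry from each directory per pass, so the
# directories are processed nearly concurrently inside a single process.
while (%handle) {
    for my $dir (keys %handle) {
        my $entry = readdir($handle{$dir});
        if (!defined $entry) {            # this directory is exhausted
            closedir($handle{$dir});
            delete $handle{$dir};
            next;
        }
        next if $entry eq '.' or $entry eq '..';
        parse_entry("$dir/$entry");       # stand-in for the real parsing code
    }
}

sub parse_entry {
    my $path = shift;
    print "would parse $path\n";
}

The appeal of something along these lines is that a directory which empties or disappears simply drops out of the hash, which seems to fit the transient-directory situation, and only one interpreter is ever resident.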
I freely admit that I may be asking to get something for nothing, but it seems like an awful waste not to be able to use the Perl code and to have to keep using the shell script instead :-(
Cheers - L~R
Replies are listed 'Best First'.

Re: Re: Re: 3 weeks wasted? - will threads help?
  by perrin (Chancellor) on Jan 27, 2003 at 22:57 UTC
  by Limbic~Region (Chancellor) on Jan 27, 2003 at 23:12 UTC
  by perrin (Chancellor) on Jan 27, 2003 at 23:39 UTC
  by Limbic~Region (Chancellor) on Jan 28, 2003 at 00:50 UTC
  by perrin (Chancellor) on Jan 28, 2003 at 02:24 UTC

Re^3: 3 weeks wasted? - will threads help?
  by adrianh (Chancellor) on Jan 27, 2003 at 22:49 UTC