Indeed, your threads rewrite scales linearly and I will adopt it, thank you. However, I'm still intrigued as to why the fork version behaves the way it does.
My "production" system is a 10-year old, 8-thread i7 but with plenty of RAM (32GB) to handle the task at hand. Initially, the script didn't even have a fork limit neither waitpid() or wait(), except the final one after the regex loop, as I just used the solution presented to me by Marshall on my previous question and it didn't make a difference in performance anyway, as I experimented with various limits ranging from 8 to 512, thinking at first that unconstrained forking was the cause. I noticed your version only pumps "user" time on my system monitor, while fork shows a significant amount of "system" time, at least 10% of total CPU, whether $maxforks is 8 or unlimited. All these said, I'd still like to find out why the initial script becomes so slow when the files to process multiply, even when I limit its scope to the first 1000 files. It's almost unreasonable. In reply to Re^2: Script exponentially slower as number of files to process increases
In reply to Re^2: Script exponentially slower as number of files to process increases
by xnous