I am trying to further wrap my head around why saving a possible 30 seconds per device in this scenario was a less-than-optimal approach. That is, other than the fact that it causes me a lot of synchronization issues.
Okay. Using your numbers: 100 machines; 3 commands; 15 seconds per command; and 10 concurrent threads.
You process 10 commands (3 1/3 machines) every 15 seconds: 100 / 3.333 * 15 / 60 = 7.5 minutes.
I process 10 machines every 45 seconds: 100 / 10 * 45 / 60 = 7.5 minutes.
But: I've spawned 100 threads and made 100 connections. No locking, nor waiting, nor syncing to slow things down.
You've spawned 300 threads and made 300 connections. And you had to acquire locks and wait for them.
Given the IO-bound nature of the problem, the locking might not slow you down too much -- assuming you can get it right without creating deadlocks, livelocks, priority inversions, et al. -- but you've definitely consumed 2 or 3 times as much CPU, caused 3 times as much network traffic, put 3 times the load on the remote machines, and consumed more memory, all to achieve the same overall elapsed time.
It just isn't worth the hassle.
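For illustration, here is a minimal sketch of the one-thread-per-machine approach described above: each thread opens its own connection and runs the three commands back to back, while a semaphore caps concurrency at 10. The run_command sub, host names and command names are placeholders for the real ssh/telnet work, not code from the original thread.

    use strict;
    use warnings;
    use threads;
    use Thread::Semaphore;

    my @machines = map { "host$_" } 1 .. 100;   ## placeholder host names
    my @commands = ( 'cmd1', 'cmd2', 'cmd3' );  ## placeholder commands

    my $slots = Thread::Semaphore->new( 10 );   ## at most 10 machines in flight

    my @threads;
    for my $host ( @machines ) {
        $slots->down;                           ## wait for a free slot
        push @threads, threads->create( sub {
            ## One connection per machine; the 3 commands run back to back.
            ## Nothing is shared between threads: no locks, no syncing.
            run_command( $host, $_ ) for @commands;
            $slots->up;                         ## release the slot
        } );
    }
    $_->join for @threads;

    sub run_command {
        my( $host, $cmd ) = @_;
        ## Stand-in for the real connection/command; sleep approximates
        ## the ~15 seconds each command takes on the device.
        sleep 15;
    }

Because no thread touches anything another thread needs, there is nothing to lock and nothing to synchronise; the only coordination is the semaphore that throttles how many machines are in flight at once.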
In reply to Re^7: Program Design Around Threads by BrowserUk, in thread Program Design Around Threads by aeaton1843