None of the above. And my apologies if I wasn’t clear.
In trying to find a way to improve the speed of sizing the shares (on some servers I have, it could take days), I was hoping to fire off a few instances at once to speed things up, rather than grabbing one share at a time and only moving on to the next when the one in hand is done. Besides the size of each folder/share, I also needed the number of files and folders.
This process took twice as long as doing the same operation manually (right-clicking the folder and selecting Properties), so I thought threads might come in handy.
I attempted to do something about it a few weeks back but hit a brick wall because of the nature of OLE, which doesn’t allow re-entry, and as many Monks here suggested, I abandoned the whole thing.
Then I had an idea: instead of performing the OLE operations within the main script (no re-entry!), I could overcome this limitation by calling other scripts in which the OLE operations are performed. It worked. I managed to get at least 7 folders processed concurrently, which was a huge improvement: the time it took to do all seven shares was about as long as doing only the largest of the seven.
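A minimal sketch of that "one child script per share" idea: each child process creates its own OLE objects, so the parent never re-enters OLE. Here `$worker_cmd` is whatever command prints one line of stats for the path it is given, e.g. `[$^X, 'size_share.pl']` where `size_share.pl` is a hypothetical helper script (note that the list form of piped open needs a reasonably modern perl on Windows):

```perl
use strict;
use warnings;

# Fire off one child process per share, then collect one result line
# from each. The children all run at the same time; the parent only
# blocks later, when it reads the pipes.
sub stats_concurrently {
    my ($worker_cmd, @shares) = @_;   # $worker_cmd: array ref, e.g. [$^X, 'size_share.pl']
    my @jobs;
    for my $share (@shares) {
        # List-form piped open starts the child immediately.
        open my $fh, '-|', @$worker_cmd, $share
            or die "cannot start worker for $share: $!";
        push @jobs, [ $share, $fh ];
    }
    my %result;
    for my $job (@jobs) {
        my ($share, $fh) = @$job;
        chomp( my $line = <$fh> // '' );
        close $fh;
        $result{$share} = $line;
    }
    return %result;
}
```

The parent stays a plain coordinator; anything OLE-flavoured lives entirely inside the child script, which is what sidesteps the re-entry problem.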
However, the script failed when I attempted to include the number of files and folders. Monk Particle, in response to my other post, suggested that I drop OLE and use File::Find::Rule, and thankfully code was provided as well.
I benchmarked both approaches and they are about the same speed, but I feel that using File::Find::Rule is a little more reliable and takes less code.
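For reference, here is a sketch of that stats walk using core File::Find (File::Find::Rule builds on it but is not in the standard distribution); `share_stats` is my own name for it, not the code Particle posted:

```perl
use strict;
use warnings;
use File::Find;

# Walk one share and return (total bytes, file count, folder count).
# The folder count excludes the share root itself.
sub share_stats {
    my ($root) = @_;
    my ($bytes, $files, $dirs) = (0, 0, 0);
    find(sub {
        if (-d $_) {
            $dirs++ unless $File::Find::name eq $root;
        }
        elsif (-f _) {          # reuse the stat from the -d test
            $files++;
            $bytes += -s _;
        }
    }, $root);
    return ($bytes, $files, $dirs);
}
```

This gets all three numbers in a single pass over the tree, instead of one OLE call per figure.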
That about sums up my reasons for these attempts at multithreading. I will try to process more than one folder at a time, using File::Find::Rule to obtain the folder stats, and I hope this will improve things; even a small speed-up would be a great help.
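A sketch of that next step, one thread per folder, might look like the following (assuming a perl built with ithreads; I use core File::Find here in place of File::Find::Rule):

```perl
use strict;
use warnings;
use threads;
use File::Find;

# Start one thread per share; each thread walks its own tree, and the
# parent joins them to collect [bytes, files, folders] per share.
# Note: the folder count here includes the share root itself.
sub threaded_stats {
    my (@shares) = @_;
    my @workers = map {
        my $share = $_;
        threads->create(sub {
            my ($bytes, $files, $dirs) = (0, 0, 0);
            find(sub {
                if (-d) { $dirs++ }
                elsif (-f _) { $files++; $bytes += -s _ }   # reuse the stat
            }, $share);
            return [ $bytes, $files, $dirs ];
        });
    } @shares;
    return map { $_->join } @workers;
}
```

Whether this actually helps will depend on whether the walks are I/O-bound on the same disk or spread across different servers, so I will have to benchmark it.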
You also mentioned that you see a few things that might improve the speed of the above code; I would be very grateful for your comments, kind Sir.
Thanks