in reply to Re: Why won't this Deadlock?
in thread Why won't this Deadlock?
Being short of time, I'm thinking of a shade of mod_perl on Steroids.
This example is a synchronization primitive, called a promise.
Or at least it looks like that.
I've got some more posts coming...
Were you able to run the example?
Did it deadlock for you?
Cheers,
Jambo
Re^3: Why won't this Deadlock?
by marioroy (Prior) on Jul 12, 2017 at 00:57 UTC
Update: Changed max_age from 3 seconds to 1 hour. Was missed before posting.

Hi Jambo Hamon,

The following is a FCGI::ProcManager + MCE::Shared demonstration. One can make use of MCE::Shared and have a fast cache. A NoSQL-like object is also handy for session data across multi-page HTTP requests. The MCE::Shared::Cache module is a hybrid LRU/plain implementation, and the MCE::Shared::Minidb NoSQL-like module offers a Redis-like API.
For maximum performance, ensure Perl has Sereal::Encoder/Sereal::Decoder 3.015+ installed. IO::FDPass 1.2+ is beneficial when constructing a shared queue. MCE::Shared::Cache and MCE::Shared::Minidb were written for low memory consumption and maximum performance. For example, MCE::Shared::Cache uses dualvar to hold the expiration time alongside the key internally.

Only enable what you need. In particular, do not enable max_age if it is not needed, for maximum performance. The OO interface for shared objects saves you from having to handle a mutex at the application level, unless of course you want to wrap a mutex (enter) around multiple shared actions.

Regards, Mario
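A minimal sketch of the cache usage described above, assuming MCE::Shared is installed (the option names max_keys and max_age follow the MCE::Shared::Cache documentation; the key and values here are illustrative):

```perl
use strict;
use warnings;
use MCE::Shared;

# Construct a shared cache. max_age costs extra bookkeeping,
# so enable it only when expiration is actually needed.
my $cache = MCE::Shared->cache(
    max_keys => 500,     # keep at most 500 entries (LRU behavior)
    max_age  => 3600,    # expire entries after 1 hour (seconds)
);

$cache->set( 'session:42', 'some session data' );
print $cache->get( 'session:42' ), "\n";
```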
by Jambo Hamon (Novice) on Jul 12, 2017 at 01:28 UTC
Do you think the following is proof of the promises mechanism? I cannot wait to have a few "@beers" on the dock. Man, I slay me. Now I could show you. Is the index $t{index}->{prev}{cur}{rgy} being mis-counted or not? That's the one I am hunting.
All the Best,
Now the evidence is tampered with. I just found a bug: g was being counted twice in $rgy.
by marioroy (Prior) on Jul 12, 2017 at 04:08 UTC
Hi Jambo,

Incrementing a shared value (++) involves FETCH and STORE, meaning two IPC trips behind the scenes. It becomes more expensive for deeply-shared structures. I see the reason for wanting 3 mutexes to control which one goes first. However, having many mutexes may behave much like running without any mutex at all when multiple IPC trips are involved.
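The FETCH-then-STORE behavior of ++ can be observed with a plain tied scalar in core Perl (the CountingScalar package here is purely illustrative, not part of MCE::Shared):

```perl
use strict;
use warnings;

# A tied scalar that counts how often Perl calls FETCH and STORE.
package CountingScalar;
sub TIESCALAR { bless { val => 0, fetch => 0, store => 0 }, shift }
sub FETCH     { my $self = shift; $self->{fetch}++; $self->{val} }
sub STORE     { my ($self, $v) = @_; $self->{store}++; $self->{val} = $v }

package main;
tie my $counter, 'CountingScalar';

$counter++;   # one FETCH plus one STORE under the hood

my $inner = tied $counter;
printf "FETCH: %d, STORE: %d\n", $inner->{fetch}, $inner->{store};
```

With a shared variable, each of those two calls becomes a round trip to the shared-manager process, which is why a single combined operation on the server side is cheaper.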
Another possibility is 1-level shared-hash with compounded key names.
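A minimal sketch of that idea in plain Perl (the '|' delimiter is an arbitrary choice):

```perl
use strict;
use warnings;

# Deeply nested layout: each level is its own hash, which is
# expensive when the structure is shared across processes.
my %deep;
$deep{prev}{cur}{rgy}++;

# Flat layout: one hash, one compound key. Shared as a 1-level
# hash, each update touches only a single container.
my %flat;
$flat{'prev|cur|rgy'}++;

print "deep: $deep{prev}{cur}{rgy}, flat: $flat{'prev|cur|rgy'}\n";
```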
To resolve the mis-counting issue, I constructed another mutex named $mutex and wrapped the operations inside it.
Here's an optimized version that gets rid of the extra mutex. It runs about 6 times faster than the original code. I've made a custom hash package based on MCE::Shared::Hash. Calling the OO method pipeline_eval sends a list of actions to the shared-manager, where the data resides. In essence, this removes the deep-sharing aspect from the picture: only the outer-most hash is shared, and the commands for updating the hash are combined and sent via 1 IPC call.
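The author's custom pipeline_eval method isn't reproduced here, but the underlying idea — batching several updates into one message handled where the data lives — can be sketched in plain Perl (apply_batch is a hypothetical stand-in for the shared-manager side):

```perl
use strict;
use warnings;

my %store;   # stands in for the hash held by the shared-manager

# Apply a whole list of increments in one call, rather than
# one FETCH/STORE round trip per counter.
sub apply_batch {
    my ($ops) = @_;
    $store{ $_->[0] } += $_->[1] for @$ops;
}

apply_batch( [ [ 'r', 1 ], [ 'g', 1 ], [ 'y', 1 ] ] );
print join( ' ', map { "$_=$store{$_}" } sort keys %store ), "\n";
```

One IPC message carrying the whole batch replaces six round trips (a FETCH and a STORE per counter), which is where the speedup comes from.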
Below is the dump output from perl script.pl --iterations=5000 --stats=1
Update: After the run completes, the overhead of the shared variable is no longer needed. Therefore, export the shared hash into a normal hash; optionally, untie the shared hash.
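A minimal sketch of the export step, assuming MCE::Shared is available (per the MCE::Shared documentation, export returns a non-shared copy of the object):

```perl
use strict;
use warnings;
use MCE::Shared;

my $shared = MCE::Shared->hash( r => 1, g => 2, y => 3 );

# Once the parallel phase is done, take a plain (non-shared)
# copy so further reads avoid IPC entirely.
my $plain = $shared->export;

print $plain->get('g'), "\n";
```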
Regards, Mario
by Jambo Hamon (Novice) on Jul 12, 2017 at 11:41 UTC
by Jambo Hamon (Novice) on Jul 12, 2017 at 19:19 UTC
by marioroy (Prior) on Jul 13, 2017 at 03:29 UTC
by Jambo Hamon (Novice) on Jul 12, 2017 at 02:08 UTC
That means rg was the first to run!
Like a Pony,
by Jambo Hamon (Novice) on Jul 12, 2017 at 02:14 UTC