You'd probably have to modify CGI::Session's write()¹ method to first read in any existing session data from the file (if it exists), then merge it with the new session data, before writing it out again (this assumes there are no merge conflicts, i.e. only independent parts of the data have been modified in the meantime). In theory, there would still be a race condition left, but that might be tolerable in practice...
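For illustration, here's a rough standalone sketch of that read-merge-write idea (it doesn't hook into CGI::Session's internals; the file name and the shallow-merge policy are assumptions):

    use strict;
    use warnings;
    use Storable qw(lock_retrieve lock_nstore);

    my $file = '/tmp/cgisess_demo';   # hypothetical session file

    sub merge_and_write {
        my ($new_data) = @_;

        # re-read whatever another request may have flushed in the meantime
        my $on_disk = -e $file ? lock_retrieve($file) : {};

        # shallow merge: keys written by this request win; untouched keys
        # from the other request survive (assumes no conflicting keys)
        my %merged = ( %$on_disk, %$new_data );

        lock_nstore( \%merged, $file );
    }

    merge_and_write( { cart => ['item42'], last_seen => time() } );

Note that the window between lock_retrieve() and lock_nstore() is exactly the theoretical race mentioned above.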
Otherwise, modify CGI::Session to lock the session file at the start of a request, and unlock it when the request has finished. This would of course mean that other requests have to wait for the first one to complete... Also, locking often brings its own class of issues, for example, if the unlocking doesn't happen as expected (the process dies, or some such), other requests may block until some timeout expires, etc.
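A minimal sketch of that per-request locking, using flock() on a separate lock file (the path and the lock-file-per-session-id scheme are assumptions, not anything CGI::Session does itself):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # one lock file per session id (hypothetical naming scheme)
    my $session_id = 'demo';
    open my $lock, '>', "/tmp/cgisess_$session_id.lock" or die "open: $!";

    # blocks until any other request on this session has finished
    flock( $lock, LOCK_EX ) or die "flock: $!";

    # ... load the session, handle the request, flush the session ...

    close $lock;   # releases the lock

One point in flock()'s favor regarding the stale-lock worry: the kernel releases the lock automatically when the process dies, so you don't get a lock that outlives its owner the way you can with ad-hoc "lock file exists" schemes.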
¹ P.S.: What version of CGI::Session do you have? I just checked the current source, and it doesn't seem to have a write() method; rather, it calls $serializer->freeze(...) from within its flush() method (and the default serializer is implemented via Data::Dumper).
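In case it helps, this one-liner prints the installed version (assuming the module is installed where perl can find it):

    perl -MCGI::Session -e 'print $CGI::Session::VERSION, "\n"'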
In reply to Re: Concurrent requests on the same CGI::Session by Anonyrnous Monk
in thread Concurrent requests on the same CGI::Session by webdeveloper