in reply to Concurrent requests on the same CGI::Session

You'd probably have to modify CGI::Session's write()[1] method to first read in any existing session data from the file (if it exists), then merge it with the new session data before writing it out again. This assumes there are no merge conflicts, i.e. that only independent parts of the data have been modified in the meantime. There would still be a race condition left, in theory, but that might be tolerable in practice... (see the sketch below).
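
For illustration, here's a minimal sketch of that read-merge-write idea. The merge_and_write() helper and the flat-hashref layout are my own assumptions, and Storable merely stands in for whatever serializer your CGI::Session is configured with:

    use strict;
    use warnings;
    use Storable qw(lock_retrieve lock_nstore);

    # Hypothetical merge-on-write: re-read the session file, overlay only
    # the keys this request changed, and write the result back.  Assumes
    # the session data is a flat hashref and that concurrent requests
    # modify independent keys.
    sub merge_and_write {
        my ($file, $changed) = @_;    # $changed: just the keys we modified

        my $on_disk = -e $file ? lock_retrieve($file) : {};

        # Overlay our modifications; untouched keys keep whatever another
        # request may have written in the meantime.
        $on_disk->{$_} = $changed->{$_} for keys %$changed;

        lock_nstore($on_disk, $file); # the read-then-write gap remains a race
    }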

Otherwise, modify CGI::Session to lock the session file at the start of a request, and unlock it when the request has finished. This would of course mean that other requests have to wait for the first one to finish... Also, locking often brings its very own type of issues, for example if, for some reason, the unlocking doesn't happen as expected (process dies, or some such), and you have to wait until some timeout period is over, etc. A flock-based sketch follows.
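
Something along these lines, say. Note that with_session_lock() is a made-up wrapper, not part of CGI::Session; it relies on the OS releasing an flock automatically when the process dies, which a stale-lock-file scheme or an NFS mount would not give you:

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Hypothetical request-scoped lock around a session file.  LOCK_EX
    # makes every other request block here until the lock is released.
    sub with_session_lock {
        my ($file, $code) = @_;

        open my $fh, '>>', "$file.lock" or die "can't open lock file: $!";
        flock $fh, LOCK_EX or die "can't lock: $!";

        my @result = eval { $code->() };
        my $err = $@;

        flock $fh, LOCK_UN;           # released on process death, too
        close $fh;

        die $err if $err;
        return @result;
    }

    # Usage: serialize all session access for the whole request, e.g.
    # with_session_lock($session_file, sub { ...handle the request... });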

[1] P.S.: What version of CGI::Session do you have? I just checked the current source, and it doesn't seem to have a write() method; rather, it calls $serializer->freeze(...) from within its flush() method (and the default serializer is implemented via Data::Dumper).

Re^2: Concurrent requests on the same CGI::Session
by webdeveloper (Novice) on Jan 17, 2011 at 12:08 UTC

    Thanks! I suspected that modifying the module might be suggested as the best approach, but wanted to check that I wasn't missing something that would enable me to achieve a 'merge' via the existing CGI::Session interface.

    I did think about the locking approach, but I'm not sure that it fits with user expectations in the context of a web application (i.e. that a logout request should be pretty much instant).

    Thanks again for your feedback - much appreciated. (You are correct about $session->write - original node updated)
