in reply to Re: apache/perl caching problem
in thread apache/perl caching problem

"...you sometimes receive old content after having received new content..."

That is EXACTLY what I'm getting... and it could be any version in between! I'm not sure about the proxy servers, but I doubt it. As far as I know this is a basic intranet server. I need to hear back from the guy who does the network architecture to know for sure...

I guess I should have been clearer about what I am getting. When I modify my HTML templates, the output is immediately correct. When I modify my perl script, I have to refresh several times before I maybe see the change... and even once I do see the change, I may get the older version back at any time. The portion generated by the perl seems random. Some crazy examples:

I suppose the last 2 examples could be explained by a previously cached local version, but to me the email screams "server problem". Frustrating, to say the least.

I haven't checked the access logs, but when I receive an error it shows up in the error logs and of course the error email is generated. I can check the access log, though it may take me a while to PURPOSELY recreate the problem!

...Yes, I've tried ctrl+F5, clearing my cache multiple times, changing browser settings, blah blah blah, and anything else that would point to a local caching problem... I've also had end users who had NEVER been to the page test it for me, yet even they may get a previous version.

I just scanned those links you referred to and haven't read them in depth yet, but wouldn't controlling the caching in the HTTP headers only affect the HTML and not the perl behind the scenes?

Re^3: apache/perl caching problem
by ig (Vicar) on Apr 23, 2010 at 23:42 UTC

    Caching control in the HTTP headers controls what the clients (including intermediate proxy servers) do with the response and subsequent queries for the same URL. It doesn't matter whether the response came from a static HTML file or was produced by a script.
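
    For example, here is a bare-bones sketch of a CGI script that sends its own "don't cache me" headers (the header values are generic, not anything specific to your setup):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # The caching headers are sent by whatever generates the response,
        # so a script controls them just as easily as a static file does.
        print "Content-Type: text/html\r\n";
        print "Cache-Control: no-cache, no-store, must-revalidate\r\n";
        print "Pragma: no-cache\r\n";
        print "Expires: 0\r\n";
        print "\r\n";
        print "<html><body>Generated at ", scalar localtime, "</body></html>\n";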

    To be certain the responses are coming from your server, I would run a network sniffer on the server (e.g. wireshark) and observe the query and response.
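
    If wireshark isn't convenient on the server itself, tcpdump will do (the interface name eth0 below is just an assumption - substitute your own):

        # Capture HTTP traffic on port 80 and print packet contents as ASCII
        tcpdump -i eth0 -s 0 -A 'tcp port 80'

        # Or save a capture file to inspect in wireshark later
        tcpdump -i eth0 -s 0 -w /tmp/http-debug.pcap 'tcp port 80'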

    After confirming the bad responses are coming from the server, I would investigate the server configuration. Given the unusual behavior, I would make no assumptions, so I would begin by determining what process is listening on the port that accepted the connection over which the request and bad response were exchanged. If this is an apache server process then the scope is narrowed, but perhaps there is some intervening software.
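
    To find out what is listening, something along these lines (run as root) works on most Linux servers:

        # Show listening TCP sockets with the owning PID and program name
        netstat -tlnp | grep ':80'

        # Or, equivalently, ask which process has port 80 open
        lsof -i :80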

    Is your script a standard CGI script? If it is, it will be read from disk and executed once for every request handled by the server. You might verify this by having your script log its start time, process ID, version and the full path and modification time of the file loaded. Log this every time your script handles a request. You can then correlate these logs with the requests to confirm that your script is running and producing the results you are seeing, and see exactly what is running each time. If you have plain CGI, you should see a different process ID each time and the version and modification times of your script should always be the latest. Your evidence suggests you will see something else. If you see the same process ID for several requests, then you should investigate what that process is, how it comes to be handling multiple CGI requests and how it is handling your script.
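
    A minimal version of that logging might look like the following (the log path and $VERSION value are placeholders, not anything from your setup):

        #!/usr/bin/perl
        use strict;
        use warnings;

        our $VERSION = '1.02';    # bump this every time you edit the script

        # Record start time, process ID, script version, path and file
        # modification time for every request handled.
        open(my $log, '>>', '/tmp/myscript-debug.log') or die "log: $!";
        printf $log "%s pid=%d version=%s script=%s mtime=%s\n",
            scalar(localtime), $$, $VERSION, $0,
            scalar(localtime((stat($0))[9]));
        close($log);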

    The mod_perl module was mentioned in a previous post. This and others can cause your script to be loaded and kept in memory, effectively becoming a subroutine that is executed over and over for each request, rather than running your script from disk for each request. This is caching of a different sort and seems a likely explanation for the behavior you have described. HTTP headers and meta tags in the produced HTML will not affect this sort of caching.
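
    A rough way to see whether that is what's happening: have the script report whether it is running under mod_perl, and keep a counter that can only survive between requests if the interpreter stays in memory:

        # mod_perl sets the MOD_PERL environment variable for each request
        my $under_mod_perl = exists $ENV{MOD_PERL} ? $ENV{MOD_PERL} : 'no';

        # A package variable persists across requests only when the script
        # is kept loaded (e.g. under ModPerl::Registry); with plain CGI it
        # starts over at 1 on every request.
        our $request_count;
        $request_count++;

        print "Content-Type: text/plain\r\n\r\n";
        print "mod_perl: $under_mod_perl\n";
        print "requests handled by this process: $request_count\n";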

      It looks like mod_perl is installed...When reloading apache, I see:
      "Apache/2.2.9 (Debian) DAV/2 SVN/1.5.1 PHP/5.2.6-1+lenny8 with Suhosin +-Patch mod_ssl/2.2.9 OpenSSL/0.9.8g mod_perl/2.0.4 Perl/v5.10.0 confi +gured -- resuming normal operations"
      in the logs.

      Of course I know nothing about mod_perl and very little about how the server is configured. I've inherited the server from a previous co-worker.

      I'm going to try to do a little research on mod_perl, but in the meantime, is there anything I should look for if this is the problem? i.e. configuration?

        Mod_perl improves performance by loading the perl interpreter and your program once and then keeping these in memory and using them to respond to several requests. Configuration parameters determine how many requests each process or thread handles.
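
        On Debian the process-pool settings typically live in /etc/apache2/apache2.conf; for the prefork MPM (shown here as an assumption - your server may use a different MPM) the block looks roughly like the stock defaults below. MaxRequestsPerChild controls how long each child lives, which in turn bounds how long an old copy of your program can linger in memory.

            <IfModule mpm_prefork_module>
                StartServers          5
                MinSpareServers       5
                MaxSpareServers      10
                MaxClients          150
                # Requests a child handles before it exits and is replaced
                # by a fresh process (0 = never recycle)
                MaxRequestsPerChild   0
            </IfModule>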

        If your server has a pool of processes/threads handling requests, each started at a different time, then each might have a different version of your program loaded. When a new request arrives at the server, it will be passed to one of the available processes/threads. If the server is busy, which process/thread handles the next request will be quite random. Thus, it will sometimes be handled by a process/thread running a recent version of your program and sometimes by one running an old version.

        If you stop the server, all processes/threads will be stopped. When you restart it, all new processes/threads will load the then current version of your program. Thus a full shut down and restart of the apache server should solve your problem.

        If your users can't tolerate the service disruption, then you can do a "graceful" restart. This allows current processes/threads to finish handling their current requests but then they stop and new processes/threads are started and these will load the current version of your program.
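
        On a Debian box with the standard apache2 packaging, the two variants look something like this:

            # Full stop/start: every child is replaced immediately,
            # but requests in progress are cut off
            /etc/init.d/apache2 restart

            # Graceful restart: children finish their current requests,
            # then exit and are replaced with fresh processes
            apache2ctl graceful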

Re^3: apache/perl caching problem
by Anonymous Monk on Apr 23, 2010 at 20:37 UTC
    mod_perl?