You could try using $query->param('action') instead and see if that gives a different result.
Here's another confusing issue: everyone I've seen or talked to recommends one of the two common ways of pulling in the parameters, as a hash or as a list, each with their own "religion" about which is right.
- Is there any benefit (performance gain, security) to using one over the other?
- What if there are identically-named keys passed in the URI, with different values?
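On the duplicate-key question, here's a small sketch of how CGI.pm treats repeated keys in each style (the query string is made up for illustration). In list context, param() returns every value; the Vars hash collapses duplicates into a single string joined with "\0":

```perl
use CGI;

# Hypothetical query string with a duplicated key:
my $q = CGI->new('action=edit;action=delete;id=42');

# List style: every value for the key is preserved.
my @actions = $q->param('action');      # ('edit', 'delete')

# Hash style: CGI.pm's Vars joins duplicate values with "\0".
my %vars = $q->Vars;
my @both = split /\0/, $vars{action};   # ('edit', 'delete')
```

So with the hash style, identically-named keys don't overwrite each other, but you do have to know to split on "\0" to get the extra values back.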
I've tried both, and the hash method seems to be about 200 µs faster per request. Nothing substantial, but over the course of 40,000 hits a day, that could be enough to have an effect.
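If you want numbers on your own box, the core Benchmark module can compare the two styles directly. This is only a sketch, and the query string is invented:

```perl
use Benchmark qw(cmpthese);
use CGI;

my $q = CGI->new('action=edit;id=42;page=3');

cmpthese(100_000, {
    # Hash style: copy all params into %vars, then look one key up.
    hash => sub { my %vars = $q->Vars; my $x = $vars{action} },
    # List style: ask param() for just the key you need.
    list => sub { my $x = $q->param('action') },
});
```

cmpthese prints a rate table for the two subs, so you can see whether the difference matters for your traffic rather than trusting a single stopwatch run.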
I've managed to gut the original code down to the basics (no graphics, but the same identical structure), and the problem with the "caching" of $vars{action} seems to have subsided. But there's a new side effect: many stale MySQL connections left open. I'm explicitly calling $dbh->disconnect;, but the connections don't seem to be freed.
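One common culprit under mod_perl is Apache::DBI: if it's loaded (often via "use Apache::DBI;" in a startup.pl), it overrides DBI->connect to cache one handle per httpd child and turns $dbh->disconnect into a no-op, so what looks like leaked connections may actually be intentional persistent ones, one per child. A sketch of the pattern, with a made-up DSN:

```perl
use DBI;

# With Apache::DBI loaded ahead of DBI, identical connect arguments
# return the same cached handle within each httpd child process.
my $dbh = DBI->connect(
    'dbi:mysql:database=mydb;host=localhost',   # hypothetical DSN
    'user', 'password',
    { RaiseError => 1, AutoCommit => 1 },
);

# ... run queries ...

$dbh->disconnect;   # under Apache::DBI this is a no-op; the handle persists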
Lastly, I managed to hook a very crude timer into the page using Time::HiRes, so I could test the timing of refreshes. Something very curious started to occur: some reloads of the same page report results like:
Elapsed time for vars was: 25.304 µsec's
While other successive reloads will produce:
Elapsed time for vars was: 5321462136156.284 µsec's
Is this time coming from one of the older httpd processes in the queue that hasn't seen the new data yet? The difference is so drastic that that's the only thing I can come up with. The timer I've got looks like this:
use Time::HiRes qw(gettimeofday tv_interval);
my $t0 = [gettimeofday];              # Time::HiRes (start time)
# Draw page, do SQL query here
my $t1 = [gettimeofday];              # end time
my $elapsed = tv_interval($t0, $t1);  # elapsed seconds, as a float
$elapsed = $elapsed * 1_000_000;      # seconds -> microseconds
print p("Elapsed time for vars was: $elapsed µsec's\n");
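If the absurd numbers are coming from a stale $t0 hanging around in a long-lived httpd child (the classic "my variable won't stay shared" closure problem under Apache::Registry), keeping the timer lexically scoped inside the request handler guarantees a fresh start time on every hit. A minimal sketch, assuming the page code can be moved into a sub; handle_request is a made-up name:

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

sub handle_request {
    my $t0 = [gettimeofday];    # start time, fresh for every request
    # ... draw page, run SQL query ...
    my $t1 = [gettimeofday];    # end time
    my $elapsed_us = tv_interval($t0, $t1) * 1_000_000;
    return sprintf "Elapsed time for vars was: %.3f usec\n", $elapsed_us;
}
```

Because $t0 is declared inside the sub, no copy of it can be captured once at compile time and reused across requests, which is one way those trillion-microsecond readings can show up.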
I'm going to have to cut this down into smaller sections and add one line back at a time, to see where this is failing.
Boggle. |