http://qs1969.pair.com?node_id=256457


in reply to Speeding up commercial Web applications

I doubt you'd see a huge speedup by hacking the code to eliminate globals and uses of local where my captures what was intended. I also doubt there is such a thing as a smooth upgrade from this product to one that adheres more to the Perl5 (or, by the time it comes out, Perl6) way of doing things, beyond simply preserving or transforming the underlying data. The code involved in an upgrade is likely to be all new, assuming it provides a performance boost of the order you're seeking. As to the mod_perl approaches, they're unlikely to help unless starting the Perl interpreter is the problem: if it now takes 1 second to start the interpreter, 8 seconds to gather and organize the data, and another second to generate a page, then under PerlRun it might go down to .2 seconds of startup time, 8 seconds to collate, and a second to generate the page.[1]

The best thing you could do is profile your app to see where it's spending the most time, to get an idea of where you might start optimizing. There is no point spending several programmer-weeks optimizing a little routine that accounts for .02% of the script's running time (and be sure to point out that time you spend on this is time not spent on other work -- that must be taken into account when weighing any of the options here). Search for "profiling" on CPAN; you might start by checking out Devel::DProf, although I'm sure there are profiling gurus around here who can help you make a good choice.
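In case it helps, here's roughly what a Devel::DProf run looks like from the command line; the script name is just a placeholder, and this assumes you can run the thing outside the web server:

    # run the application under the profiler; it writes a tmon.out file
    perl -d:DProf your_script.pl

    # then summarize the results, with subs sorted by time spent in them
    dprofpp tmon.out

The report will tell you which routines eat the most time, which is where any optimizing effort should go first.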

The best approach, and this is hard to say much about in a vacuum, is to look for places where the code is manifestly inefficient, although those are likely to be hard to track down. If you can, get the system to cache frequently accessed or hard-to-calculate data and generated pages. If it's doing the same beast of a computation for each request, and the results of that computation change relatively slowly and are valid for a wide range of users, then a cache will really speed it up; a rough sketch follows. Your solution might even be as simple as putting the web application behind a caching proxy, once you figure out how to balance your desire for up-to-date, personalized data against performance.
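By in-process caching I mean nothing fancier than this minimal sketch, where collate_report() and the five-minute lifetime are stand-ins for whatever the real expensive step and acceptable staleness turn out to be:

    my %cache;       # key => [ expiry time, cached result ]
    my $ttl = 300;   # treat results as fresh for five minutes

    sub cached_report {
        my ($key) = @_;
        my $hit = $cache{$key};
        return $hit->[1] if $hit && $hit->[0] > time;

        my $result = collate_report($key);      # the slow computation
        $cache{$key} = [ time + $ttl, $result ];
        return $result;
    }

Under plain CGI that hash dies with the process, of course, so you'd want to park the cache somewhere persistent -- a file via Storable, a database table, or the caching proxy mentioned above.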

As to inefficient algorithms, a really obvious case would be a frequently called subroutine that searches an array, something like

    # ok, yeah, this is perl5ey
    sub in_array {
        my $foo = shift;
        return grep $_ eq $foo, @global_array;
    }
but I think it's unlikely you'll find too many places where (a) such a thing exists and (b) you could fix it without really modifying the code.
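If you did find one, the usual fix is a hash lookup instead of a scan -- a sketch along those lines, assuming @global_array changes rarely enough that rebuilding the index (%seen here) isn't itself a cost:

    # build the index once; rebuild it whenever @global_array changes
    my %seen;
    @seen{@global_array} = ();

    sub in_array {
        my $foo = shift;
        return exists $seen{$foo};   # constant-time lookup, no scan
    }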

In the end, the thing to do may be to upgrade the software and spend your time figuring out how to migrate the data.

[1] The actual numbers are bogus, but the point is, I hope, clear enough.