Re: opcode caching / speed-up app
by moritz (Cardinal) on Sep 14, 2013 at 11:20 UTC
The perlperf manual page has some advice.
In my opinion, the most important advice is to profile first, so that you know where the slow parts are. Devel::NYTProf is the best profiler I've ever used, and I can highly recommend it.
If your application is backed by a database, it might also be worth investigating whether any indexes are missing. Most database servers have the capability to log slow queries, which you should enable and act upon.
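For example, with MySQL/MariaDB (an assumption — other servers have equivalent settings), the slow query log can be enabled with a small config fragment like this:

```ini
# my.cnf -- assumes MySQL/MariaDB; path and threshold are illustrative
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time     = 1    # log queries that take longer than 1 second
```

Every query that crosses the threshold lands in the log; those are your first candidates for a missing index.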
| [reply] |
Thanks moritz. I tried NYTProf for the first time two days ago and it seemed pretty good. The 'database' is actually text files: there are 3-4 XML documents used within the application, which XML::Simple opens and parses. I appreciate the response.
| [reply] |
The 'database' is actually text files: there are 3-4 XML documents used within the application, which XML::Simple opens and parses. I appreciate the response.
So it sounds like you're running a CGI script that gets compiled on every request and also loads and parses its entire data set every time. That may well be the problem, and something like SQLite could bring a big boost there. And/or, as others mentioned, FastCGI.
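A minimal sketch of the SQLite idea, using DBI with DBD::SQLite (both from CPAN); the database filename, table, and column names here are made up for illustration. Load the records (e.g. what you currently parse out of the XML) once, then each request runs a small indexed query instead of re-parsing everything:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# 'app.db' is a placeholder path; the schema is invented for this example.
my $dbh = DBI->connect('dbi:SQLite:dbname=app.db', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do('CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT, value TEXT)');
$dbh->do('CREATE INDEX IF NOT EXISTS idx_items_name ON items (name)');

# One-time load: imagine one execute() per record pulled from your XML files.
my $ins = $dbh->prepare('INSERT INTO items (name, value) VALUES (?, ?)');
$ins->execute('widget', '42');

# Per-request lookup: an indexed point query instead of a full re-parse.
my ($value) = $dbh->selectrow_array(
    'SELECT value FROM items WHERE name = ?', undef, 'widget');
print "widget => $value\n";
```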
| [reply] |
Re: opcode caching / speed-up app
by tobyink (Canon) on Sep 14, 2013 at 14:28 UTC
Given that you say it's an intranet app, I'll assume it's web-based. There are essentially two ways Perl can power a web-based app: via CGI, or via some sort of persistent mechanism (FastCGI, mod_perl, Perl-native web servers).
If you're using a persistent mechanism, a single Perl process (and thus a single compilation phase) serves multiple (potentially millions of) requests. So speeding up the compilation phase serves no purpose at all.
If you're not using a persistent mechanism, then a new Perl process is spawned for every request, which has to parse and compile your script, along with all the modules you're using. In this case an opcode cache would give you some extra performance; however, switching to a persistent mechanism would give you a far bigger boost.
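A minimal sketch of what the persistent version looks like with CGI::Fast (from CPAN, which pulls in FCGI) — this assumes your web server is configured for FastCGI; the script body and config hash are placeholders. The key point is that everything above the loop runs once per process, not once per request:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;   # CPAN; requires a FastCGI-enabled web server in production

# Compilation and data loading happen ONCE, outside the loop.
# This is where you'd parse your XML files, so the cost is paid per process,
# not per request.
my $config = { loaded_at => time };

# The loop then serves many requests from the same process.
while (my $q = CGI::Fast->new) {
    print $q->header('text/plain');
    print "Served by PID $$, data loaded at $config->{loaded_at}\n";
}
```

Run outside a FastCGI server, FCGI falls back to handling a single plain-CGI request, so the same script still works under ordinary CGI while you set things up.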
use Moops; class Cow :rw { has name => (default => 'Ermintrude') }; say Cow->new->name
| [reply] |
Re: opcode caching / speed-up app
by TJPride (Pilgrim) on Sep 14, 2013 at 18:53 UTC
Sounds to me like the major issue is probably the XML parsing. Depending on what you're doing, you may want to switch to a relational database. Alternatively, you can just add more servers and distribute the load.
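If moving to a database is too big a step right away, one cheap stop-gap (assuming the XML files change rarely) is XML::Simple's own Cache option, which memoizes the parsed tree with Storable in a .stor file alongside the XML, so an unchanged file isn't re-parsed on every run. The filename here is a placeholder:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Simple qw(XMLin);   # the CPAN module the app already uses

# 'data.xml' is illustrative. On the first call XML::Simple parses the file
# and writes a Storable cache next to it; later calls read the cache as long
# as the XML file is unmodified.
my $ref = XMLin('data.xml', Cache => ['storable']);
```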
Is this a secret, or can you link us to the actual app so we can get a better idea of where the hang-up may be and how you can improve things?
| [reply] |
Sorry, I won't be able to provide a link to it. I appreciate the advice. I'll look more into profiling, but I'm sure some sort of FastCGI-like implementation will be needed at some point. Would you happen to have a good (simplified) link on how to use FastCGI? The apps are tested and used on Windows and later installed on both Windows and Linux. Thanks again.
| [reply] |
The work I do is mostly low-volume, high-processor utility runs, and in the instances where I have to deal with high-volume traffic I use PHP instead (yes, this is a Perl site, but both languages have their strengths and weaknesses, imho). Someone more versed in pure Perl will have to supply you with info on FastCGI.
| [reply] |
Re: opcode caching
by ww (Archbishop) on Sep 14, 2013 at 11:30 UTC
| [reply] |