in reply to Gathering data on Profiling woes

I can tell you two common mistakes people make when profiling. The first is measuring CPU time in an application that is I/O bound. Devel::DProf shows CPU time by default, and that's pretty much useless for an application that uses a database or does other slow activities that are light on CPU.
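
To see the difference, compare dprofpp's default report with its elapsed-time one (the script name here is just a placeholder):

    perl -d:DProf app.pl   # run under the profiler; writes tmon.out
    dprofpp                # default report: user+system (CPU) time
    dprofpp -r             # same data, reported as elapsed wall-clock time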

The other problem I see is when people try to profile mod_perl apps but load their code before initializing the debugger. If you load code during startup in mod_perl, you have to start the debugger first, because the profiler uses the debug hooks. If you don't do that, you only get back info on the code loaded later, and it looks wrong and useless. (This is all described in the mod_perl docs.)
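
For the record, the ordering looks roughly like this in a startup.pl (a sketch only; My::App is a stand-in for your own modules, and the mod_perl docs are the authoritative reference):

    # startup.pl (sketch): the important part is the ordering
    use Apache::DB ();
    Apache::DB->init;    # turn on the debug hooks the profiler relies on

    # only now load application code, so it shows up in the profile
    use My::App ();      # placeholder for your own modules
    1;

If you're using Apache::DProf, this is the same ordering its docs describe, if I remember right.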

Re^2: Gathering data on Profiling woes
by darrellb (Acolyte) on Nov 02, 2005 at 00:50 UTC
    (I work with dws)

    > The first is measuring CPU time in an application that is I/O bound.

    *nod* Our quickie-profiling command defaults to wall time instead of CPU time:
    perlp is a function:

        perlp () {
            perl -d:DProf "$@"
            dprofpp -r -O 30 > "${1}.prof"
            cat "${1}.prof"
        }
    > try[ing] to profile mod_perl apps but load their code before initializing the debugger.

    We can't even profile our mod_perl code when it's run in a non-mod_perl context (while it's being exercised by one of our unit tests, for instance, or driven by a targeted one-off script).

      Not sure what you're getting at in that last bit. You don't need to run your mod_perl code in a non-mod_perl environment to profile it. That isn't really related to not initializing the debugger under mod_perl when profiling, though, which is a very common mistake.

        You don't need to run your mod_perl code in a non-mod_perl environment to profile it.

        We have a testing framework that lets us run our handler code via ordinary .t files. This makes it convenient for us to do isolated functional, unit, and performance tests. The latter are known not to reflect deployed reality, but we have other ways of getting data there. Our immediate need is to get profiling working reliably on our .t files. I have my eye on code coverage, but that's a step further out.
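
        In practice that means pointing the perlp helper above at a test file, e.g. (test name made up):

            perlp t/some_handler.t
            # writes and prints t/some_handler.t.prof: top 30 subs by wall-clock time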