in reply to Re: Gathering data on Profiling woes
in thread Gathering data on Profiling woes

(I work with dws)

> The first is measuring CPU time in an application that is I/O bound.

*nod* Our quickie-profiling command defaults to wall-time rather than CPU time:

    perlp () { perl -d:DProf "$@"; dprofpp -r -O 30 > "${1}.prof"; cat "${1}.prof"; }
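For readability, here is the same helper written out as a standalone function. This is a sketch, assuming bash; Devel::DProf and dprofpp must actually be installed to profile anything, so the commands are echoed (dry run) rather than executed, which makes the recipe visible without the profiler present:

```shell
# Dry-run sketch of the quickie wall-time profiling helper (assumes bash;
# Devel::DProf and dprofpp are needed to run the real thing).
perlp() {
    # Run the script under Devel::DProf (which writes tmon.out) ...
    echo perl -d:DProf "$@"
    # ... then report wall time (-r) for the top 30 subs (-O 30),
    # saving the report next to the script.
    echo "dprofpp -r -O 30 > ${1}.prof"
}

perlp myscript.pl --verbose
```

Dropping the two `echo`s gives the real helper; `-r` is what makes the report wall-time instead of the default CPU-time.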
> try[ing] to profile mod_perl apps but load their code before initializing the debugger.

We can't even profile our code that runs under mod_perl when we exercise it in a non-mod_perl context (for instance, while it is executed by one of our unit tests, or driven by a targeted one-off script).

Re^3: Gathering data on Profiling woes
by perrin (Chancellor) on Nov 02, 2005 at 04:42 UTC
    Not sure what you're getting at in that last bit. You don't need to run your mod_perl code in a non-mod_perl environment to profile it. That isn't really related, though, to the common mistake of not initializing the debugger under mod_perl when profiling.

      You don't need to run your mod_perl code in a non-mod_perl environment to profile it.

      We have a testing framework that lets us run our handler code via ordinary .t files. This makes it convenient for us to do isolated functional, unit, and performance tests. The latter are known to not reflect deployed reality, but we have other ways of getting data there. Our immediate need is to get profiling working reliably on our .t files. I have my eye on code coverage, but that's a step further out.
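      In that setup, profiling a handler through a test is just the same two steps applied to the .t file. A dry-run sketch (the file name t/handler.t is hypothetical, and Devel::DProf/dprofpp are assumed installed; the commands are echoed so the recipe is visible without the profiler):

```shell
# Dry-run sketch: profile handler code through an ordinary .t file.
# t/handler.t is a hypothetical test file; the echoes show the commands
# that would be run if Devel::DProf and dprofpp are installed.
profile_t() {
    echo "perl -Ilib -d:DProf ${1}"       # run the test under the profiler
    echo "dprofpp -r -O 30 > ${1}.prof"   # wall-time report, top 30 subs
}

profile_t t/handler.t
```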