in reply to Re: Using Devel::Profile output to know what to do next
in thread Using Devel::Profile output to know what to do next

This is my first time using a profiler. Can you say more? All I did was
perl -d:Profile test.pl
using Devel::Profile. What would be better?

Re^3: Using Devel::Profile output to know what to do next
by graff (Chancellor) on Jun 08, 2005 at 03:25 UTC
    If instead of "Devel::Profile" (which is a non-core CPAN module) you used "Devel::DProf" (a core module included with every perl installation), you would add one extra step after running your CDBI app: running the separate utility "dprofpp" (also included with every perl installation) to get the pretty listing of summary statistics for all the sub calls. It's when you run "dprofpp" ("pretty-print") that you can use command-line options to choose how the entries are sorted.
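    A typical run might look something like this (the script name is just a placeholder; see the dprofpp man page for the full set of sorting options):

        # profile the run; writes tmon.out in the current directory
        perl -d:DProf yourscript.pl

        # report the 15 most expensive subs (default sort is by user+system CPU time)
        dprofpp -O 15 tmon.out

        # same report, sorted by elapsed (wall-clock) time instead
        dprofpp -r -O 15 tmon.out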

    Anyway, you say it's a big app with CDBI all through it, but you are considering looking at a straight DBI approach to see if that will speed things up. Looking at your numbers, my initial guess is that going to straight DBI, and managing the layout of your tables with your own data structures (rather than relying on the CDBI OO stuff to do this for you) is likely to cut down noticeably on the overall runtime...

    (... unless of course you happen to do a really poor job of factoring out the CDBI stuff.)

    I've never used CDBI myself, so I don't have a feel for its runtime-overhead vs. programmer-efficiency trade-offs. Maybe it's the kind of thing that makes great sense as a prototyping tool, or as a production solution for jobs that don't involve really heavy loads (e.g. jobs with far fewer rows and columns than yours).

    Still, it's up to you to figure out whether you think a rewrite without CDBI is going to be worthwhile, because we don't know how complicated your app is, or what other constraints there are.

    If you need some empirical evidence before committing to a major rewrite, a worthwhile test would be a very simple app that goes through the same amount of database content, with one or more queries that at least come close to what the real app is doing, but doesn't do much else. It just has to be simple and quick to write both with and without CDBI, so you can benchmark the two approaches.

    (I'll bet that with queries returning 200K rows and 10 columns per row from your database, you'll see a big difference when you take away the 2 million calls to that bunch of CDBI functions.)
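    A rough sketch of such a benchmark might look like the following. The Class::DBI class, table, and column names here are made up for illustration, and the connection details are placeholders:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Benchmark qw(timethese);
        use DBI;
        use My::CDBI::Widget;   # hypothetical Class::DBI class for the same table

        my $dbh = DBI->connect('dbi:mysql:mydb', 'user', 'pass',
                               { RaiseError => 1 });

        timethese(10, {
            # fetch every row as a CDBI object and touch each column accessor
            cdbi => sub {
                for my $w (My::CDBI::Widget->retrieve_all) {
                    my @vals = map { $w->$_ } My::CDBI::Widget->columns;
                }
            },
            # fetch the same rows as plain array refs with straight DBI
            dbi => sub {
                my $rows = $dbh->selectall_arrayref(
                    'SELECT id, name, price FROM widget');
                for my $row (@$rows) {
                    my @vals = @$row;
                }
            },
        });

    Run each sub enough times (or against a big enough table) that the per-call overhead actually shows up in the totals.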

      I thought we were talking about Devel::Profiler. It uses dprofpp to display results.
Re^3: Using Devel::Profile output to know what to do next
by perrin (Chancellor) on Jun 08, 2005 at 03:50 UTC
    It looks like you picked the wrong profiler. You should use either Devel::DProf or Devel::Profiler, not Devel::Profile. The problem with Devel::Profile is that it measures CPU time, not the "real" time that has elapsed. Waiting hours for a database query might not take much CPU time at all. If you use a profiler like the two I mentioned, which let you sort by wall-clock (real) time, you will see how much time was really spent waiting for your queries to execute.
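    To see why the distinction matters, here is a toy script (not from the thread) that sleeps for two seconds as a stand-in for a slow query, does a little real computation, and then prints both elapsed and CPU time; a CPU-time profile would barely notice the wait:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Time::HiRes qw(time);

        my $wall0 = time;
        my ($user0, $sys0) = (times)[0, 1];

        sleep 2;                            # "waiting on the database"
        my $x = 0;
        $x += sqrt($_) for 1 .. 1_000_000;  # a little real CPU work

        my ($user, $sys) = (times)[0, 1];
        printf "wall: %.2fs   cpu: %.2fs\n",
            time - $wall0, ($user - $user0) + ($sys - $sys0);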
      No, Devel::Profile measures wall time, not CPU time.