BUU has asked for the wisdom of the Perl Monks concerning the following question:

Basically, I run dprofpp -u -p index.pl, and get a random set of functions every time. So one run looks like:
%Time ExclSec CumulS #Calls sec/call Csec/c Name
 25.1   0.020  0.070      6   0.0033 0.0116  main::BEGIN
 12.5   0.010  0.010     17   0.0006 0.0006  strict::bits
 12.5   0.010  0.010     24   0.0004 0.0004  Carp::caller_info
 12.5   0.010  0.010      3   0.0033 0.0033  QueryParse::BEGIN
 12.5   0.010  0.020      4   0.0025 0.0050  Storable::BEGIN
 12.5   0.010  0.010      4   0.0025 0.0025  File::Spec::Win32::BEGIN
 0.00   0.000  0.010     13   0.0000 0.0008  strict::import
 0.00   0.000  0.010      3   0.0000 0.0033  CGI::Carp::BEGIN
 0.00   0.000 -0.000     11   0.0000      -  Exporter::import
 0.00   0.000 -0.000      1   0.0000      -  File::Spec::BEGIN
 0.00   0.000 -0.000      5   0.0000      -  Cwd::BEGIN
 0.00   0.000 -0.000      1   0.0000      -  base::BEGIN
 0.00   0.000 -0.000      1   0.0000      -  base::import
 0.00   0.000 -0.000      1   0.0000      -  UNIVERSAL::isa
 0.00   0.000 -0.000      1   0.0000      -  warnings::BEGIN
While the next one looks like:
%Time ExclSec CumulS #Calls sec/call Csec/c Name
 14.3   0.010  0.010      3   0.0033 0.0033  QueryParse::BEGIN
 14.3   0.010  0.010      1   0.0100 0.0100  Exporter::as_heavy
 14.3   0.010  0.010      5   0.0020 0.0020  Cwd::BEGIN
 14.3   0.010  0.010      2   0.0050 0.0050  lib::BEGIN
 14.3   0.010  0.010      2   0.0050 0.0050  Carp::longmess
 14.3   0.010  0.020      4   0.0025 0.0050  File::Spec::Win32::BEGIN
 0.00   0.000  0.060      6   0.0000 0.0100  main::BEGIN
 0.00   0.000 -0.000     13   0.0000      -  strict::import
 0.00   0.000 -0.000     17   0.0000      -  strict::bits
 0.00   0.000  0.020      3   0.0000 0.0066  CGI::Carp::BEGIN
 0.00   0.000  0.010     11   0.0000 0.0009  Exporter::import
 0.00   0.000 -0.000      1   0.0000      -  File::Spec::BEGIN
 0.00   0.000 -0.000      1   0.0000      -  base::BEGIN
 0.00   0.000 -0.000      1   0.0000      -  base::import
 0.00   0.000 -0.000      1   0.0000      -  UNIVERSAL::isa

I'm assuming that this is because it's not spending enough time in any particular subroutine to really get accurate data, so the top times are more or less random, depending on what else the processor is doing whenever it hits a particular sub?

Replies are listed 'Best First'.
Re: Dprofpp, random subroutines
by BrowserUk (Patriarch) on Oct 07, 2003 at 08:57 UTC

    With a total runtime of 6/100ths of a second, I'd pat myself on the back if I were you. Your optimisation has paid off handsomely. A job well done!

    And they say it isn't worth the effort:)


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail

      What optimisation? This is *profiling* a script, which is one of the methods used to test whether a script needs optimisation. Obviously in its current form it doesn't really, but I would still like to be sure of exactly what is going on.

        Sorry for my sense of humour. I would say that you do know exactly what is going on. The explanation and reasoning that you gave in your original post was probably spot on.

        The differences between the routines are so small that the timings are simply not able to accurately arbitrate between them. Which order they come up in is probably more influenced by whether you moved your mouse, received an incoming IP packet, adjusted the volume on your PC, or even cosmic rays, rather than by any detectable differences in the time they took.

        If you're on 5.8.1, then it would even be possible that the difference is due to a combination of the randomisation that is now applied to the hashing algorithm and the fairly recent 'stable' sorting methods.

        If (as I assume) the data is maintained internally in a hash and then sorted for presentation, then the stable sort will tend to leave "equal" elements in their original order, but the original order that keys are returned from a hash will vary from one run to another, hence the randomness you are seeing. Or one possible explanation for it, anyway.
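        To make that concrete, here's a minimal sketch (the hash name and sub list are just made up for illustration, not dprofpp's actual internals). With a stable sort, subs whose measured times tie keep whatever order `keys` happened to return them in, and under hash randomisation that order can differ between runs:

            #!/usr/bin/perl
            use strict;
            use warnings;
            use sort 'stable';    # stable mergesort, available from 5.8

            # Hypothetical stand-in for the profiler's per-sub data:
            # several subs whose exclusive times are identical (all ties).
            my %excl_sec = (
                'strict::import'   => 0.000,
                'Exporter::import' => 0.000,
                'base::BEGIN'      => 0.000,
                'UNIVERSAL::isa'   => 0.000,
            );

            # A stable sort keeps equal elements in their input order, but
            # `keys` may return them in a different order on each run, so
            # the tied entries can appear in a different order each time.
            my @report = sort { $excl_sec{$b} <=> $excl_sec{$a} } keys %excl_sec;

            print "$_\n" for @report;

        Run it a few times (particularly on a perl with hash randomisation) and the four tied names may shuffle, while a genuinely faster or slower sub would stay put.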
