That certainly helps, though I would still like to see numbers!
My guess as to why no one has written a module for that is that it is of no great concern.
For example, I have a generated Perl file that was created to optimise a very heavily nested set of loops (over 100 of them) by unrolling those loops.
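To give a flavour of the sort of generation involved, here's a toy sketch only; the Xsub/Ln naming is borrowed from the tail of the real file shown further down, everything else is made up:
use strict;
use warnings;

# Emit N explicit, labelled calls instead of a runtime loop:
# the generated file trades file size for loop overhead.
my $n = 4;    # the real generator unrolls far deeper than this
print "sub Xsub {\n";
print "    my( \$s, \$e, \$p ) = \@_;\n";
print "    X$_: L$_( \$s, \$e, \$p );\n" for reverse 1 .. $n;
print "}\n";
print "Xsub( 99, 88, 77 );\n";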
The generated file is just under 4MB in size:
07/06/2018 01:44 3,959,242 gened.pl
1 File(s) 3,959,242 bytes
And consists of just over 50kloc:
C:\test>wc -l gened.pl
50405 gened.pl
When run under Devel::Trace, which logs every line executed, the trace log comes to 2.3 MB:
C:\test>perl -d:Trace gened.pl >log 2>&1
C:\test>dir log
07/06/2018 01:49 2,334,425 log
1 File(s) 2,334,425 bytes
I just tacked these two lines onto the bottom of the file:
sub mem2{ `tasklist /nh /fi "PID eq $$"` =~ m[(\S+ K)$]; }
print mem2;
And when run, it produces:
[ 1:52:14.81] C:\test>tail gened.pl
X4: L4( $s, $e, $p );
X3: L3( $s, $e, $p );
X2: L2( $s, $e, $p );
X1: L1( $s, $e, $p );
}
Xsub( 99, 88, 77 );
sub mem2{ `tasklist /nh /fi "PID eq $$"` =~ m[(\S+ K)$]; }
print mem2;
[ 1:53:47.89] C:\test>gened
187,124 K
So 50kloc of code compiles to <200MB of memory.
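(If you want the same number on a Linux box, here's a minimal sketch of the same trick, assuming a /proc filesystem; note that VmRSS is the resident set size, which won't exactly match tasklist's working-set figure:)
use strict;
use warnings;

# Same idea as mem2 above, but for Linux: pull the resident set size
# of the current process from /proc/self/status.
sub mem_linux {
    open my $fh, '<', '/proc/self/status' or return 'n/a';
    while ( <$fh> ) {
        return "$1 kB" if m[^VmRSS:\s+(\d+)\s+kB];
    }
    return 'n/a';
}

print mem_linux(), "\n";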
Unless you're using Moose, the size of the code usually pales into insignificance next to the size of the data. And the great benefit of properly optimised debug blocks is that the condition test disappears from inner loops entirely, along with the runtime it would otherwise cost.
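That optimisation is Perl's compile-time constant folding; assuming the usual constant idiom (DEBUG and the loop body here are placeholders):
use strict;
use warnings;

use constant DEBUG => 0;    # flip to 1 to turn the diagnostics back on

for my $i ( 1 .. 1_000_000 ) {
    if ( DEBUG ) {    # whole block discarded at compile time when DEBUG is 0
        warn "iteration $i\n";
    }
    # ... real work ...
}
Running perl -MO=Deparse over that should show the block gone entirely from the compiled code, which is why the test costs nothing at run time.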
Not what you want to hear, but it might set your mind at rest about a few lines of debug.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use every day'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
In the absence of evidence, opinion is indistinguishable from prejudice.