PerlMonks  

RE: Schwartzian Transform vs. plain Perl

by jjhorner (Hermit)
on Jun 08, 2000 at 16:44 UTC ( [id://17065] )


in reply to Schwartzian Transform vs. plain Perl

I usually take my variable assignment out of the subroutines.

I took the hash generation:

map(($_,1), (1..10000));

out of the subroutines and here is what I got (I had to up the iterations to 100000!):

[08:35:03 jhorner@gateway scripts]$ ./20000608-2.pl
Benchmark: timing 100000 iterations of a, b...
         a:  2 wallclock secs ( 1.49 usr +  0.00 sys =  1.49 CPU) @ 67114.09/s (n=100000)
         b:  1 wallclock secs ( 1.64 usr +  0.00 sys =  1.64 CPU) @ 60975.61/s (n=100000)
[08:35:16 jhorner@gateway scripts]$

Does anyone see any benefit to forcing the hash generation each iteration, or does one declaration work just as well?
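For the curious, here is a minimal sketch of the shape I mean (the sub bodies are placeholders, not the original script's methods): the hash is built once, outside the subs handed to Benchmark, instead of being regenerated on every iteration.

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

# Hypothetical setup: build the hash once, outside the timed subs.
my %hash = map { $_ => 1 } (1 .. 10000);

timethese(100_000, {
    a => sub { exists $hash{5000} },   # placeholder for the first method
    b => sub { $hash{5000} },          # placeholder for the second method
});
```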

So, to support your theory, I got roughly the same times. To make sure, I upped it to 1000000 iterations:

Benchmark: timing 1000000 iterations of a, b...
         a: 15 wallclock secs (14.67 usr +  0.01 sys = 14.68 CPU) @ 68119.89/s (n=1000000)
         b: 16 wallclock secs (16.41 usr +  0.01 sys = 16.42 CPU) @ 60901.34/s (n=1000000)

I believe, and correct me if I'm wrong, that the small size of the values involved makes the dereferencing and indexing less CPU intensive. Perhaps if we tried it on larger key->value pairs, we would see more representative results.

The _EPP_ book, where I first saw the Schwartzian Transform, also notes that the best part of the Schwartzian Transform "is that it tends to be the fastest way to perform complicated sorts".
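For anyone who hasn't seen it, the transform is the decorate-sort-undecorate idiom: compute the sort key once per element, sort on the precomputed key, then strip the key back off. A small sketch, sorting words by length (an illustrative key, not from this thread):

```perl
use strict;
use warnings;

my @words = qw(pearl monk shrine transform);

my @sorted =
    map  { $_->[1] }                 # undecorate: keep the original value
    sort { $a->[0] <=> $b->[0] }     # sort on the precomputed key
    map  { [ length($_), $_ ] }      # decorate: compute the key once per item
    @words;

print "@sorted\n";   # monk pearl shrine transform
```

The win comes from computing the key once per element instead of once per comparison, which is why it tends to pay off most when the key is expensive.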

J. J. Horner
Linux, Perl, Apache, Stronghold, Unix
jhorner@knoxlug.org http://www.knoxlug.org/

Replies are listed 'Best First'.
RE: RE: Schwartzian Transform vs. plain Perl
by mikfire (Deacon) on Jun 08, 2000 at 17:50 UTC
    As long as both functions are doing the hash generation, my first estimation is that it won't change the relation between the methods. I could go through lots of mathematical gyrations, but I won't.

    Thinking about it a bit more, the first function to run may be penalized more than the second, depending on how aggressively perl reuses memory. The first call to initialize the hash will have to allocate memory from the system. After the my'd hash goes out of scope, the memory is marked as free, but perl doesn't give it back to the system. The next time the hash is allocated, perl may ( again, depending on how aggressive perl is ) just give it the same space. This should be faster than trying to malloc the same amount of space again.

    Given 100,000 iterations of each loop, I still do not think the penalty is going to be large enough to introduce any skew.

    Personally, I also try to do that stuff outside of the timing loop - I want to remove as many distractions as possible and make sure I am timing how fast the sort is, not how fast my machine can allocate memory or how effectively perl is reusing it.
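    To illustrate the point (names and sizes made up): the 'inside' sub pays for hash construction on every iteration, so its timing mixes allocation cost with the work under test, while 'outside' times only the lookup.

```perl
use strict;
use warnings;
use Benchmark qw(timethese);

my %prebuilt = map { $_ => 1 } (1 .. 1000);

timethese(1_000, {
    inside  => sub {
        my %h = map { $_ => 1 } (1 .. 1000);   # allocation is timed too
        my @k = keys %h;
    },
    outside => sub {
        my @k = keys %prebuilt;                # only the keys() call is timed
    },
});
```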

    mikfire
