in reply to Re: Unfortunately benchmarking things with $& isn't easy...
in thread Obtaining server name from UNC path
Hmm, perhaps. The way I was looking at it is that it's (a * k) / (b * k) (which I'm not arguing is correct, as I don't know; I'm merely explaining :-). If k = 1 when using eval and k = 1.1 when using a subref, the ratio stays the same. Maybe this isn't the correct analysis; if so, please enlighten me, but please without the "stupid" bit. I'm well aware of my own limitations. :-)
If the ratios of two algorithms would vary wildly from machine to machine...
I have to admit that I assumed you meant the rate per second. Now that I see what you mean, I concede my point is not correct.
Right figure, wrong conclusion. The slowdown isn't mainly caused by the use of $&, but the copying involved.
Wow. You are sooo right. If you look closely at my redo of theorbtwo's code and the code in bm_theorbtwo.pl, the assignment that is present is responsible for the difference. When I made sure that the combined version and the separate version were _exactly_ the same, the results were comparable. Thanks. And good point.
The costs of $&, and more so of $` and $', come in if you are using other regular expressions in your program for which you don't use $& and friends. But that's not what you are benchmarking.
Actually that was what I was trying to get at, if in a somewhat oblique way. :-) Anyway, it looks to me that the presence of $& doesn't in the end have much effect on the validity of the benchmark. Which is cool and interesting. Thanks, Abigail-II.
BTW, I assume

```perl
my $opts = '-5 \\\\\\\\foo\\\\bar\\\\baz.exe';
```

is because your shell is converting \\ to \?
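For what it's worth, that collapsing is easy to reproduce (a sketch; printf is used rather than echo because echo's backslash handling differs between shells):

```shell
# In double quotes the shell collapses each \\ to \ before the program
# sees the argument; in single quotes backslashes pass through untouched.
printf '%s\n' "\\\\foo\\\\bar"   # program receives: \\foo\\bar
printf '%s\n' '\\\\foo\\\\bar'   # program receives: \\\\foo\\\\bar
```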
Yves / DeMerphq
---
Software Engineering is Programming when you can't. -- E. W. Dijkstra (RIP)
Re: Unfortunately benchmarking things with $& isn't easy...
by Abigail-II (Bishop) on Aug 08, 2002 at 16:04 UTC