Re: perl process slower and slower when loop number increase
by hippo (Archbishop) on Jan 22, 2018 at 12:14 UTC
|
Notwithstanding the timekeeping effects already raised, I can speed up the useless perl script by about a factor of 2 by avoiding the C-ish loop syntax.
$ time perl -e 'for($i=0;$i<=1e7;$i++){}'
real 0m0.648s
user 0m0.640s
sys 0m0.005s
$ time perl -e 'for(0..1e7){}'
real 0m0.354s
user 0m0.348s
sys 0m0.002s
I don't have php installed so cannot compare. YMMV
Re: perl process slower and slower when loop number increase
by QM (Parson) on Jan 22, 2018 at 11:23 UTC
|
Running the tests like that mixes the startup time in with the loop time. Use a real benchmark module to time the actual code, and the time per iteration. If you want to count the startup time, do that too, with a loop around multiple startups.
There are probably other issues with this. Get some benchmarking wisdom to make sure you're not doing something awful.
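For instance, a minimal in-process sketch with the core Benchmark module might look like this (the loop bodies and the iteration count are just placeholders, not the exact test from the OP):
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Compare two loop styles in-process, so interpreter startup is not counted.
# cmpthese(-3, ...) runs each sub for about 3 CPU seconds and reports
# iterations per second plus a relative comparison.
cmpthese( -3, {
    c_style => sub { for ( my $i = 0; $i <= 1_000_000; $i++ ) { } },
    foreach => sub { for my $i ( 0 .. 1_000_000 ) { } },
} );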
But the killer is, you're not doing anything productive, just counting useless loops -- so all of the time is wasted on bookkeeping!
-QM
--
Quantum Mechanics: The dreams stuff is made of
|
"... Use a real benchmark module to time..."
Yes, sure - this is true. But I wonder what a sound way to get robust results would look like in this case.
Just an idea: what about IPC::Run? Build a harness for each command (Perl, PHP) and then call run on each harness under Benchmark?
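Something along those lines might look like this (an untested sketch; it assumes IPC::Run and the core Benchmark module are available, and uses plain run() calls rather than prebuilt harness objects):
use strict;
use warnings;
use IPC::Run qw(run);
use Benchmark qw(timethese);

# One external command per language; the loop bodies are only examples.
my %cmd = (
    perl => [ 'perl', '-e', 'for ($i = 0; $i <= 1e6; $i++) {}' ],
    php  => [ 'php',  '-r', 'for ($i = 0; $i <= 1e6; $i++) {}' ],
);

# Each timed sub spawns the interpreter, so startup time is included.
timethese( 10, {
    map { my $argv = $cmd{$_}; $_ => sub { run( $argv ) or die "run failed: $?" } }
        keys %cmd
} );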
Best regards, Karl
«The Crux of the Biscuit is the Apostrophe»
perl -MCrypt::CBC -E 'say Crypt::CBC->new(-key=>'kgb',-cipher=>"Blowfish")->decrypt_hex($ENV{KARL});'
|
You can measure startup time like this, by running a program that does nothing:
$ time perl -e ''
real 0m0.003s
user 0m0.000s
sys 0m0.000s
$ time php -r ''
real 0m0.023s
user 0m0.020s
sys 0m0.000s
Perl always has the faster startup time.
But with $i = 100000000, where Perl takes 7 seconds vs PHP's 1.6 seconds, the problem is not startup time.
The "problem" is in the Perl loop, or in some limit on the number of iterations that Perl "wants" to handle; that is why I am asking how to set that loop limit (maybe a system limit or some environment variable).
Note: this variant of the loop gives the same result:
$ time perl -e 'for($i=0;$i<=1000;$i++){for($j=0;$j<=1000;$j++){for($k=0;$k<=100;$k++){}}}'
real 0m7.867s
user 0m7.860s
sys 0m0.000s
$ time php -r 'for($i=0;$i<=1000;$i++){for($j=0;$j<=1000;$j++){for($k=0;$k<=100;$k++){}}}'
real 0m1.717s
user 0m1.712s
sys 0m0.000s
On my VPS the limit seems to be about 100k; beyond that, Perl starts slowing down.
Note: if we do not include the startup time in the code above, at 100k PHP and Perl have almost the same performance:
$ time perl -e 'for($i=0;$i<=100000;$i++){}'
real 0m0.010s
user 0m0.008s
sys 0m0.000s
$ time php -r 'for($i=0;$i<=100000;$i++){}'
real 0m0.030s
user 0m0.016s
sys 0m0.012s
|
for ($i=0; $i<1; $i++) {} is a better null baseline, because the parsing time would be essentially the same as for the real loop, and running one iteration of a loop (or 100, for that matter) is negligible.
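For example (commands only; timings will of course vary per machine):
$ time perl -e 'for ($i = 0; $i < 1; $i++) {}'     # startup + parse, loop body runs once
$ time perl -e 'for ($i = 0; $i < 1e7; $i++) {}'   # same parse cost, 1e7 iterations
Subtracting the first from the second isolates the loop cost better than comparing against an empty -e ''.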
Re: perl process slower and slower when loop number increase
by choroba (Cardinal) on Jan 22, 2018 at 15:27 UTC
|
Do you have gnuplot installed? Feel free to play with the following:
To do something in the loop, the code calculates the sum of all the numbers it loops over (the sum isn't used anywhere, so clever optimizers might still optimize it away).
You might need to remove some of the languages from the %run hash if you don't have them installed, you can also add some if you like. Compiled languages need an entry in the %prepare hash, too.
It includes the startup time and the compilation time, but you can easily exclude the latter as written in the comment.
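A stripped-down sketch of the idea (not the original script; the %run entries, the output file name, and the gnuplot call are assumptions, and the %prepare step for compiled languages is left out):
#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(time);

# One command template per interpreter; each program sums the numbers it
# loops over so the loop is not completely empty.
my %run = (
    perl => sub { [ 'perl', '-e', "\$s = 0; \$s += \$_ for 1 .. $_[0]" ] },
    php  => sub { [ 'php',  '-r', "\$s = 0; for (\$i = 1; \$i <= $_[0]; \$i++) \$s += \$i;" ] },
);

open my $out, '>', 'loop-times.dat' or die $!;
for my $count (map { 10 ** $_ } 4 .. 8) {
    my @row = ($count);
    for my $lang (sort keys %run) {
        my $start = time;
        system( @{ $run{$lang}->($count) } ) == 0 or die "$lang failed: $?";
        push @row, time - $start;          # includes startup and compile time
    }
    print {$out} "@row\n";
}
close $out;

# Then plot, e.g.:
#   gnuplot -p -e 'set logscale xy; plot for [i=2:3] "loop-times.dat" using 1:i with linespoints'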
Output on my machine: [image: plot of run time against iteration count for each language]
($q=q:Sq=~/;[c](.)(.)/;chr(-||-|5+lengthSq)`"S|oS2"`map{chr |+ord
}map{substrSq`S_+|`|}3E|-|`7**2-3:)=~y+S|`+$1,++print+eval$q,q,a,
|
Wow, thanks. After looking at your graph it is clearer: after 1e6, the loops start slowing down in every language, but not as extremely as in Perl and Ruby.
In my plot, PHP (v7.1) starts slowing down after 1e7.
So it is already close (I guess), and the question remains: what is that 1e6 (1M) number? A buffer? A cache? Or what?
And is there maybe a way to increase this limit?
Re: perl process slower and slower when loop number increase
by Dallaylaen (Chaplain) on Jan 22, 2018 at 12:02 UTC
|
The total execution time is startup_time + count * iteration_time. For counts like 10..10000 the loop time is negligible compared to reading the interpreter from disk, parsing the script, etc.
You can eliminate the startup time (assuming it's constant) by running several different big (10**6+) loops, then plotting time against count:
bash$ for i in `seq 10`; do echo $i; time perl -e 'for($i=0;$i<='$i'*1000000 ;$i++){}'; done
1
real 0m0.055s
user 0m0.048s
sys 0m0.000s
2
real 0m0.051s
user 0m0.048s
sys 0m0.000s
3
real 0m0.074s
user 0m0.068s
sys 0m0.000s
4
real 0m0.099s
user 0m0.088s
sys 0m0.004s
5
real 0m0.124s
user 0m0.116s
sys 0m0.000s
6
real 0m0.146s
user 0m0.140s
sys 0m0.000s
7
real 0m0.166s
user 0m0.164s
sys 0m0.000s
8
real 0m0.197s
user 0m0.184s
sys 0m0.000s
9
real 0m0.217s
user 0m0.204s
sys 0m0.004s
10
real 0m0.235s
user 0m0.232s
sys 0m0.000s
bash$ for i in `seq 10`; do echo $i; time php -r 'for($i=0;$i<='$i'*1000000 ;$i++){}'; done
1
real 0m0.034s
user 0m0.020s
sys 0m0.012s
2
real 0m0.036s
user 0m0.028s
sys 0m0.000s
3
real 0m0.035s
user 0m0.032s
sys 0m0.000s
4
real 0m0.040s
user 0m0.036s
sys 0m0.000s
5
real 0m0.048s
user 0m0.040s
sys 0m0.004s
6
real 0m0.055s
user 0m0.052s
sys 0m0.000s
7
real 0m0.065s
user 0m0.060s
sys 0m0.000s
8
real 0m0.071s
user 0m0.068s
sys 0m0.000s
9
real 0m0.080s
user 0m0.076s
sys 0m0.000s
10
real 0m0.087s
user 0m0.084s
sys 0m0.000s
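Plugging the user times above into that formula gives a rough estimate for Perl (a back-of-the-envelope sketch using the 1M and 10M data points):
# user times quoted above for 1e6 and 1e7 iterations of the empty Perl loop
my ($t_1m, $t_10m) = (0.048, 0.232);
my $per_iter = ($t_10m - $t_1m) / (1e7 - 1e6);     # ~2.0e-8 s, i.e. ~20 ns
my $startup  = $t_1m - $per_iter * 1e6;            # ~0.03 s
printf "%.1f ns per iteration, %.3f s startup\n", $per_iter * 1e9, $startup;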
What surprises me here is that PHP is much faster than Perl.
|
Not just PHP; xxxx vs Perl shows the same behaviour.
This loop test is maybe "useless" and "doing nothing", but why does only Perl slow down on "doing nothing" past a certain number?
For a "real useful app" - try parsing 1M lines of a single file - Perl is already faster than the others at reading, regex matching, text processing, etc., but this "loop number" is the problem.
The question is still: can that "certain number" be set?
Try:
time node -e 'for($i=0;$i<=1000000;$i++){}'
or use Java, C, etc.
For "doing nothing" they do not slow down; only Perl does.
Re: perl process slower and slower when loop number increase
by davido (Cardinal) on Jan 22, 2018 at 15:40 UTC
|
I cannot reproduce results as dramatic as yours. While Perl does come out slower in this test, it is not over four times slower, as it is in your example:
$ time perl -e 'for ($x=0; $x!=100_000_000; ++$x) {}'
real 0m2.448s
user 0m2.446s
sys 0m0.002s
$ time php -r 'for($x=0; $x!=100000000; ++$x){};'
real 0m1.570s
user 0m1.562s
sys 0m0.008s
Looking at the output of B::Concise does not reveal any smoking gun in the case of the Perl code (I don't know how to deparse PHP similarly). It would be interesting to see the output of the following command on your system, which is producing a 4.5x performance penalty for Perl:
$ perl -MO=Concise -e 'for ($x=0; $x!=100_000_000; ++$x) {}'
h <@> leave[1 ref] vKP/REFC ->(end)
1 <0> enter ->2
2 <;> nextstate(main 1 -e:1) v:{ ->3
5 <2> sassign vKS/2 ->6
3 <$> const(IV 0) s ->4
- <1> ex-rv2sv sKRM*/1 ->5
4 <$> gvsv(*x) s ->5
6 <0> unstack v* ->7
g <2> leaveloop vK/2 ->h
7 <{> enterloop(next->9 last->g redo->8) v ->c
- <1> null vK/1 ->g
f <|> and(other->8) vK/1 ->g
e <2> ne sK/2 ->f
- <1> ex-rv2sv sK/1 ->d
c <$> gvsv(*x) s ->d
d <$> const(IV 100000000) s ->e
- <@> lineseq vK ->-
- <@> scope vK ->9
8 <0> stub v ->9
a <1> preinc vK/1 ->b
- <1> ex-rv2sv sKRM/1 ->a
9 <$> gvsv(*x) s ->a
b <0> unstack v ->c
-e syntax OK
If I had to guess, with my terrible understanding of PHP, I'd wonder if the empty block was optimized away in PHP, eliminating the repeated creation and tear-down of a scope. But at minimum it might be revealing to see what your version of Perl is doing inside the loop.
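One way to poke at that guess would be to time loop forms that avoid entering a block on each iteration and see whether the gap narrows (commands only; results will vary):
$ time perl -e 'for ($x = 0; $x != 100_000_000; ++$x) {}'   # C-style loop, empty block each iteration
$ time perl -e '++$x while $x != 100_000_000'               # statement modifier, no block
$ time perl -e '1 for 1 .. 100_000_000'                     # postfix for over a range, no block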
|
~$ uname -a
Linux xxxxxxxxx 4.4.0-102-generic #125-Ubuntu SMP Tue Nov 21 15:13:42 UTC 2017 i686 i686 i686 GNU/Linux
$ perl -v
This is perl 5, version 22, subversion 1 (v5.22.1) built for i686-linux-gnu-thread-multi-64int
(with 60 registered patches, see perl -V for more detail)
$ perl -MO=Concise -e 'for ($x=0; $x!=100_000_000; ++$x) {}'
h <@> leave[1 ref] vKP/REFC ->(end)
1 <0> enter ->2
2 <;> nextstate(main 1 -e:1) v:{ ->3
5 <2> sassign vKS/2 ->6
3 <$> const[IV 0] s ->4
- <1> ex-rv2sv sKRM*/1 ->5
4 <#> gvsv[*x] s ->5
6 <0> unstack v* ->7
g <2> leaveloop vK/2 ->h
7 <{> enterloop(next->9 last->g redo->8) v ->c
- <1> null vK/1 ->g
f <|> and(other->8) vK/1 ->g
e <2> ne sK/2 ->f
- <1> ex-rv2sv sK/1 ->d
c <#> gvsv[*x] s ->d
d <$> const[IV 100000000] s ->e
- <@> lineseq vK ->-
- <@> scope vK ->9
8 <0> stub v ->9
a <1> preinc vK/1 ->b
- <1> ex-rv2sv sKRM/1 ->a
9 <#> gvsv[*x] s ->a
b <0> unstack v ->c
|
>> If I had to guess, with my terrible understanding of PHP, I'd wonder if the empty block was optimized away in PHP
In my first post I mentioned the Alioth "benchmarks game". See:
perl vs php
https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=perl&lang2=php
or
perl vs java
https://benchmarksgame.alioth.debian.org/u64q/compare.php?lang=perl&lang2=java
They are maybe just "toy" programs or a "game", but look at one example, the 'n-body' test: the Perl and PHP versions have similar logic, and what makes Perl slow, I think, is that the test needs a 50_000_000-iteration loop, with more loops inside the body of the test.
|
OK, let's step back a bit: why are you surprised that PHP does better than Perl in some benchmarks?
Dave.
Re: perl process slower and slower when loop number increase
by ikegami (Patriarch) on Jan 22, 2018 at 17:54 UTC
|
Note that for(my $i=0;$i<=100000000;$i++) is a needlessly complex version of for my $i (0..100000000).
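For instance, these two one-liners produce the same output (a toy count so it is visible):
$ perl -e 'for (my $i = 0; $i <= 5; $i++) { print "$i\n" }'
$ perl -e 'for my $i (0 .. 5) { print "$i\n" }'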
Re: perl process slower and slower when loop number increase
by Anonymous Monk on Jan 22, 2018 at 14:48 UTC
|
As a good book once said, "Don't diddle code to make it faster – find a better algorithm." Given the speed of today's CPUs, you probably can't create a compute-bound process whose execution time differs substantially based on a thing like "PHP vs. Perl vs. Ruby vs. ..." If the program is too slow, find a better algorithm. It's just that simple.