Re: looping efficiency
by Your Mother (Archbishop) on Dec 30, 2020 at 00:22 UTC
perl -e 'printf "%04d\n", $_ for 0 .. 9999'
Update: swapped the 1 for a 0.
Re: looping efficiency
by Marshall (Canon) on Dec 30, 2020 at 00:54 UTC
You can do it like this:
A string range can be used instead of numeric values.
use strict;
use warnings;
foreach my $num_string ('0000' ... '9999')
{
    my ($i, $j, $k, $l) = split //, $num_string;
    print "i=$i j=$j k=$k l=$l\n";
}
Abbreviated printout
i=0 j=0 k=0 l=0
i=0 j=0 k=0 l=1
i=0 j=0 k=0 l=2
i=0 j=0 k=0 l=3
i=0 j=0 k=0 l=4
i=0 j=0 k=0 l=5
i=0 j=0 k=0 l=6
i=0 j=0 k=0 l=7
.....
i=9 j=9 k=9 l=3
i=9 j=9 k=9 l=4
i=9 j=9 k=9 l=5
i=9 j=9 k=9 l=6
i=9 j=9 k=9 l=7
i=9 j=9 k=9 l=8
i=9 j=9 k=9 l=9
my($i,$j,$k,$l) = split (//,$num_string);
I haven't done any Benchmark-ing (and won't), but
unpack might be faster than split:
my($i,$j,$k,$l) = unpack '(a)4', $num_string;
Give a man a fish: <%-{-{-{-<
Yes, I also suspect that unpack is faster. However, I doubt it would make a measurable difference. Perhaps more important is why these individual digits need to be known in the first place? I have no idea. Maybe the OP can clue us in on what this exercise is for?
Considering it's Perl, split vs unpack seems irrelevant.
-QM
--
Quantum Mechanics: The dreams stuff is made of
Re: looping efficiency
by AnomalousMonk (Archbishop) on Dec 30, 2020 at 00:42 UTC
I completely agree with Your Mother's point.
Not only are nested loops very inefficient for this, they are
error-prone and subject to misunderstanding.
The only justification for nested loops I can think of is if you
need to do something with the individual $i $j $k $l digits.
Even in this case, I think it's more efficient and clearer
to form a number like '1234' and extract the individual
digits you need. What are you really trying to do with
"$i$j$k$l"? (Note that you only get leading zeros if you stringize the number in some way.)
Give a man a fish: <%-{-{-{-<
Thanks (to everyone) for the replies. I actually work with sequentially numbered files (so I do stringize the number). I had seen claims that formatting strings represented a loss of efficiency (and the particular loops in my snippet appear relatively error-resistant), but of course, everything remains relative to the possible alternatives.
Don’t Optimize Code -- Benchmark It
-- from Ten Essential Development Practices by Damian Conway
The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times;
premature optimization is the root of all evil (or at least most of it) in programming
-- Donald Knuth
Without good design, good algorithms, and complete understanding of the
program's operation, your carefully optimized code will amount to one of
mankind's least fruitful creations -- a fast slow program.
-- Michael Abrash
Wow, I assumed they were complaining about clarity rather than efficiency! :)
For cheap thrills, assuming all you need are the file names 0000 .. 9999 in order,
I benchmarked the three offered solutions as shown below.
use strict;
use warnings;
use Benchmark qw(timethese);
sub orig {
    for my $i ( 0 .. 9 ) {
        for my $j ( 0 .. 9 ) {
            for my $k ( 0 .. 9 ) {
                for my $l ( 0 .. 9 ) {
                    my $filename = "$i$j$k$l";
                }
            }
        }
    }
}
sub yourmum {
    for ( 0 .. 9999 ) {
        my $filename = sprintf "%04d", $_;
    }
}
sub marshall {
    for ( '0000' ... '9999' ) {
        my $filename = $_;
    }
}
orig();
yourmum();
marshall();
timethese 50000, {
    Orig     => sub { orig() },
    YourMum  => sub { yourmum() },
    Marshall => sub { marshall() },
};
On my machine, the results were:
Benchmark: timing 50000 iterations of Marshall, Orig, YourMum...
Marshall: 25 wallclock secs (25.16 usr + 0.00 sys = 25.16 CPU) @ 1987.52/s (n=50000)
    Orig: 39 wallclock secs (38.83 usr + 0.00 sys = 38.83 CPU) @ 1287.73/s (n=50000)
 YourMum: 40 wallclock secs (40.08 usr + 0.00 sys = 40.08 CPU) @ 1247.57/s (n=50000)
If all you need are the file names (not the individual digits), it's no surprise that
Marshall's suggestion was the fastest. I also think it is the simplest and clearest for that case.
Update: see also perlperf - Perl Performance and Optimization Techniques. Added this node to Performance References.
A quick debugger session demonstrating the magic string auto-increment that makes the '0000' ... '9999' string range work:
main::(-e:1): 0
DB<1> $_='0000'
DB<2> p $_++
0000
DB<3> p $_++
0001
DB<4> p $_++
0002
DB<5> $_='abcd'
DB<6> p $_++
abcd
DB<7> p $_++
abce
DB<8> p $_++
abcf
I had no idea that you were going to generate 10,000 separate files!
A single SQLite DB file is likely to be far, far more efficient for whatever you are doing.
Re: looping efficiency
by GrandFather (Saint) on Dec 30, 2020 at 02:22 UTC
/me screams into the void: "The overhead of generating the number strings is completely irrelevant".
This is one of the plethora of guises of premature optimisation. Don't do that! Write your code so that it is clear and easy to maintain - in this case probably using sprintf. If benchmarking demonstrates that this piece of code is a significant bottleneck then it is time to think about generating something quicker. If there is any other I/O going on in the code then it is pretty much guaranteed that a simple sprintf solution is not going to contribute enough overhead to be an issue.
Optimising for fewest key strokes only makes sense transmitting to Pluto or beyond
Re: looping efficiency
by choroba (Cardinal) on Dec 30, 2020 at 02:37 UTC
In the spirit of TIMTOWTDI:
my $r = "{0,1,2,3,4,5,6,7,8,9}";
while (my $n = glob "$r$r$r$r") {
    # Do something with $n.
}
map{substr$_->[0],$_->[1]||0,1}[\*||{},3],[[]],[ref qr-1,-,-1],[{}],[sub{}^*ARGV,3]
Re: looping efficiency
by Your Mother (Archbishop) on Dec 30, 2020 at 02:24 UTC
FWIW, this is what I get by wrapping each “result” in a sub noop {}; like, noop("$i$j$k$l") and noop(sprintf "%04d", $_) to take out any IO influence.
           Rate  unpack   split  nested sprintf
unpack    102/s      --     -1%    -80%    -84%
split     103/s      1%      --    -80%    -83%
nested    513/s    401%    397%      --    -18%
sprintf   625/s    510%    506%     22%      --
Re: looping efficiency
by Don Coyote (Hermit) on Jan 01, 2021 at 15:45 UTC
This will print out all the combinations recursively from a hash, just not in any usual order. Perl stringifies the numbers when printing them, and because each attempt string is seeded with a leading '0', that unnecessary extra zero is trimmed just before printing.
#!/usr/bin/perl
use strict;
use warnings;
my $bikelock;
my $zodials = 4;
my $constellations = 10;
$bikelock = combicracker( {}, $zodials );
print_psychle($bikelock);
sub combicracker {
    my $ref     = shift;
    my $zodials = shift;
    $ref->{ $_ } = { } for 0 .. $constellations - 1;
    if ( --$zodials ) {
        for my $key ( keys %$ref ) {
            combicracker( $ref->{ $key }, $zodials );
        }
    }
    $ref;
}
sub print_psychle {
    my $bref = shift;
    my $sn   = shift || '0';
    unless ( keys %$bref ) {    # empty hash => leaf node, so print this attempt
                                # (testing (keys %$bref)[0] would misfire when the first key is '0')
        $sn =~ s/\A0//;         # trim the seed zero
        print "attempt: $sn\n";
        return;
    }
    foreach my $attempt_builder ( keys %$bref ) {
        my $att_b2 = $sn . $attempt_builder;
        print_psychle( $bref->{$attempt_builder}, $att_b2 );
    }
}
attempt: 6666
attempt: 6663
attempt: 6667
attempt: 6669
attempt: 6662
attempt: 6668
attempt: 6661
attempt: 6664
attempt: 6660
attempt: 6665
attempt: 6636
attempt: 6633
attempt: 6637
attempt: 6639
attempt: 6632
attempt: 6638
attempt: 6631
attempt: 6634
...
attempt: 0073
attempt: 0077
attempt: 0079
attempt: 0072
attempt: 0078
attempt: 0071
attempt: 0074
attempt: 0070
attempt: 0075
attempt: 0096
...
attempt: 5556
attempt: 5553
attempt: 5557
attempt: 5559
attempt: 5552
attempt: 5558
attempt: 5551
attempt: 5554
attempt: 5550
attempt: 5555
This is an older version of the hash implementation, so it returns somewhat predictably: run for run, it returns the same order. I think hash key order is nowadays guaranteed to be at least seemingly random, though there is also a grouped ordering that comes from essentially running through the keys sequentially.