in reply to Yet Another "Matching over a list of conditions"-like technique
Due to your manual $cnt it is surprisingly annoying to follow. Write a loop over the indices instead: for my $i ( 0 .. $#pre ). And why use splice+push (both linear operations) so frequently when a bit of trivial index math will do?
    my @pre  = map "../../pics/$_/", '00' .. '16';
    my $offs = 0;
    while ( <> ) {
        chomp;
        for my $i ( 0 .. $#pre ) {
            local $_ = $pre[ ( $i + $offs ) % @pre ] . $_;
            next if not -e;
            print "$_\n";
            # remember the matched prefix's real index (not the loop
            # index) so the next line's search starts there
            $offs = ( $i + $offs ) % @pre;
            last;
        }
    }
Makeshifts last the longest.
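(For context: blazar's sub being criticized here is not included in this copy of the thread. A hedged reconstruction of the splice+push pattern the reply describes, with details assumed, might look like this.)

    # Reconstruction, not the original: scan the prefixes with a manual
    # counter; on a hit, rotate the matched prefix to the front via
    # splice+push (both linear in @pre) so it is tried first next time.
    my @pre = map "../../pics/$_/", '00' .. '16';
    while ( my $file = <> ) {
        chomp $file;
        my $cnt = 0;
        for my $pre ( @pre ) {
            last if -e $pre . $file;
            $cnt++;
        }
        next if $cnt > $#pre;                    # no prefix matched
        print $pre[ $cnt ], $file, "\n";
        push @pre, splice @pre, 0, $cnt if $cnt; # matched prefix becomes $pre[0]
    }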
Re^2: Yet Another "Matching over a list of conditions"-like technique
by blazar (Canon) on Dec 26, 2004 at 20:09 UTC
> Due to your manual $cnt it is surprisingly annoying to follow. Write a loop over the indices instead: for my $i ( 0 .. $#pre ).

I partly agree with you, which is why I voted your post ++ anyway. But isn't "surprisingly annoying" a bit excessive? As with lots of other things relating to readability and aesthetics in Perl (and not only!), I think these are largely matters of personal taste. As far as I'm concerned, I do not like to iterate over indices: not that I've never done it, but I tend to avoid it where possible. However, I would like to stress that the point I was focusing on was what to do rather than exactly how to do it, i.e. use some trick to let the "most probable" choice be the first one to be tested. Of course this only applies to situations (like the one I had under consideration in my personal experience) in which you know a priori that this strategy is likely to be more efficient. As a side note, if you want to see a yet more awkward (but IMHO still interesting) way to do it, please see the USENET article cited in the previous post!

> And why use splice+push (both linear operations) so frequently when a bit of trivial index math will do?

Of course splice() and push() are expensive, but I use them only as needed, whereas you always do your "bit of trivial index math": all in all I do not expect my code to be less efficient than yours; to be fair, I expect it to be slightly more efficient. Of course we can verify this: I ran this on perl 5.8.6 under Linux (kernel 2.6.9) with a sample input file. This gives me:
    Benchmark: timing 1000 iterations of aristotle, blazar...
     aristotle: 20.2232 wallclock secs (14.14 usr  6.08 sys + 0.00 cusr 0.00 csys = 20.22 CPU) @ 49.46/s (n=1000)
        blazar: 16.3845 wallclock secs ( 9.93 usr  6.45 sys + 0.00 cusr 0.00 csys = 16.38 CPU) @ 61.05/s (n=1000)
                  Rate aristotle blazar
    aristotle 49.5/s        --   -19%
    blazar    61.1/s       23%     --
UPDATE: In all honesty I should point out that I made a mistake preparing this benchmark. Please see a corrected one in a later post of mine below...
Re^3: Yet Another "Matching over a list of conditions"-like technique
by Aristotle (Chancellor) on Dec 26, 2004 at 21:11 UTC
> But isn't "surprisingly annoying" a bit excessive?

It's what it is. It took me a moment of staring to figure out that you were doing something much simpler than the code suggested at first glance.
> As far as I'm concerned, I do not like to iterate over indices.

Instead you iterate over the elements and increment $cnt for each one. A rose by any other name… Neither do I like to, btw; I'm usually the one telling people not to. But when that's what's gotta be done, then that's what's gotta be done. (Perl 6, as always, will have the solution, but alas, it's still so far away…)
> …use some trick to let the "most probable" choice be the first one to be tested.

In that case I'd bubble single elements to the top. That retains memory of the second-, third-, etc. most likely matches based on previous data. Depending on the patterns in your data, that may be more, less, or equally efficient.
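(The code that accompanied this paragraph is snipped from this copy of the thread; what follows is a hedged sketch of the bubbling idea, with details assumed.)

    # Hedged sketch, not the snipped original: on a hit, swap the matching
    # prefix one slot toward the front, so prefixes that match often
    # gradually bubble to the top of the search order.
    my @pre = map "../../pics/$_/", '00' .. '16';
    while ( my $file = <> ) {
        chomp $file;
        for my $i ( 0 .. $#pre ) {
            my $path = $pre[ $i ] . $file;
            next unless -e $path;
            print $path, "\n";
            @pre[ $i - 1, $i ] = @pre[ $i, $i - 1 ] if $i > 0;  # bubble up
            last;
        }
    }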
I'm too lazy to set up a test environment to benchmark this, sorry. Depending on how much the array lookup costs, it might pay to maintain the counter explicitly as you did in your code, though I can't quite believe that.

Note that in your case a chdir '../../pics/' might speed things up quite a bit if you're testing a lot of files, since stat(2) won't have to traverse the same directories over and over for each and every file test (a small sketch of this follows below).

Makeshifts last the longest.
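(Editorial sketch, not part of the original post: one way to apply the chdir suggestion, assuming the relative layout used throughout this thread.)

    # Change into the common parent directory once, so each -e only has to
    # stat() the short "NN/file" path instead of "../../pics/NN/file".
    chdir '../../pics' or die "chdir: $!";
    my @pre = map "$_/", '00' .. '16';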
Re^4: Yet Another "Matching over a list of conditions"-like technique
by blazar (Canon) on Dec 27, 2004 at 12:38 UTC
> > …use some trick to let the "most probable" choice be the first one to be tested.
>
> In that case I'd bubble single elements to the top. That retains memory of the second-, third-, etc. most likely matches based on previous data. Depending on the patterns in your data, that may be more, less, or equally efficient. <snip code> I'm too lazy to set up a test environment to benchmark this, sorry. Depending on how much the array lookup costs, it might pay to maintain the counter explicitly as you did in your code, though I can't quite believe that.

Well, first of all I might silently have avoided mentioning this detail, but to be honest the benchmark in the previous post is not correct: briefly speaking, '../../pics/' was wrong, so that -e never succeeded. I did the test again with the code attached at the end of this post. Here's what I get:

UPDATE: Indeed I did some more experiments and noticed that it is definitely so. In order not to clobber the original text too much, I've added the new results below, under 'UPDATED RESULTS'.

> Note that in your case a chdir '../../pics/' might speed things up quite a bit if you're testing a lot of files, since stat(2) won't have to traverse the same directories over and over for each and every file test.

I am aware of this. Of course this was originally a quick hack. The data set I'm testing this against is a realistic one, and the times involved are those shown above, so I'm not really that mad about performance. As I wrote repeatedly, this was meant more as an illustrative example than as the definitive word on the issue: in another situation it may apply to the frequently asked question about how to match against a list of regexen, or something similar... Here's the code:
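(The code attached here is not included in this copy of the thread. The following is a hedged sketch of a harness of the kind described, using the standard Benchmark module; the input file name and the sub bodies are assumptions, not blazar's originals.)

    use strict;
    use warnings;
    use Benchmark qw( timethese cmpthese );

    # Read the sample file list once; each benchmarked sub scans all of it.
    my @files = do {
        open my $fh, '<', 'filelist.txt' or die "open: $!";   # name assumed
        chomp( my @f = <$fh> );
        @f;
    };

    my @pre = map "../../pics/$_/", '00' .. '16';

    sub aristotle {    # index math, as in the root post
        my @p    = @pre;   # fresh copy so every run starts from the same order
        my $offs = 0;
        for my $file ( @files ) {
            for my $i ( 0 .. $#p ) {
                next if not -e $p[ ( $i + $offs ) % @p ] . $file;
                $offs = ( $i + $offs ) % @p;
                last;
            }
        }
    }

    sub blazar {       # splice+push rotation, as sketched further up
        my @p = @pre;
        for my $file ( @files ) {
            my $cnt = 0;
            for my $pre ( @p ) {
                last if -e $pre . $file;
                $cnt++;
            }
            push @p, splice @p, 0, $cnt if $cnt and $cnt <= $#p;
        }
    }

    cmpthese( timethese( 1000, { aristotle => \&aristotle, blazar => \&blazar } ) );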
UPDATED RESULTS

Due to the observation above I modified my sub; now the only difference from Aristotle's is that I iterate over the array and maintain a counter manually (which, incidentally, he considers "surprisingly annoying to follow"), whereas he iterates over indices. I also included a "naive" sub for comparison. The tested subs are in the attached code (see the hedged sketches after the results below). The results are:
    Benchmark: timing 1000 iterations of aristotle, blazar, naive...
     aristotle: 39.3629 wallclock secs (18.54 usr 20.81 sys + 0.00 cusr 0.00 csys = 39.35 CPU) @ 25.41/s (n=1000)
        blazar: 37.3122 wallclock secs (16.47 usr 20.84 sys + 0.00 cusr 0.00 csys = 37.31 CPU) @ 26.80/s (n=1000)
         naive: 38.9548 wallclock secs (15.16 usr 23.79 sys + 0.00 cusr 0.00 csys = 38.95 CPU) @ 25.67/s (n=1000)
                  Rate aristotle naive blazar
    aristotle 25.4/s        --    -1%    -5%
    naive     25.7/s        1%     --    -4%
    blazar    26.8/s        5%     4%     --
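(The attached subs are not included in this copy of the thread. Hedged sketches matching the description above, identical except for how the scanning loop is written, might look like this; @files and @pre are carried over from the harness sketch in the previous post.)

    sub aristotle {    # iterate over indices, with index math
        my @p    = @pre;
        my $offs = 0;
        for my $file ( @files ) {
            for my $i ( 0 .. $#p ) {
                next if not -e $p[ ( $i + $offs ) % @p ] . $file;
                $offs = ( $i + $offs ) % @p;
                last;
            }
        }
    }

    sub blazar {       # iterate over elements, maintain the counter by hand
        my @p    = @pre;
        my $offs = 0;
        for my $file ( @files ) {
            my $cnt = 0;
            for ( @p ) {                        # the loop only drives $cnt
                my $i = ( $cnt + $offs ) % @p;
                if ( -e $p[ $i ] . $file ) {
                    $offs = $i;
                    last;
                }
                $cnt++;
            }
        }
    }

    sub naive {        # fixed order, no reordering at all
        for my $file ( @files ) {
            for my $pre ( @pre ) {
                last if -e $pre . $file;
            }
        }
    }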