Compare it also with using a C-style for loop in Perl, which requires at least one additional variable (the index) and one additional constant (the stop condition), and has the downside that the number of repetitions of the loop can be affected during iteration by adjusting/corrupting the current values of either the iterator or the stop condition.

And how do you think Perl allows you to do a foreach? Just because you don't see the bookkeeping at the language level doesn't mean it doesn't exist. Besides, with a foreach loop you get an additional variable as well: the iterator. Even if none is mentioned in your program source, you still get a new $_.
You say the number of iterations can be affected by changing the value of the iterator (I don't see how you can change the stop condition; C doesn't have an eval, and even Perl doesn't allow you to change code that has already been compiled), and call that a downside. I would call that a feature, making a C-style for more flexible than a Perl-style foreach. For instance, try to translate:
for (my $i = 0; $i < @array; $i += 2) { ... }
to a foreach style. It's not easy.
In the Perl-style for/foreach loop, the algorithm is entirely defined by the data, so the statement that the algorithm is the data starts to make some sense.
What algorithm are we talking about? An algorithm might say "iterate over the elements of the set". A for or a foreach might be the implementation of that algorithm in a specific computer language, but it isn't the algorithm itself.
If you are using C, you might use a recursive routine that starts with the first element, then scans the rest of the array, counting and marking all the similar elements it finds. It then saves the value and associated count on the stack whilst it recurses to find, count and mark the second element, and so on until a pass completes without finding any unmarked elements. If the routine also keeps a global count of the number of passes as it recurses, it can then allocate the space to hold the counts and populate it as the recursion unwinds. As you can see, this is a multi-pass process that will consume prodigious amounts of stack if the array is large and has many unique elements. Even given the speed of C, this is not going to be fast, and as each programmer will have to roll his own solution every time this requirement comes up, it is costly and time-consuming to code and carries a high risk of bugs.
Oh, come on, get real. Just because something is written in C doesn't mean it's using stupid algorithms. One can use hashes in C as well (how do you think Perl's hashes are implemented?), and any programmer worth his salt will do so. It's an elementary programming technique. No one in their right mind is going to employ the algorithm you are proposing.
And indeed, you don't know in advance how many elements the hash is going to need (although there's a trivial upper bound!). But then, you don't know that in Perl either, do you? And just as perl can grow a hash on demand, you can grow a hash in C as well. Amazing, isn't it? (Oh, could it just be that perl is written in C?)
The simplicity of doing this in Perl is not due to any particular high-level language feature of Perl, but is directly attributable to the high-level data structures that Perl provides. This allows this complex algorithm to be captured directly in the data structure (the hash) used to accumulate the counts.
What high-level data structures? Perl has arrays and hashes. That's it. Nothing that will give you elements in some order. The only efficient search you can do is an exact match, using a hash. No range queries, no nearest neighbour, no next, not even minimum.
Of course you can build all this stuff, but then, so you can in C.
Abigail
In reply to Re: Algorithms, Datastructures and syntax
by Abigail-II
in thread Algorithms, Datastructures and syntax
by BUU