So using as few mass-operation Perl commands as possible reduces overhead and delegates the work to highly optimized C.
Loops (including map) only multiply the number of executed commands (just imagine the linearized alternative, which is even faster than the loop...),
so my approach is the fastest because it's basically reduced to only 3 Perl commands¹:
1. setting a hash
2. deleting a slice from that hash
3. reading the resulting hash
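The three steps above can be sketched like this; the variable names (`@arr`, `@del`) are hypothetical, chosen just for illustration:

```perl
use strict;
use warnings;

my @arr = qw(a b c d e);   # the original array (a set of strings)
my @del = qw(b d);         # elements to remove

# 1. setting a hash: array elements become the keys
my %set;
@set{@arr} = ();

# 2. deleting a slice from that hash
delete @set{@del};

# 3. reading the resulting hash
my @result = keys %set;    # qw(a c e), in no particular order
```

Note that `keys` returns the elements in an unpredictable order, so sort the result if order matters.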
OTOH my approach has drawbacks: depending on the task, it's only suitable for real sets of strings.
Arrays can contain repeated data or other datatypes like refs.
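A small sketch of that limitation (the data here is made up for illustration): duplicates collapse when the array elements become hash keys, and references are stringified, so neither survives the round trip.

```perl
use strict;
use warnings;

my @arr = ('a', 'a', 'b');   # contains a duplicate

my %set;
@set{@arr} = ();             # the two 'a's collapse into one key
my @back = keys %set;        # only 2 elements, not 3

# refs would be stringified (e.g. "ARRAY(0x...)") as keys,
# so the original references can't be recovered either
```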
EDIT: you might be interested in Using hashes for set operations...
Cheers Rolf
¹) PS: of course there are still loops running under the hood, but they are already optimized in C.
In reply to Re^4: Removing elemets from an array (optimization)
by LanX
in thread Removing elemets from an array
by lovelyMonk