1. starting with the longest string and continuing in descending order
I don't get the idea of putting the longest first?
The idea of putting the shortest first is that you can use the third parameter to index to skip over the shorter strings you've already checked. A longer string can never be contained in a shorter one, and starting the search part way into the string is much cheaper than trimming the already-checked shorter strings off the front.
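Here is a minimal sketch of that technique as I read it. The strings and separator are my own invention for illustration: it assumes the strings never contain the newline character, which is used as a separator so a match can never span two adjacent strings in $all.

```perl
use strict;
use warnings;

# Hypothetical example data; any list of strings without "\n" would do.
my @strings = qw( ACGT CGTACGT ACG TTT ACGTACGT );

# Shortest first, so every string that could contain the current one
# lies *after* it in the concatenation.
my @sorted = sort { length $a <=> length $b } @strings;

my $all = join "\n", @sorted;

my @keep;
my $pos = 0;    # offset of the current string within $all
for my $s ( @sorted ) {
    # The third parameter to index starts the search just past the
    # current string, skipping everything already checked.
    push @keep, $s if index( $all, $s, $pos + length $s ) < 0;
    $pos += length( $s ) + 1;    # +1 for the "\n" separator
}

print "@keep\n";
```

Only the strings that occur nowhere later in $all survive; here TTT and ACGTACGT remain, while ACG, ACGT and CGTACGT are dropped as substrings of ACGTACGT.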
2. then only appending the non-embeddable strings to $all
I don't know what you mean by "non-embeddable" in this context.
I'm also wondering whether the reallocation when appending to $all could be avoided by starting with a maximal-length string and then shortening $all again.
If you mean counting the space required for $all, allocating that final size up front, and then copying the elements into the string--rather than building it up by appending each element in turn--that is exactly what join does.
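To make that concrete, here is a small sketch (with made-up data) of the two ways of building the string. Both produce identical results; the difference is that the append loop may reallocate and copy the growing buffer several times, while join sizes the result once internally.

```perl
use strict;
use warnings;

# Hypothetical elements: 1,000 strings of 100 characters each.
my @elements = map { 'A' x 100 } 1 .. 1_000;

# Building up by repeated append: the buffer may be reallocated
# and copied as it grows.
my $appended = '';
$appended .= $_ for @elements;

# join counts the total space needed, allocates it once, and then
# copies each element in -- the same effect, done for you internally.
my $joined = join '', @elements;

print length( $joined ), "\n";   # 100000
```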
Maybe uniq() from List::MoreUtils is faster
Not in my tests. Mine usually works out ~15% faster.
or could be completely avoided (after sorting identical strings always appear in a sequence)
That would mean sorting the duplicates. Sorting is O(N log N); de-duping with a hash is a single O(N) pass, with constant work per element. And after the sorting, you'd still need to make a complete pass with grep to remove the dups before joining.
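For reference, the one-pass hash-based de-dup looks like this (my example data; the %seen idiom is the standard one, and essentially what List::MoreUtils::uniq does internally):

```perl
use strict;
use warnings;

my @strings = qw( ACGT ACG ACGT TTT ACG );

# Keep a string only the first time it is seen: one pass, O(N),
# no sorting required, and the original order is preserved.
my %seen;
my @unique = grep { !$seen{$_}++ } @strings;

print "@unique\n";   # ACGT ACG TTT
```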
In reply to Re^9: list of unique strings, also eliminating matching substrings
by BrowserUk
in thread list of unique strings, also eliminating matching substrings
by lindsay_grey