in reply to Re^10: list of unique strings, also eliminating matching substrings
in thread list of unique strings, also eliminating matching substrings

hmm ... actually I didn't want to invest time coding for this xy-nonsense.

Sorry, but no one forced you to. You asked questions, I did my best to answer them. That's all.

FWIW: Converted to a form that allows it to be used in a realistic and repeatable scenario:

#! perl -slw
use strict;
use Time::HiRes qw[ time ];
$|++;

sub uniq{ my %x; @x{@_} = (); keys %x }    # hash-slice dedup

my $start = time;
my @uniq = uniq <>;                        # slurp and dedup the input lines
chomp @uniq;
@uniq = sort{ length $b <=> length $a } @uniq;   # longest first
my $longest = shift @uniq;

for my $x ( @uniq ) {
    next if 1+ index $longest, $x;   # skip $x if it is a substring of anything kept so far
    print $x;
    $longest .= "\n" . $x;           # append the kept string to the accumulator
}

printf STDERR "Took %.3f\n", time() - $start;

Whilst you're ~10% quicker on low numbers of strings:

c:\test>906020 906020.10e3 > 906020.filtered
Took 48.854

c:\test>wc -l 906020.filtered
5000 906020.filtered

c:\test>906020-lanx 906020.10e3 > lanx.filtered
Took 43.122

c:\test>wc -l lanx.filtered
4999 lanx.filtered

c:\test>906020 906020.10e3 > 906020.filtered        (inline version)
Took 21.744

c:\test>wc -l 906020.filtered
5000 906020.filtered

As your own timings show, as the number of strings increases, the cost of constantly reallocating your accumulator string in order to append the new one starts to dominate. I suspect that by the time you get to the OP's 200,000 strings you're going to be considerably slower. (You also have an out-by-one error somewhere, but that is probably easily fixed.)
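
For anyone who wants to reproduce the timings at other scales, a throwaway generator along these lines would do. (A sketch only: the 300-character length and the 50/50 split are assumptions taken from the figures discussed below; the actual test file used above isn't shown here.)

#!/usr/bin/perl
# Hypothetical generator (not the one used above): emits N full-length
# random strings plus N random substrings of them, so roughly half of
# the output lines should be eliminated by the filtering scripts.
# Usage (e.g.): perl gen.pl 5000 300 > test.10e3
use strict;
use warnings;

my $n   = shift // 5_000;    # number of full-length strings
my $len = shift // 300;      # length of each full string

my @full = map { join '', map { ( 'A' .. 'Z' )[ rand 26 ] } 1 .. $len } 1 .. $n;

# Random substrings of the strings above; each is guaranteed to be filtered out.
my @subs;
for ( 1 .. $n ) {
    my $src    = $full[ rand @full ];
    my $sublen = 50 + int rand( $len - 50 );
    push @subs, substr $src, int rand( $len - $sublen + 1 ), $sublen;
}

print "$_\n" for @full, @subs;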


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Re^12: list of unique strings, also eliminating matching substrings
by LanX (Saint) on Jun 03, 2011 at 15:02 UTC
    ATM I can't see any influence of string allocation.

    It's simple: your version's performance is proportional to the size of the input, mine to the size of the output.

    In rare cases where there are almost no strings to be excluded - i.e. the output is nearly the input - my version can be slightly slower.

    In other words, if only 10% of all sequences remain after filtering, my algo is about 10 times faster.

    BTW: using "\n" instead of chr(0) was a stupid idea; the extra byte is expensive.

    Cheers Rolf

      ATM I can't see any influence of string allocation.

      Try it with 200,000 strings where half are to be excluded; then you'll see the effects of allocating and copying 300 bytes; then allocating and copying 600 bytes & freeing 600; then allocating and copying 900 bytes & freeing 900; ... 99,995 allocs/copies/frees omitted ...; then allocating and copying 29,999,700 bytes & freeing 29.9997MB; then allocating and copying 30,000,000 bytes & freeing 30MB. Summed up, that is roughly 300 × (1 + 2 + ... + 100,000) bytes, i.e. on the order of 1.5 terabytes copied in total.

      Each time you do $longest .= "\n" . $x; perl has to allocate a new chunk of memory big enough to accommodate $longest plus $x; then copy both into the newly allocated space, then free the originals. And as each freed allocation is not big enough to accommodate the next iteration of the append, each new allocation (once you get past trivial amounts) requires Perl to go to the OS for a new chunk of virtual memory. And that gets very costly.
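
      If you want to watch that happening, here is a minimal sketch (assuming the core Devel::Peek module) that dumps the scalar after each append; the LEN field in the output shows the allocated buffer growing:

      use strict;
      use warnings;
      use Devel::Peek qw( Dump );

      my $acc = '';
      for ( 1 .. 5 ) {
          $acc .= "\n" . ( 'x' x 300 );
          Dump $acc;    # LEN in the dump is the allocated buffer size
      }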


        Allocating the maximum needed string right away and setting it to "" afterwards seems to help.
        DB<109> $w=""; $a="x" x 300; $t=time; $w.=$a for (1..200_000); print time-$t, ":", length $w
        9:60000000
        DB<110> $x="x" x 60000000; $x=""; $a="x" x 300; $t=time; $x.=$a for (1..200_000); print time-$t, ":", length $x
        0:60000000

        Anyway, 9 seconds is not that much...

        Calculating 200_000 strings on my limited system takes hours, especially if I start with your algo... maybe next night

        Cheers Rolf