Re^2: Suffix-prefix matching done right

by karlgoethebier (Abbot)
on Nov 05, 2021 at 18:16 UTC ( [id://11138479] )


in reply to Re: Suffix-prefix matching done right
in thread Suffix-prefix matching done right

«…or just IPC::Run to run a pool of C++ workers…»

I just wondered how this code would look. Or how you would do it - as you like. Regards, Karl

«The Crux of the Biscuit is the Apostrophe»

Replies are listed 'Best First'.
Re^3: Suffix-prefix matching done right
by NERDVANA (Deacon) on Nov 06, 2021 at 09:14 UTC
    I was expecting IPC::Run to be the simplest way to orchestrate a pool of non-perl workers that stream text, but it might not actually be any less effort than MCE; this bit of code took longer to write than I expected. It reads STDIN in large chunks and hands each chunk to the worker whose input queue currently holds the least data. It also takes care to divide the input on line boundaries so that it isn't splitting a line between different workers. But it should be much more efficient than the simpler design of reading STDIN one line at a time.

    I tested it with "wc" as a simple way to verify that all input lines were seen by a worker.

    $ time perl parallel.pl wc < LARGE_FILE
    (snip)...
    Waiting for workers to finish
         955  5389459 56444912
         252  6883234 68045448
         284  3803959 62591850
         422  6429932 63866545

    real    0m1.438s
    user    0m3.864s
    sys     0m0.242s

    $ time wc < LARGE_FILE
        1913 22506581 250948755

    real    0m2.332s
    user    0m2.277s
    sys     0m0.052s

    parallel.pl

    use v5.20;
    use strict;
    use warnings;
    use IPC::Run 'start';
    use List::Util qw/ all any reduce /;

    my $workers= 4;
    my $read_size= 512*1024;
    my $queue_full= 256*1024;
    my $command= [@ARGV];

    # Start a pool of $workers workers, each with its own input/output buffer
    my @jobs= map { { id => $_, in => '', out => '' } } 1..$workers;
    my @run_spec= map { ('&', $command, '<', \$_->{in}, '>', \$_->{out}) } @jobs;
    shift @run_spec;
    my $harness= start(@run_spec);

    my $buf= '';
    my $got= 1;
    while ($got) {
        # The reading of input could just use $buf .= <STDIN> but this is much
        # more efficient, reading 512KB at a time.
        $got= read(STDIN, $buf, $read_size, length $buf);
        if ($got) {
            # look for the final newline within the buffer.  This makes sure we
            # only pass whole lines to the workers
            my $end= rindex $buf, "\n";
            next unless $end >= 0;
            # append the lines to the input queue of the worker with the
            # smallest input queue
            $jobs[0]{in} .= substr($buf, 0, $end+1, '');
            say "Add ".($end+1)." bytes to job $jobs[0]{id}";
        }
        elsif (!defined $got) {
            next if $!{EINTR} || $!{EAGAIN};
            die "read: $!";
        }
        # No more STDIN, so take any leftover string and add it to an input buffer
        elsif (length $buf) {
            $jobs[0]{in} .= $buf;
            say "Add ".length($buf)." bytes to job $jobs[0]{id}";
        }
        # stay in this loop until there is room on one of the input buffers,
        # or after EOF, stay here until all input buffers are flushed
        while ($got? (all { length $_->{in} > $queue_full } @jobs)
                   : (any { length $_->{in} > 0 } @jobs)
        ) {
            # say "I/O: ".join(' ', map sprintf("[%8d:%8d]", length $_->{in}, length $_->{out}), @jobs);
            # send input, receive output
            $harness->pump;
            # process all output from jobs so far
            process_results();
        }
        # sort the worker with the smallest input queue to the front
        @jobs= sort { length $a->{in} <=> length $b->{in} } @jobs;
    }
    say "Waiting for workers to finish";
    # Close the input pipes and wait for workers to exit
    $_->{in}= undef for @jobs;
    $harness->finish;
    # process all the rest
    process_results();

    sub process_results {
        for (@jobs) {
            my $end= rindex $_->{out}, "\n";
            print substr($_->{out}, 0, $end+1, '') if $end >= 0;
        }
    }
      I seriously doubt parallelization is the solution here.

      Reading the data from disk will take far more time than processing the data with a clever algorithm.

      Cheers Rolf
      (addicted to the Perl Programming Language :)
      Wikisyntax for the Monastery

        A correct edit-distance algorithm is going to be at least O(N^2), and maybe even O(N^3) to check all the different lengths, and some strings can be 2000 chars. My code above was able to beat the linear "wc". Of course the file was probably entirely cached in RAM since I have a lot of that on this system. But I would not be surprised if an SSD could deliver data faster than an O(N^3) algorithm could process it.
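        To make the complexity argument concrete, here is a minimal sketch (my own illustration, not code from this thread) of the naive suffix-prefix overlap check: for one pair of strings it tries every possible overlap length, so a single pair already costs O(N^2) character comparisons in the worst case.

        use strict;
        use warnings;

        # Naive check: length of the longest suffix of $s that is also a prefix of $t.
        # Tries every overlap length, longest first -- O(N^2) per pair of strings.
        sub suffix_prefix_overlap {
            my ($s, $t) = @_;
            my $max = length($s) < length($t) ? length($s) : length($t);
            for my $len (reverse 1 .. $max) {
                return $len if substr($s, -$len) eq substr($t, 0, $len);
            }
            return 0;
        }

        print suffix_prefix_overlap("banana", "nanobot"), "\n";   # prints 2 ("na")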

        What about a correct algorithm plus parallelization? If this problem is embarrassingly parallel. And I think it is. Or maybe it is. I always have doubts. Best regards, Karl

        «The Crux of the Biscuit is the Apostrophe»

      Dear friends of the monastery,

      Upon searching, there lives an MCE wc.pl example. Please note that timings for wc.pl include the time to load Perl itself plus modules and to spawn/reap workers.
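      For anyone unfamiliar with MCE's chunking model, here is a minimal sketch of counting lines in parallel with MCE::Loop. This is my own illustration under assumed options, not the actual wc.pl, which does considerably more:

      #!/usr/bin/env perl
      # Minimal sketch: parallel line count with MCE::Loop (not the real wc.pl).
      use strict;
      use warnings;
      use List::Util 'sum0';
      use MCE::Loop;

      MCE::Loop->init(
          max_workers => 'auto',     # up to 8 workers by default
          chunk_size  => '4m',       # each worker grabs ~4 MB of the file at a time
          use_slurpio => 1,          # pass each chunk as a scalar ref, not split lines
      );

      my $file = shift or die "usage: $0 FILE\n";

      # Each worker counts the newlines in its chunk and gathers the count;
      # mce_loop_f returns the gathered per-chunk counts, which are summed.
      my @counts = mce_loop_f {
          my ($mce, $slurp_ref, $chunk_id) = @_;
          my $n = ($$slurp_ref =~ tr/\n//);
          MCE->gather($n);
      } $file;

      print sum0(@counts), "\n";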

      Preparation:

      $ curl -O http://www.textfiles.com/etext/FICTION/lesms10.txt
      $ curl -O https://raw.githubusercontent.com/marioroy/mce-examples/master/other/wc.pl
      $ chmod 755 wc.pl
      $ rm -f lesms10_big.txt
      $ for i in `seq 1 200`; do cat lesms10.txt >> lesms10_big.txt; done
      $ ls -lh lesms10*
      -rw-r--r--  1 pmuser  staff   3.1M Nov 13 19:27 lesms10.txt
      -rw-r--r--  1 pmuser  staff   623M Nov 13 19:39 lesms10_big.txt

      # Read once before testing to be fair for 1st and subsequent wc/wc.pl.
      # Sorry, I didn't want to go through the trouble clearing OS cache.
      $ cat lesms10_big.txt >/dev/null

      Results: wc

      $ time wc lesms10_big.txt
       14152200 113528600 652751200 lesms10_big.txt

      real    0m2.132s
      user    0m2.076s
      sys     0m0.056s

      $ time wc < lesms10_big.txt
       14152200 113528600 652751200

      real    0m2.140s
      user    0m2.085s
      sys     0m0.055s

      Results: wc.pl

      $ time ./wc.pl lesms10_big.txt
       14152200 113528600 652751200 lesms10_big.txt

      real    0m0.536s  (3.98x)
      user    0m3.570s
      sys     0m0.591s

      $ time ./wc.pl < lesms10_big.txt
       14152200 113528600 652751200

      real    0m0.740s  (2.89x)
      user    0m3.784s
      sys     0m1.166s

      Results: line count

      $ time wc -l lesms10_big.txt
       14152200 lesms10_big.txt

      real    0m0.351s
      user    0m0.296s
      sys     0m0.055s

      $ time ./wc.pl -l lesms10_big.txt
       14152200 lesms10_big.txt

      real    0m0.208s
      user    0m0.678s
      sys     0m0.835s

      Results: word count

      $ time wc -w lesms10_big.txt
       113528600 lesms10_big.txt

      real    0m2.125s
      user    0m2.071s
      sys     0m0.054s

      $ time ./wc.pl -w lesms10_big.txt
       113528600 lesms10_big.txt

      real    0m0.535s
      user    0m3.574s
      sys     0m0.599s

      Results: byte count

      $ time wc -c lesms10_big.txt
       652751200 lesms10_big.txt

      real    0m0.003s
      user    0m0.001s
      sys     0m0.002s

      $ time ./wc.pl -c lesms10_big.txt
       652751200 lesms10_big.txt

      real    0m0.054s
      user    0m0.042s
      sys     0m0.010s

      Clearing OS cache:

      The beliefs of yesterday may no longer be true today. These days, running on one core may be the bottleneck versus IO, especially with storage IO reaching gigabytes per second and beyond. Long ago, I let go of old beliefs that no longer matched reality. This mind shift energized me to try, only to be surprised. So kudos to the OP for trying.

      Linux: sudo echo 1 >/proc/sys/vm/drop_caches
      macOS: sudo purge

      $ time wc lesms10_big.txt
       14152200 113528600 652751200 lesms10_big.txt

      real    0m2.435s
      user    0m2.330s
      sys     0m0.098s

      Linux: sudo echo 1 >/proc/sys/vm/drop_caches
      macOS: sudo purge

      $ time ./wc.pl lesms10_big.txt
       14152200 113528600 652751200 lesms10_big.txt

      real    0m0.662s  (3.68x)
      user    0m3.598s
      sys     0m0.736s

      Bless to you all,

        An attempt to clear the file cache on Linux failed. Thus, an update with results from a Linux box.

        This works: sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"

        Results:

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time wc lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m3.412s
        user    0m3.307s
        sys     0m0.097s

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time ./wc.pl lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m0.613s  (5.57x)
        user    0m4.168s
        sys     0m0.432s

        Beyond 8 cores:

        The --max-workers option defaults to auto (8 workers max). Running with --max-workers=N beyond 8 may provide extra performance on a box with more than 8 "real" cores.

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time ./wc.pl --max-workers=8 lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m0.613s  (5.57x)
        user    0m4.168s
        sys     0m0.432s

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time ./wc.pl --max-workers=12 lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m0.454s  (7.52x)
        user    0m4.464s
        sys     0m0.536s

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time ./wc.pl --max-workers=16 lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m0.360s  (9.48x)
        user    0m4.344s
        sys     0m0.625s

        Compare wc with wc.pl 1 and 2 cores:

        To complete the demonstration, I also tried wc.pl with --max-workers=1 and --max-workers=2. The timings for wc.pl include loading Perl plus modules and spawning/reaping workers.

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time wc lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m3.412s
        user    0m3.307s
        sys     0m0.097s

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time ./wc.pl --max-workers=1 lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m3.553s
        user    0m3.363s
        sys     0m0.329s

        $ sudo bash -c "echo 1 >/proc/sys/vm/drop_caches"
        $ time ./wc.pl --max-workers=2 lesms10_big.txt
         14152200 113528600 652751200 lesms10_big.txt

        real    0m1.804s
        user    0m3.339s
        sys     0m0.357s

        See also, egrep.pl from mce-examples.

      Thank you very much and best regards, Karl

      «The Crux of the Biscuit is the Apostrophe»

Re^3: Suffix-prefix matching done right
by Anonymous Monk on Nov 14, 2021 at 05:38 UTC

    Searching through the MCE repo, I found mce_grep, which ships with MCE. Whether it is installed depends on the OS vendor including it or packaging it separately. The script calls an external binary, grep in this case. One can rename the file, copy it, or make a symbolic link to the script to have it call another external binary.

    It borrows bits from IPC::Run3 for capturing STDOUT/ERR on Unix platforms, apparently as an attempt at lower overhead. See here. So this looks to be a full-fledged demonstration of doing just that: IPC::Run-like capability to run a pool of C/C++ workers. It exists :).
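    For reference, a minimal sketch of the IPC::Run3 capture pattern it borrows. This example is mine, not code from mce_grep, and the command and file name are placeholders:

    use strict;
    use warnings;
    use IPC::Run3;

    my ($out, $err);
    # Run an external binary, feed it nothing on STDIN, capture STDOUT and STDERR.
    run3 [ 'grep', '-n', 'pattern', 'some_file.txt' ], \undef, \$out, \$err;

    printf "exit status: %d\n", $? >> 8;
    print "stdout:\n$out" if defined $out && length $out;
    print "stderr:\n$err" if defined $err && length $err;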

    See also: the pure-Perl grep-like implementation, egrep.pl, found in the mce-examples repo.

    Digging more, there lives another full-fledged demonstration calling Inline::C and an external C++ binary. This lives in the mce-sort repo. The Inline::C bits and the corresponding user function partition the data. Sorting the individual partitions in parallel (via an external C++ binary) is done here.

      «…It exists…»

      Big surprise? That’s the way it is 🤪😎. Best regards, Karl

      «The Crux of the Biscuit is the Apostrophe»
