Re^3: Rosetta Code: Long List is Long (faster - vec)(faster++, and now parallel)

by Anonymous Monk
on Jan 10, 2023 at 17:45 UTC ( [id://11149505] )


in reply to Re^2: Rosetta Code: Long List is Long (faster - vec)
in thread Rosetta Code: Long List is Long

The good news is... The bad news is...

The good news is that I bring good news only! :) The modified J script is faster, more versatile, uses significantly less RAM, and has been tested with the 9.04 engine to parallelize the obvious low-hanging fruit for an additional speed boost.

NB. -----------------------------------------------------------
NB. --- This file is "llil4.ijs"
NB. --- Run as e.g.:
NB.
NB. jconsole.exe llil4.ijs big1.txt big2.txt big3.txt out.txt
NB.
NB. --- (NOTE: last arg is output filename, file is overwritten)
NB. -----------------------------------------------------------

pattern =: 0 1

NB. ========> This line has a star in its right margin =======>    NB. *

args =: 2 }. ARGV
fn_out =: {: args
fn_in =: }: args

NB. PAD_CHAR =: ' '

filter_CR =: #~ ~: & CR
make_more_space =: ' ' I. @ ((LF = ]) +. (TAB = ])) } ]
find_spaces =: I. @: = & ' '

read_file =: {{
  'fname pattern' =. y
  text =. make_more_space filter_CR fread fname
  selectors =. (|.!.0 , {:) >: find_spaces text
  width =. # pattern
  height =. width <. @ %~ # selectors
  append_diffs =. }: , 2& (-~/\)
  shuffle_dims =. (1 0 3 & |:) @ ((2, height, width, 1) & $)
  selectors =. append_diffs selectors
  selectors =. shuffle_dims selectors
  literal =. < @: (}:"1) @: (];. 0) & text "_1
  numeric =. < @: (0&".) @: (; @: (<;. 0)) & text "_1
  extract =. pattern & {
  using =. 1 & \
  or_maybe =. `
  ,(extract literal or_maybe numeric) using selectors
}}

read_many_files =: {{
  'fnames pattern' =. y
  ,&.>/"2 (-#pattern) ]\ ,(read_file @:(; &pattern)) "0 fnames    NB. *
}}

'words nums' =: read_many_files fn_in ; pattern
t1 =: (6!:1) ''    NB. time since engine start

'words nums' =: (~. words) ; words +//. nums    NB. *
'words nums' =: (\: nums)& { &.:>"_1 words ; nums
words =: ; nums < @ /:~/. words
t2 =: (6!:1) ''    NB. time since engine start

text =: , words ,. TAB ,. (": ,. nums) ,. LF
erase 'words' ; 'nums'
text =: (#~ ~: & ' ') text
text fwrite fn_out
erase < 'text'
t3 =: (6!:1) ''    NB. time since engine start

echo 'Read and parse input: ' , ": t1
echo 'Classify, sum, sort: ' , ": t2 - t1
echo 'Format and write output: ' , ": t3 - t2
echo 'Total time: ' , ": t3
echo ''
echo 'Finished. Waiting for a key...'
stdin ''
exit 0

The code above doesn't (yet) include any 9.04 features and runs OK with 9.03, but I found 9.04 slightly faster in general. I also found 9.04 a bit faster on Windows than on Linux, which is the opposite of what I saw with 9.03 (where the script ran faster on Linux); let's shrug that off as due to the 9.04 beta status and/or my antique PC. The results below are for beta 9.04 on Windows 10 (RAM usage taken from Windows Task Manager):

> jconsole.exe llil4.ijs big1.txt big2.txt big3.txt out.txt
Read and parse input: 1.501
Classify, sum, sort: 2.09
Format and write output: 1.318
Total time: 4.909

Finished. Waiting for a key...

Peak working set (memory): 376,456K

There are three star-marked lines. To patch for the new 9.04 features and enable parallelization, replace them with these counterparts:

{{ for. i. 3 do. 0 T. 0 end. }} ''
,&.>/"2 (-#pattern) ]\ ,;(read_file @:(; &pattern)) t.'' "0 fnames
'words nums' =: (~.t.'' words) , words +//. t.'' nums

As you see, the 1st line replaces a comment, while the 2nd and 3rd lines require only minor touches. The 2nd line launches reading and parsing of the input files in parallel. The 3rd line parallelizes filtering for unique words and summing numbers according to the word classification (somewhat redundant double work, even as it was, as I see it). The 1st line starts 3 additional worker threads; my CPU has no more cores anyway, and this script has no work that is easily dispatched to more workers. A rough C++ analogue of the parallel read is sketched after the timings below. Then:

Read and parse input: 0.992
Classify, sum, sort: 1.849
Format and write output: 1.319
Total time: 4.16
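
For readers who don't speak J, here is a rough C++ analogue of the parallel read (a sketch only; the helper names are made up and it mirrors the idea, not the J semantics): each input file is parsed in its own task, and the per-file results are concatenated afterwards, assuming the usual "word TAB count" lines.

// Sketch of the per-file parallel read; parse_file() stands in for J's
// read_file verb, and all names here are illustrative.
#include <fstream>
#include <future>
#include <string>
#include <vector>

struct Columns {                       // one words/nums pair per file
    std::vector<std::string> words;
    std::vector<long long>   nums;
};

static Columns parse_file(const std::string& fname) {
    Columns c;
    std::ifstream in(fname);
    std::string word;
    long long num;
    while (in >> word >> num) {        // "word<TAB>count" per line
        c.words.push_back(word);
        c.nums.push_back(num);
    }
    return c;
}

int main(int argc, char** argv) {
    std::vector<std::future<Columns>> tasks;
    for (int i = 1; i < argc; ++i)     // one task per input file, like t.''
        tasks.emplace_back(std::async(std::launch::async, parse_file, argv[i]));

    Columns all;                       // concatenate the results, like ,&.>/
    for (auto& t : tasks) {
        Columns c = t.get();
        all.words.insert(all.words.end(), c.words.begin(), c.words.end());
        all.nums.insert(all.nums.end(), c.nums.begin(), c.nums.end());
    }
    return 0;
}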

I would call my parallelization attempt, however crude, a success. Next is the output for our second "official" dataset in this thread:

> jconsole.exe llil4.ijs long1.txt long2.txt long3.txt out.txt
Read and parse input: 1.329
Classify, sum, sort: 0.149
Format and write output: 0.009
Total time: 1.487

########################################################

These are my results for the latest C++ solution (compiled with g++), to compare my efforts against:

$ ./llil2vec_11149482 big1.txt big2.txt big3.txt >vec.tmp
llil2vec start
get_properties      CPU time : 3.41497 secs
emplace set sort    CPU time : 1.04229 secs
write stdout        CPU time : 1.31578 secs
total               CPU time : 5.77311 secs
total wall clock time : 5 secs

$ ./llil2vec_11149482 long1.txt long2.txt long3.txt >vec.tmp
llil2vec start
get_properties      CPU time : 1.14889 secs
emplace set sort    CPU time : 0.057158 secs
write stdout        CPU time : 0.003307 secs
total               CPU time : 1.20943 secs
total wall clock time : 2 secs

$ ./llil2vec_11149482 big1.txt big2.txt big3.txt >vec.tmp
llil2vec (fixed string length=6) start
get_properties      CPU time : 2.43187 secs
emplace set sort    CPU time : 0.853877 secs
write stdout        CPU time : 1.33636 secs
total               CPU time : 4.62217 secs
total wall clock time : 5 secs

I noticed that the new C++ code, supposed to be faster, is actually slower (compared to llil2grt) with the "long" dataset, one of the two "official" datasets used in this thread.

Replies are listed 'Best First'.
Re^4: Rosetta Code: Long List is Long (faster - vec)(faster++, and now parallel)
by eyepopslikeamosquito (Archbishop) on Jan 11, 2023 at 11:49 UTC

    More impressive analysis from our mysterious anonymonk.

    I noticed that the new C++ code, supposed to be faster, is actually slower (compared to llil2grt) with the "long" dataset, one of the two "official" datasets used in this thread

    Good catch! I was so hyper-focused on trying to beat the lowest time for the simple "big1.txt big2.txt big3.txt" test case that most folks were using that I didn't even test the "long1.txt long2.txt long3.txt" test case. :( With a few seconds' thought it seems obvious that very long words will favour the hash-based approach over the vector-based one. I'm pleased in a way, because my hash-based solution is so much simpler and clearer than my vector-based one ... plus it encourages us to explore different approaches.
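
    To make that intuition concrete, here is a deliberately simplified sketch of the two storage strategies (illustrative only, not the actual llil code): the vector-based variant stores every key in a fixed-size char array sized for the longest key, while the hash-based variant pays only for the bytes each key actually needs.

    // Simplified contrast of fixed-width vs hash storage (not the real llil code).
    #include <array>
    #include <cstring>
    #include <string>
    #include <unordered_map>
    #include <vector>

    constexpr std::size_t MAX_STR_LEN = 6;        // fine for the 6-char big*.txt keys

    struct FixedKeyCount {                        // vector-based element
        std::array<char, MAX_STR_LEN + 1> key{};  // long keys force this up for EVERY element
        long long count = 0;
    };

    int main() {
        std::vector<FixedKeyCount> vec;           // great fit for short, uniform keys
        FixedKeyCount e;
        std::strncpy(e.key.data(), "abcdef", MAX_STR_LEN);
        e.count = 1;
        vec.push_back(e);

        std::unordered_map<std::string, long long> counts;  // hash-based alternative
        counts["a-very-long-word-as-in-longN.txt"] += 1;    // each key sized individually
        return 0;
    }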

    I'm happy for folks to continue to claim the fastest time for the short string big1.txt big2.txt big3.txt test case. That's like the 100 metre sprint. We should add a marathon world record for fastest long1.txt long2.txt long3.txt long4.txt long5.txt long6.txt test case. And maybe a 1500 metre world record for big1.txt big2.txt big3.txt long1.txt long2.txt long3.txt. :)

    Update: Timings for these three Olympic Events

    Run under Ubuntu on my modest laptop against the code here: Interim version of llil3grt.cpp and llil4vec-tbb.cpp.

    llil4vec easily won the 100 metre sprint; llil3grt easily won the marathon; llil4vec narrowly won the decider, the 1500 metres.

    --------------------- 100 metre sprint -----------------------------------
    $ ./llil3grt big1.txt big2.txt big3.txt >grt.tmp
    llil3grt start
    get_properties      CPU time : 2.63733 secs
    emplace set sort    CPU time : 1.1302 secs
    write stdout        CPU time : 1.17949 secs
    total               CPU time : 4.94712 secs

    $ NUM_THREADS=6 ./llil4vec-tbb big1.txt big2.txt big3.txt >4vec.tmp
    llil4vec-tbb (fixed string length=6) start
    use TBB
    get properties      time : 0.395106 secs
    sort properties     time : 0.325554 secs
    emplace set sort    time : 0.701055 secs
    write stdout        time : 0.541473 secs
    total               time : 1.96334 secs

    --------------------------------- 1500 metre -----------------------------
    $ ./llil3grt big1.txt big2.txt big3.txt long1.txt long2.txt long3.txt >big-long-grt.tmp
    llil3grt start
    get_properties      CPU time : 3.13399 secs
    emplace set sort    CPU time : 1.0814 secs
    write stdout        CPU time : 1.30358 secs
    total               CPU time : 5.51907 secs

    $ NUM_THREADS=6 ./llil4vec-tbb big1.txt big2.txt big3.txt long1.txt long2.txt long3.txt >big-long-4vec.tmp
    llil4vec-tbb start
    use TBB
    get properties      time : 1.05054 secs
    sort properties     time : 1.13865 secs
    emplace set sort    time : 1.16597 secs
    write stdout        time : 0.449603 secs
    total               time : 3.80495 secs

    ---------------------------------- marathon ------------------------------
    $ ./llil3grt long1.txt long2.txt long3.txt long4.txt long5.txt long6.txt >long-long-3grt.tmp
    llil3grt start
    get_properties      CPU time : 0.846717 secs
    emplace set sort    CPU time : 0.005067 secs
    write stdout        CPU time : 0.002561 secs
    total               CPU time : 0.854441 secs

    $ NUM_THREADS=6 ./llil4vec-tbb long1.txt long2.txt long3.txt long4.txt long5.txt long6.txt >long-long-4vec.tmp
    llil4vec-tbb start
    use TBB
    get properties      time : 2.36002 secs
    sort properties     time : 1.28398 secs
    emplace set sort    time : 0.118658 secs
    write stdout        time : 0.001721 secs
    total               time : 3.76457 secs
    ---------------------------------------------

Re^4: Rosetta Code: Long List is Long (faster - vec)(faster++, and now parallel)
by Anonymous Monk on Jan 10, 2023 at 17:54 UTC

    What follows is mostly my attempt to describe and explain, i.e. a fair amount of prose, perhaps not worth a copper (insert name of local coin of lowest denomination).

    First of all, thanks for the warm words, marioroy; I'm glad you liked this cute toy. I think you'll be particularly interested in parallelization and how they do it in J. And BTW, the key to press is Ctrl-D, to "slurp" from STDIN.

    Plus, to keep this node from being completely off-topic among the Perl congregation -- some monks might also be interested to know that, if any FFI module for Perl is allowed for work or play, it is very easy to execute J phrases by calling the DLL and reading/writing packed Perl scalars as J data. It's been a while since I had that fun, but here is, e.g., a link [1] to get started; it is about calling the J DLL from the J REPL, but I remember those experiments were quite useful for learning. Whoever gets interested will easily find more tutorials.

    In my 1st script, I actually posted code which is borderline buggy, and I have felt uneasy about it since the previous year:

    text =: , freads " 0 fn_in        NB. wrong
    text =: ; < @ freads " 0 fn_in    NB. fixed

    What's to the right of the comma in the 1st line reads into an N x M character table (then the comma converts it into a 1D list): number of files by longest file size, with shorter files padded. If the files were not the same size, the result would be erroneous. Further, if 100 files were 1 MB each and the 101st file were 1 GB, this would end with an "out of RAM" or similar error. Neither line is used in the modified code, but with this off my chest, let's move on.

    I noticed that the very slow part of my solution was (and still is) reading and parsing. Moreover, this trivial part (not the subsequent sorting!) is the main RAM consumer during the script run. Whatever I tried didn't help much. Then I decided that, if that's so -- let's make it at least potentially useful for the future.

    Instead of ad hoc code to consume strictly 2-column input, I wrote a function ("verb") to read from a (well-formed) TSV file with any number of columns, with a "pattern" provided to describe how to read the columns, as either text or numbers. In our case, the pattern is obviously 0 1. To keep the script interface consistent, this value is hard-coded, but it should of course be e.g. a CLI argument.
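
    For comparison, the same idea expressed outside J -- a column-type pattern driving a generic TSV reader -- might look roughly like the C++ sketch below (illustrative only; 0 marks a text column and 1 a numeric column, as in the J verb):

    // Sketch of a pattern-driven TSV reader: 0 = text column, 1 = numeric column.
    // It mirrors the idea of the J verb, not its implementation.
    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    struct TsvColumns {
        std::vector<std::vector<std::string>> text;   // one vector per 0 in the pattern
        std::vector<std::vector<long long>>   nums;   // one vector per 1 in the pattern
    };

    TsvColumns read_tsv(const std::string& fname, const std::vector<int>& pattern) {
        TsvColumns out;
        for (int p : pattern) {
            if (p == 0) out.text.emplace_back();
            else        out.nums.emplace_back();
        }
        std::ifstream in(fname);
        std::string line, field;
        while (std::getline(in, line)) {
            std::istringstream fields(line);
            std::size_t ti = 0, ni = 0;
            for (int p : pattern) {                   // consume columns per the pattern
                std::getline(fields, field, '\t');
                if (p == 0) out.text[ti++].push_back(field);
                else        out.nums[ni++].push_back(std::stoll(field));
            }
        }
        return out;
    }

    int main() {
        TsvColumns cols = read_tsv("big1.txt", {0, 1});   // word, count
        return cols.text.empty() ? 1 : 0;
    }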

    Part of what makes a TSV file "well-formed" for this verb is that the "text" doesn't contain the ASCII-32 space character. You may have noticed I commented out the PAD_CHAR definition, because changing it wouldn't be very useful. Some more substantial changes would be required to allow ASCII-32 (and then to pad with ASCII-0, perhaps), because the frame-fill for the literal data type is the space character, nothing else. I'm keeping the line commented out as a reminder.

    Which leads us to the essential question -- why keep textual data as a padded N x M table, instead of keeping strings of text (which may then be of any different lengths), each in its own "box"? A box in J is an atomic data type, sort of a pointer to whatever is inside. Data structures of any complexity are implemented with nested boxes.

    The answer is -- because boxes are relatively expensive [2]. A few thousand (or 1e5 or so) are OK. An array of 10 million boxes (the total number of lines in our dataset) easily eats a few GB of RAM and is very slow.
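
    In non-J terms, the tradeoff is roughly the one below (sizes are illustrative): a padded table is one flat allocation, whereas 10 million individually boxed strings mean 10 million small objects with their own headers and, for longer words, their own heap blocks.

    // Rough illustration of padded-table vs per-string ("boxed") storage.
    #include <string>
    #include <vector>

    int main() {
        const std::size_t n_words = 10'000'000;   // lines in the dataset
        const std::size_t width   = 8;            // longest word, space-padded

        // "Padded table": one contiguous block, n_words * width bytes.
        std::vector<char> padded(n_words * width, ' ');

        // "Boxed": one object per word; every std::string carries its own
        // length/capacity bookkeeping and, past the small-string limit,
        // its own heap allocation -- the analogue of 10 million J boxes.
        std::vector<std::string> boxed(n_words);
        return padded.empty() && boxed.empty();
    }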

    The cost of boxing also explains why I didn't use the DSV addon [3] from the standard library. For similar reasons (while I'm at explaining this stage of my decision-making route), I decided against symbols [4] (an irreducible and shared GST (global symbol table)? I don't know what to think of it).

    So, textual columns are parsed into padded 2D tables, but then I saw that slurping all the files into a single string prior to parsing is not required at all, if the files are not too tiny. And indeed, the RAM requirements for the script dropped significantly. Further, parsing file by file makes the obvious choice as to where to try to parallelize.

    To finish with data input: in the modified script the CRs are filtered out unconditionally, and, also unconditionally, NOT injected on output, on both OSes. And BTW, for fread and fwrite from the standard library there exist counterparts freads and fwrites that strip CRs. I don't know why, but they are slow compared to manual filtering; the latter is many seconds slower and wasn't even used in the original script. The modified script got rid of freads, too.

    Moving on to how sorting was done in the 1st script version -- actually with somewhat complicated and slow code, trying to sort word sub-ranges (per the same numeric count) without creating intermediate boxed arrays. In the modified script, this is replaced with just a short, simple phrase. As long as there are fewer than a few million unique counts, this is OK w.r.t. RAM usage (but the 1st version would get very slow by then anyway).

    This leads to a new verb in the 9.04 version (Key dyad [5]). I thought using it would help to "pack" pairs of words and numbers, then do a kind of "packed sort" trick. It turned out to be a few times slower; I didn't investigate and abandoned this route.

    Moving to data output: requiring the user to provide the "numeric width" or whatever it was called was quite silly on my part; computing the decimal logarithm of the maximum would be OK :) And even so, it turned out that J formats a numeric column (as opposed to a row, i.e. a 1D array) automatically, padded as required.

    My final thoughts: multithreading is a new feature in the 9.04 version, as I already said. The documentation says [6]:

    Even if your code does not create tasks, the system can take advantage of threadpool 0 to make primitives run faster

    At first I thought it meant that simply spawning workers would be enough for J to run in parallel -- kind of how e.g. PDL does it [7] (or tries to). It turned out not to be so, either because the script is very simple, or because this automatic parallelization is not yet enabled in the beta. We'll see.

    1. https://code.jsoftware.com/wiki/Guides/DLLs/Calling_the_J_DLL
    2. https://www.jsoftware.com/help/jforc/performance_measurement__tip.htm#_Toc191734575
    3. https://code.jsoftware.com/wiki/Addons/tables/dsv
    4. https://code.jsoftware.com/wiki/Vocabulary/sco
    5. https://code.jsoftware.com/wiki/Vocabulary/slashdot#dyadic
    6. https://code.jsoftware.com/wiki/Vocabulary/tcapdot#dyadic
    7. https://metacpan.org/dist/PDL/view/Basic/Pod/ParallelCPU.pod
    
Re^4: Rosetta Code: Long List is Long (outgunned?!)
by Anonymous Monk on Jan 17, 2023 at 18:44 UTC

    E-hm... hello? Are we still playing?

    Long thread is long. I should have known better before even beginning to think of mumbling "paralle..li...", given the massive barrage of fire that ensued immediately after :). I hope I chose the correct version to test, and my dated PC (the number of workers in particular) is a poor workbench, but:

    $ time ./llil3vec_11149482 big1.txt big2.txt big3.txt >vec6.tmp
    llil3vec (fixed string length=6) start
    get_properties      CPU time : 1.80036 secs
    emplace set sort    CPU time : 0.815786 secs
    write stdout        CPU time : 1.39233 secs
    total               CPU time : 4.00856 secs
    total wall clock time : 4 secs

    real    0m4.464s
    user    0m3.921s
    sys     0m0.445s

    $ time ./llil3vec_11149482_omp big1.txt big2.txt big3.txt >vec6.tmp
    llil3vec (fixed string length=6) start
    get_properties      CPU time : 2.06675 secs
    emplace set sort    CPU time : 0.94937 secs
    write stdout        CPU time : 1.40311 secs
    total               CPU time : 4.41929 secs
    total wall clock time : 4 secs

    real    0m3.861s
    user    0m4.356s
    sys     0m0.493s

    ----------------------------------------------

    Then I sent my workers into retirement to plant or pick flowers or something, i.e. (temporarily) reverted to single-threaded code, walked around (snow, no flowers), and made a few changes; here's a comparison of the previous and new versions:

    $ time ../j903/bin/jconsole llil4.ijs big1.txt big2.txt big3.txt out_j.txt
    Read and parse input: 1.6121
    Classify, sum, sort: 2.23621
    Format and write output: 1.36701
    Total time: 5.21532

    real    0m5.220s
    user    0m3.934s
    sys     0m1.195s

    $ time ../j903/bin/jconsole llil5.ijs big1.txt big2.txt big3.txt out_j.txt
    Read and parse input: 1.40811
    Classify, sum, sort: 1.80736
    Format and write output: 0.373946
    Total time: 3.58941

    real    0m3.594s
    user    0m2.505s
    sys     0m0.991s

    $ diff vec6.tmp out_j.txt
    $

    New script:

    NB. -----------------------------------------------------------
    NB. --- This file is "llil5.ijs"
    NB. --- Run as e.g.:
    NB.
    NB. jconsole.exe llil5.ijs big1.txt big2.txt big3.txt out.txt
    NB.
    NB. --- (NOTE: last arg is output filename, file is overwritten)
    NB. -----------------------------------------------------------

    pattern =: 0 1

    args =: 2 }. ARGV
    fn_out =: {: args
    fn_in =: }: args

    filter_CR =: #~ ~: & CR

    read_file =: {{
      'fname pattern' =. y
      text =. TAB, filter_CR fread fname
      text =. TAB (I. text = LF) } text
      selectors =. I. text = TAB
      width =. # pattern
      height =. width <. @ %~ # selectors
      append_diffs =. }: , 2& (-~/\)
      shuffle_dims =. (1 0 3 & |:) @ ((2, height, width, 1) & $)
      selectors =. append_diffs selectors
      selectors =. shuffle_dims selectors
      literal =. < @: (}."1) @: (];. 0) & text "_1
      numeric =. < @: (0&".) @: (; @: (<;. 0)) & text "_1
      extract =. pattern & {
      using =. 1 & \
      or_maybe =. `
      ,(extract literal or_maybe numeric) using selectors
    }}

    read_many_files =: {{
      'fnames pattern' =. y
      ,&.>/"2 (-#pattern) ]\ ,(read_file @:(; &pattern)) "0 fnames
    }}

    'words nums' =: read_many_files fn_in ; pattern
    t1 =: (6!:1) ''    NB. time since engine start

    idx =: i.~ words
    nums =: idx +//. nums
    idx =: nums </. ~. idx
    words =: (/:~ @: { &words)&.> idx
    erase < 'idx'
    nums =: ~. nums
    'words nums' =: (\: nums)& { &.:>"_1 words ; nums
    t2 =: (6!:1) ''    NB. time since engine start

    text =: ; words (, @: (,"1 _))&.(>`a:)"_1 TAB ,. (": ,. nums) ,. LF
    erase 'words' ; 'nums'
    text =: (#~ ~: & ' ') text
    text fwrite fn_out
    erase < 'text'
    t3 =: (6!:1) ''    NB. time since engine start

    echo 'Read and parse input: ' , ": t1
    echo 'Classify, sum, sort: ' , ": t2 - t1
    echo 'Format and write output: ' , ": t3 - t2
    echo 'Total time: ' , ": t3

    exit 0

    echo ''
    echo 'Finished. Waiting for a key...'
    stdin ''
    exit 0

    ----------------------------------------------

    I don't know the C++ "tools" chosen above ("modules" or whatever they are called) at all; is capping the length to "6" in the code just a matter of convenience? Could any longer value be hard-coded instead, like "12" or "25" (with the obvious other fixes)? I mean, no catastrophic (cubic, etc.) slow-down would hit the sorting after some threshold, forcing one to comment out the define and use an alternative set of "tools"? Perhaps input would be slower if cutting into unequally long words is expected?

    Anyway, here's the output if the define is commented out:

    $ time ./llil3vec_11149482_no6 big1.txt big2.txt big3.txt >vec6.tmp
    llil3vec start
    get_properties      CPU time : 3.19387 secs
    emplace set sort    CPU time : 0.996694 secs
    write stdout        CPU time : 1.32918 secs
    total               CPU time : 5.5198 secs
    total wall clock time : 6 secs

    real    0m6.088s
    user    0m5.294s
    sys     0m0.701s

    $ time ./llil3vec_11149482_no6_omp big1.txt big2.txt big3.txt >vec6.tmp
    llil3vec start
    get_properties      CPU time : 3.99891 secs
    emplace set sort    CPU time : 1.13424 secs
    write stdout        CPU time : 1.41112 secs
    total               CPU time : 6.54723 secs
    total wall clock time : 4 secs

    real    0m4.952s
    user    0m6.207s
    sys     0m0.842s

    Should my time be compared to these? :) (Blimey, my solution doesn't have to compete when participants are capped selectively (/grumpy_on around here).) Or I can use the powerful secret magic turbo mode:

    turbo_mode_ON =: {{
      assert. 0 <: c =. 8 - {: $y
      h =. (3 (3!:4) 16be2), ,|."1 [3 (3!:4)"0 (4:,#,1:,#) y
      3!:2 h, ,y ,"1 _ c # ' '
    }}

    turbo_mode_OFF =: {{
      (5& }. @: (_8& (]\)) @: (2& (3!:1))) &.> y
    }}

    Inject these definitions, plus this couple of lines immediately after the t1 =: line and before the t2 =: line, respectively:

    words =: turbo_mode_ON words
    words =: turbo_mode_OFF words

    Aha:

    $ time ../j903/bin/jconsole llil5.ijs big1.txt big2.txt big3.txt out_j.txt
    Read and parse input: 1.40766
    Classify, sum, sort: 1.24098
    Format and write output: 0.455868
    Total time: 3.1045

    real    0m3.109s
    user    0m1.815s
    sys     0m1.210s

    (and no cutting into pieces of pre-defined equal length was used ...yet) :)
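
    For the curious, the "turbo mode" above is essentially reinterpreting the space-padded words as 8-byte integers, so that grouping and sorting operate on numbers instead of character rows. Below is a hedged C++ rendering of the same idea, assuming words of at most 8 bytes (which is what the assert. checks); this sketch puts the first character into the most significant byte so that integer comparisons match padded-string comparisons, without claiming the J one-liner maps bytes the same way.

    // Sketch of the "turbo" idea: pack each space-padded word (<= 8 bytes) into a
    // 64-bit integer whose comparison order equals the padded-string order.
    // It renders the concept only; the J code does it with 3!:2 / 3!:4.
    #include <algorithm>
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    static std::uint64_t pack_word(const std::string& w) {
        char buf[8];
        std::memset(buf, ' ', sizeof buf);                 // space padding, as in J
        std::memcpy(buf, w.data(), std::min<std::size_t>(w.size(), sizeof buf));
        std::uint64_t v = 0;
        for (char c : buf)                                 // first char -> most significant byte
            v = (v << 8) | static_cast<unsigned char>(c);
        return v;
    }

    static std::string unpack_word(std::uint64_t v) {
        std::string w(8, ' ');
        for (int i = 7; i >= 0; --i) { w[i] = static_cast<char>(v & 0xFF); v >>= 8; }
        while (!w.empty() && w.back() == ' ') w.pop_back();  // strip the padding again
        return w;
    }

    int main() {
        std::vector<std::string> words = {"tango", "alpha", "zulu"};
        std::vector<std::uint64_t> packed;
        for (const auto& w : words) packed.push_back(pack_word(w));
        std::sort(packed.begin(), packed.end());             // integer sort == padded-string sort
        std::vector<std::string> sorted_words;
        for (auto v : packed) sorted_words.push_back(unpack_word(v));
        return sorted_words.front() == "alpha" ? 0 : 1;
    }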

    ----------------------------------------------

    I can revert to parallel reading/parsing anytime, with the effect shown in the parent node. As implemented, it was kind of passive; but files can be of unequal sizes, or there may be just one huge single file. I think a serious solution would probe inside to find newlines at approximate addresses, then pass the chunk coordinates to workers to parse in parallel.
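
    A sketch of that "probe for newlines at approximate addresses" idea (illustrative C++; the helper name is made up, and it assumes the whole file is already in memory and at least one chunk is requested): pick evenly spaced offsets, advance each to the next newline, and hand the resulting [begin, end) ranges to the workers.

    // Sketch: split one big in-memory file into chunks on newline boundaries,
    // so each worker parses only complete lines. Error handling omitted.
    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::pair<std::size_t, std::size_t>>
    chunk_on_newlines(const std::string& text, std::size_t n_chunks) {
        std::vector<std::pair<std::size_t, std::size_t>> ranges;
        const std::size_t approx = text.size() / n_chunks;   // nominal chunk size
        std::size_t begin = 0;
        for (std::size_t i = 1; i <= n_chunks && begin < text.size(); ++i) {
            std::size_t end = (i == n_chunks) ? text.size()
                                              : std::min(text.size(), i * approx);
            end = std::max(begin, end);
            end = text.find('\n', end);                      // advance to the next newline
            end = (end == std::string::npos) ? text.size() : end + 1;
            ranges.emplace_back(begin, end);                 // chunk of complete lines
            begin = end;
        }
        return ranges;                                       // pass each range to a worker
    }

    int main() {
        const std::string text = "tango\t1\nalpha\t2\nzulu\t3\n";
        return chunk_on_newlines(text, 2).size() >= 1 ? 0 : 1;
    }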

    The puny 2-worker attempt to sort, in the parent, was just a kind of #pragma omp parallel sections... thing with 2 sections; no use sending bus-loads of workers and expecting quiet fans. There's some hope for "parallelizable primitives" in the release (not beta) 9.04 or later. Maybe it's a long time to wait. Or, if I could write code to merge 2 sorted arrays faster than the built-in primitive sorts either half -- then, bingo, I'd have a fast multi-threaded merge sort. But no success yet; the built-in sorts one large array faster, single-threaded.
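
    For reference, the shape of that plan in a language with threads and a merge primitive looks like the sketch below (C++, illustrative only); the open question is exactly the caveat in the comments -- the final merge is single-threaded and has to be cheap enough to beat sorting the whole array at once.

    // Sketch: sort the two halves in parallel, then merge them.
    #include <algorithm>
    #include <cstdint>
    #include <future>
    #include <vector>

    void parallel_two_way_sort(std::vector<std::uint64_t>& v) {
        auto mid  = v.begin() + v.size() / 2;
        auto left = std::async(std::launch::async,
                               [&] { std::sort(v.begin(), mid); });  // worker: first half
        std::sort(mid, v.end());                                     // main thread: second half
        left.wait();
        std::inplace_merge(v.begin(), mid, v.end());  // single-threaded merge: must be cheaper
                                                      // than just sorting v in one go to win
    }

    int main() {
        std::vector<std::uint64_t> v = {5, 3, 9, 1, 7, 2, 8, 4};
        parallel_two_way_sort(v);
        return std::is_sorted(v.begin(), v.end()) ? 0 : 1;
    }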

      What a delight for our Anonymonk friend to come back. Thanks to you, we tried parallel :).

      ... but files can be of unequal sizes, or there may be just one huge single file. I think a serious solution would probe inside to find newlines at approximate addresses, then pass the chunk coordinates to workers to parse in parallel.

      Chuma mentions 2,064 input files in the initial "Long list is long" thread. Processing a list of files in parallel is well suited to this use case, given the many files. Back in 2014, I wrote utilities that support both chunking and list modes: mce_grep and egrep.pl, via --chunk-level={auto|file|list}.

      llil5p.ijs

      I took llil5.ijs and created a parallel version named llil5p.ijs, based on code-bits from your prior post. The number of threads can be specified via the NUM_THREADS environment variable.

      $ diff -u llil5.ijs llil5p.ijs
      --- llil5.ijs   2023-01-18 09:25:14.041515970 -0600
      +++ llil5p.ijs  2023-01-18 09:25:58.889669110 -0600
      @@ -9,6 +9,12 @@
       
       pattern =: 0 1
       
      +nthrs =: 2!:5 'NUM_THREADS'   NB. get_env NUM_THREADS
      +{{
      +  if. nthrs do. nthrs =: ".nthrs end.   NB. string to integer conversion
      +  for. i. nthrs do. 0 T. 0 end.         NB. spin nthrs
      +}} ''
      +
       args =: 2 }. ARGV
       fn_out =: {: args
       fn_in =: }: args
      @@ -44,7 +50,7 @@
       
       read_many_files =: {{
         'fnames pattern' =. y
      -  ,&.>/"2 (-#pattern) ]\ ,(read_file @:(; &pattern)) "0 fnames
      +  ,&.>/"2 (-#pattern) ]\ ,;(read_file @:(; &pattern)) t.'' "0 fnames
       }}
       
       'words nums' =: read_many_files fn_in ; pattern

      llil5tp.ijs

      Next, I applied the turbo update to the parallel version and named it llil5tp.ijs.

      $ diff -u llil5p.ijs llil5tp.ijs
      --- llil5p.ijs  2023-01-18 09:25:58.889669110 -0600
      +++ llil5tp.ijs 2023-01-18 09:26:01.553736512 -0600
      @@ -21,6 +21,16 @@
       
       filter_CR =: #~ ~: & CR
       
      +turbo_mode_ON =: {{
      +  assert. 0 <: c =. 8 - {: $y
      +  h =. (3 (3!:4) 16be2), ,|."1 [3 (3!:4)"0 (4:,#,1:,#) y
      +  3!:2 h, ,y ,"1 _ c # ' '
      +}}
      +
      +turbo_mode_OFF =: {{
      +  (5& }. @: (_8& (]\)) @: (2& (3!:1))) &.> y
      +}}
      +
       read_file =: {{
         'fname pattern' =. y
      @@ -56,6 +66,7 @@
       
       'words nums' =: read_many_files fn_in ; pattern
       t1 =: (6!:1) ''    NB. time since engine start
      +words =: turbo_mode_ON words
       
       idx =: i.~ words
       nums =: idx +//. nums
      @@ -65,6 +76,7 @@
       
       nums =: ~. nums
       'words nums' =: (\: nums)& { &.:>"_1 words ; nums
      +words =: turbo_mode_OFF words
       t2 =: (6!:1) ''    NB. time since engine start
       
       text =: ; words (, @: (,"1 _))&.(>`a:)"_1 TAB ,. (": ,. nums) ,. LF
