Re^5: Threads From Hell #2: How To Parse A Very Huge File

by marioroy (Prior)
on May 24, 2015 at 16:13 UTC


in reply to Re^4: Threads From Hell #2: How To Parse A Very Huge File
in thread Threads From Hell #2: How To Search A Very Huge File [SOLVED]

The testing was done on a late 2013 MacBook Pro (Haswell Core i7) at 2.6 GHz with 1600 MHz memory, running Parallels Desktop 9.0. Because of repeated testing, the grep/wc commands and Perl scripts likely read the file from the OS-level file cache.

Re^6: Threads From Hell #2: How To Parse A Very Huge File
by BrowserUk (Patriarch) on May 24, 2015 at 17:39 UTC
    the file likely residing in OS level file cache from repeated testing.

    Indeed.

    That's why I used a 10GB file for my testing. I've only got 8GB of RAM, so there's no way for the file to be read from cache on subsequent tests.
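    (A minimal sketch of one way to build such a larger-than-RAM test file; the file name and line contents here are invented placeholders, not taken from the actual test:)

    use strict;
    use warnings;

    # Build a ~10 GiB file of non-matching lines, so that repeated
    # benchmark runs on an 8 GB machine cannot be served from the
    # OS file cache. Name and contents are placeholders.
    my $target = 10 * 1024**3;                 # 10 GiB in bytes
    my $block  = "nose cuke fred\n" x 4096;    # ~60 KiB per write

    open my $FH, '>', 'very_huge.file' or die "open: $!";
    print {$FH} $block for 1 .. int( $target / length $block ) + 1;
    close $FH;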

    In the real world where the file being searched is coming off a disk or SSD, there is no benefit to multi-tasking grep.

    Even in the extremely rare case of grepping the same file multiple times, although your numbers show a reduction in elapsed time, the CPU usage is actually 2.527/2.127 * 100 = ~19% higher.

    If the user is (for want of a better term) an end-user who types the command and hits enter, the one second or so saved is probably less time than it took him to decide what to type and to type it; and certainly less than he will take to decide what to do with the information it produces.

    On the other hand, if the user is a sysadmin trying to balance the needs of many processes across a farm of servers, using that extra 19% of CPU resource is probably a bad thing.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority". I'm with torvalds on this
    In the absence of evidence, opinion is indistinguishable from prejudice. Agile (and TDD) debunked

      The OP seemed interested in whether parallelism is possible for such a task; please disregard my posts if I have misunderstood. In the spirit of parallelism, I tested a 20 GiB file under the host OS (a laptop with 16 GiB of RAM), comparing the grep command, bin/mce_grep, examples/egrep.pl, and the script using MCE::Loop.

      Recap: bin/mce_grep is a parallel wrapper for the grep command; examples/egrep.pl is 100% Perl code.

      I am getting the impression that you do not like MCE. If that is the case, then I should refrain from posting here. Have you tried MCE against your 10 GiB file, e.g. bin/mce_grep or examples/egrep.pl?

      $ ls -lh very_huge.file
      -rw-r--r-- 1 mario staff 20G May 24 14:53 very_huge.file

      ## grep command

      $ time grep karl very_huge.file
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl

      real 6m47.048s ( 407 seconds )
      user 6m42.372s
      sys  0m 4.669s

      ## bin/mce_grep

      $ time ./MCE-1.608/bin/mce_grep karl very_huge.file
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl

      real 2m17.003s ( 137 seconds )
      user 17m 9.223s
      sys  0m33.223s

      ## examples/egrep.pl

      $ time ./MCE-1.608/examples/egrep.pl karl very_huge.file
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl

      real 0m26.447s
      user 0m22.527s
      sys  0m 8.459s

      ## MCE::Loop script

      $ time ./mce_loop_script.pl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      nose cuke karl
      Took 25.650 seconds

      real 0m25.764s
      user 0m42.494s
      sys  0m 7.264s
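      (Reading the timings: where user time exceeds real time, as with bin/mce_grep and the MCE::Loop script, the work ran on several cores at once. bin/mce_grep's much larger total CPU time presumably reflects the overhead of feeding chunks to external grep processes.)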

      Below is the script using MCE::Loop.

      use MCE::Loop;
      use Time::HiRes qw( time );

      MCE::Loop::init( { max_workers => 4, use_slurpio => 1 } );

      my $start   = time;
      my $pattern = 'karl';

      my @result = mce_loop_f {
          my ($mce, $slurp_ref, $chunk_id) = @_;

          ## Quickly determine if a match is found.
          ## Basically, only process slurped chunk if true.
          if ($$slurp_ref =~ /$pattern/im) {
              my @matches;

              open my $MEM_FH, '<', $slurp_ref;
              binmode $MEM_FH, ':raw';

              while (<$MEM_FH>) {
                  push @matches, $_ if (/$pattern/);
              }

              close $MEM_FH;

              MCE->gather(@matches);
          }
      } 'very_huge.file';

      print join('', @result);
      printf "Took %.3f seconds\n", time - $start;
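      (A note on the design: with use_slurpio => 1, each worker receives a reference to a raw slurped chunk rather than individual lines, so the /$pattern/ pre-check can discard whole chunks cheaply before any line-by-line work is done.)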

      I have taken the time to answer the OP's request -- free time. It is not worth it anymore at this site, especially when you (being at the Pope level) seem to disapprove of MCE.

      Best regards to all, -mario

        I tested a 20 GiB file under the host OS

        Okay. Let's do a little math:

        1. grep: 21474836480 / 407 * 8 = 422109800 == 422 Mbits/s.

          That is very fast. Way faster than my brand new disk and SSD; and equals the performance of the PCIe SSDs one of my clients recently fitted to their servers.

          Very fast, but believable.

        2. mce_grep: 21474836480 / 137 * 8 = 1254005048 == 1.2 Gbits/s.

          That is faster than any single device or interface that I have heard of.

        3. egrep.pl: 21474836480 / 26.447 * 8 = 6495961426 == 6.5 Gbits/s.

          That's getting up there with the bandwidth of the PCI Express 3.1 specification (8 GT/s); but as yet there are no devices available that support that!

        4. mce_loop_script: 21474836480 / 25.650 * 8 = 6697804750 == 6.7 Gbits/s.

          That would give the Intel QuickPath Interconnect processor internal bus a run for its money on some of the low-powered, low clock-speed processors.
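        For reference, the arithmetic above can be reproduced with a few lines of Perl (the size in bytes and the elapsed times are taken from the transcript):

        use strict;
        use warnings;

        # 20 GiB in bytes, and the elapsed (real) times from the transcript.
        my $size    = 20 * 1024**3;    # 21474836480
        my %elapsed = (
            'grep'            => 407,
            'mce_grep'        => 137,
            'egrep.pl'        => 26.447,
            'mce_loop_script' => 25.650,
        );

        for my $cmd ( sort { $elapsed{$b} <=> $elapsed{$a} } keys %elapsed ) {
            my $bps = $size / $elapsed{$cmd} * 8;    # implied bits per second
            printf "%-16s %12.0f bits/s  (%.2f Gbit/s)\n", $cmd, $bps, $bps / 1e9;
        }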

        Sorry, but unless you have this file distributed across multiple spindles attached via multiple 16-lane PCIe cards, or you're using a system with 32 GB of RAM and pre-caching the file there as you were earlier, those numbers just don't add up.

        especially when you (being at the Pope level) seem to disapprove of MCE.

        I don't disapprove of MCE.

        I can see that for tasks where the IO is a small part of the overall processing time -- for example, fuzzy searching for many substrings against huge DNA sequences -- MCE provides a much-needed solution for distributing the processing of a common dataset, one for which threads (because of the slowness and gratuitous memory usage of threads::shared) simply have no good answer.

        For those types of processing, MCE is a breath of fresh air, and I applaud you for it.
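        To make that concrete, here is a minimal sketch of such a CPU-bound, shared-dataset workload; the sequences, motifs, and worker count are invented for illustration:

        use strict;
        use warnings;
        use MCE::Map;

        MCE::Map::init( { max_workers => 4 } );

        # Invented data: 200 random 10,000-base "DNA sequences" and a few motifs.
        my @sequences = map {
            join '', map { (qw(A C G T))[ int rand 4 ] } 1 .. 10_000
        } 1 .. 200;
        my @motifs = qw( ACGTACGT TTAGGG CCCTAA );

        # Each worker scans its share of the sequences; IO is negligible,
        # so nearly all of the elapsed time is computation.
        my @counts = mce_map {
            my $seq = $_;
            my $n   = 0;
            for my $m (@motifs) {
                $n++ while $seq =~ /\Q$m\E/g;
            }
            $n;
        } @sequences;

        my $total = 0;
        $total += $_ for @counts;
        print "total motif hits: $total\n";

        Here the per-item work dwarfs the cost of shipping data to the workers, which is exactly the regime where MCE's scaling pays off.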

        But the numbers you are posting for this single file, single pass, simple search application seem to defy the laws of Physics.


