 
PerlMonks  

Re^2: Optimise file line by line parsing, substitute SPLIT

by vsespb (Chaplain)
on Jun 03, 2013 at 13:54 UTC [id://1036761]


in reply to Re: Optimise file line by line parsing, substitute SPLIT
in thread Optimise file line by line parsing, substitute SPLIT

That's true, but not always.

Sometimes you don't need to process even 1% of the data.

You just read it, split, and drop the 99.9% of lines where field1 <> 'abcd' (that's where SQL can help).

Or you read webserver logs into an in-memory hash (grouped by IP) and then score new site visitors in real time (you only need access to the records for a particular IP, and SQL would be slower).

Or you read a list of files from a text file, read the file listing from disk, and then compare the two in memory (no use for SQL).

Or the general case: you read data from text files (1M lines) and skip all records which are not already in another in-memory hash (10K entries).
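A minimal sketch of that last scenario (the field layout, key values, and sample lines here are made up for illustration):

```perl
#!/usr/bin/perl
# Keep only records whose first field is already in a small
# in-memory hash; everything else is dropped after the split.
use strict;
use warnings;

my %wanted = map { $_ => 1 } qw( abcd efgh );      # the small "10K" hash

my @lines = ( "abcd\t1\tone", "zzzz\t2\ttwo", "efgh\t3\tthree" );

my @kept;
for my $line ( @lines ) {
    my ( $field1, @rest ) = split /\t/, $line;
    next unless $wanted{$field1};                  # drop non-matching records
    push @kept, [ $field1, @rest ];
}
printf "Kept %d of %d records\n", scalar @kept, scalar @lines;
```

The point is that split() runs on every line even though only a fraction of the records survive the filter.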


Replies are listed 'Best First'.
Re^3: Optimise file line by line parsing, substitute SPLIT
by BrowserUk (Patriarch) on Jun 03, 2013 at 14:11 UTC

    When you post code that does any one of those things you cite, more quickly than you can read the file and do nothing, I'll stump up for a nice polyurethane "Code Magician of the Year" award and send it to you.


    With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
    Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
    "Science is about questioning the status quo. Questioning authority".
    In the absence of evidence, opinion is indistinguishable from prejudice.
      more quickly than you can read the file and do nothing

      It does not have to be more quickly, just a comparable time. 20%-30% is already significant.

      Also, the idea that the whole application's run time (from start to finish) is what matters is a bit wrong.

      Often the startup time (when the file is actually read) is what is significant; after startup the application does something useful (possibly blocked on disk/network IO or waiting for user action) until the system reboots.

      Do you want me to paste code where split() takes more than 20% of the time when I just read a file into memory and skip some/most of the records?
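The kind of overhead being claimed here can be measured with the core Benchmark module; a rough sketch, with made-up data (11 tab-separated fields per line, as in the OP):

```perl
#!/usr/bin/perl
# Compare reading lines and doing nothing vs. reading lines and
# split()ing each one, keeping only a tiny fraction of records.
use strict;
use warnings;
use Benchmark qw( cmpthese );

my @lines = map { join( "\t", map { int rand 1000 } 1 .. 11 ) . "\n" } 1 .. 10_000;

cmpthese( -1, {
    read_only => sub {
        my $n = 0;
        $n++ for @lines;                 # touch each line, do nothing else
    },
    read_split => sub {
        my $n = 0;
        for ( @lines ) {
            my @fields = split /\t/;     # pay for split() on every line
            $n++ if $fields[6] == 500;   # keep a tiny fraction
        }
    },
} );
```

The printed table shows the relative rates of the two loops; the gap between them is the cost attributable to split().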

        Do you want me to paste code where split() takes more {blah}

        I want you to post code -- directly comparable to the OPs -- where doing something takes longer than doing nothing.

        But, if you really want to play, show me code that filters a 2-million-line file of 11 TAB-separated fields on the value of a field whose number and filter value I supply on the command line, more quickly than:

        #! perl -slw
        use strict;
        use Time::HiRes qw[ time ];

        our $FNO //= 6;
        our $V   //= 500;

        my $start = time;
        my @filtered;
        while( <> ) {
            my @fields = split( "\t", $_ );
            $fields[ $FNO ] == $V and push @filtered, $_;
        }

        printf "Took %f seconds\n", time() - $start;
        printf "Kept %u records\n", scalar @filtered;

        __END__
        C:\test>1036737 -FNO=6 -V=500 < numbers.tsv
        Took 19.072147 seconds
        Kept 2005 records

        C:\test>1036737 -FNO=6 -V=500 < numbers.tsv
        Took 19.021369 seconds
        Kept 2005 records

Re^3: Optimise file line by line parsing, substitute SPLIT
by hdb (Monsignor) on Jun 03, 2013 at 14:17 UTC

    In this kind of application, try filtering first, e.g. next unless /^abcd/, and only split if you need the fields separated.

      Yes, sure, if I filter by the first field. Otherwise split+regexp will be slower than just a regexp, or than split+comparison.
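The filter-first approach above can be sketched like this (the pattern and sample lines are made up; in real code the regexp must anchor on the field you are filtering by):

```perl
#!/usr/bin/perl
# Reject non-matching lines with a cheap regexp before paying
# for split(); only the surviving lines get split into fields.
use strict;
use warnings;

my @lines = ( "abcd\t1", "wxyz\t2", "abcd\t3" );

my @kept;
for ( @lines ) {
    next unless /^abcd\t/;            # cheap filter on the first field
    push @kept, [ split /\t/ ];       # split only the survivors
}
printf "Split %d of %d lines\n", scalar @kept, scalar @lines;
```

As noted in the reply, this only wins when the filter field is cheap to match without splitting, such as the first field of the line.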
