Re: is perl the best tool for this job? (emphatically Yes!)

by BrowserUk (Patriarch)
on Oct 19, 2003 at 16:08 UTC


in reply to is perl the best tool for this job?

Perl wins hands down (IMO) for processing fixed length data.

The following interactive session shows me slurping a 100MB file into memory and scanning its 6.5 million 128-bit chunks for the bit pattern 0xffffffffffffffffffffffffffffffff. The pattern is never found, but the entire search took just 49 seconds on my 233MHz machine.

open F, '<', 'e:\hugefile' or die $^E;
binmode F;
$file = do{ local $/ = \(100*1024*1024); <F> };
print length $file;
104857600
print scalar localtime;
for( my $i = 0; $i < (100*1024*1024); $i += 16 ) {
    print $i if substr( $file, $i, 16 ) eq ( "\xff" x 16 );
}
print scalar localtime;
Sun Oct 19 16:00:32 2003
Sun Oct 19 16:01:21 2003

The whole 'program' took maybe 3 or 4 minutes to write. Try doing that with C :-).

The total memory usage for the program was a little over 110MB.

The secret when handling large lumps of fixed-length data is not to break them up into an array of 6.5 million little strings. Instead, leave the data in a single large string and index into it using substr; this allows very memory- and CPU-efficient processing.

You can then use index to search very quickly for fixed patterns, and even apply regexes, either to the 100MB as a whole or to individual fields, using substr as an lvalue.
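For instance (a minimal sketch; the four-byte pattern and the record number are made up for illustration, and $file is the slurped buffer from above):

# index finds a fixed byte pattern quickly:
my $pos = index( $file, "\xde\xad\xbe\xef" );

# a regex can run over the whole 100MB buffer at once:
print "match starts at $-[0]" if $file =~ /\xff{4}/;

# and substr as an lvalue modifies one 16-byte field in place,
# without splitting the buffer into millions of small strings:
substr( $file, 5 * 16, 16 ) = "\x00" x 16;    # zero out record 5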

Your C program may ultimately be a tad more efficient, but I bet it takes you longer to write.

If you don't have 100MB of RAM to spare, processing the file in one pass in a while loop by setting $/ = \(16); is also very fast; and if you need random access, seek and tell make that easy too.
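A sketch of that one-pass loop, using the same file and pattern as above:

open F, '<', 'e:\hugefile' or die $^E;
binmode F;
{
    local $/ = \16;    # each <F> now returns one 16-byte (128-bit) record
    while( my $rec = <F> ) {
        if( $rec eq "\xff" x 16 ) {
            print 'found at byte offset ', tell( F ) - 16;
            last;
        }
    }
}
# for random access, seek jumps straight to record $n:
# seek F, $n * 16, 0;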


Update: Whilst the code above works well, a few minor tweaks make it run substantially quicker.

First.

my $file = do{ local $/ = \(100*1024*1024); <FILE> };

causes peak memory usage substantially greater than is required. I can only assume that an intermediate buffer is used somewhere. Whilst the extra memory is quickly returned to the OS (under Win, anyway), this can be avoided by recoding it as

my $file; { local $/ = \(100*1024*1024); $file = <FILE>; }

This has the additional benefit that the load time for the 100MB, from a compressed file on a so-so speed disk, is cut from 15 seconds to 5 seconds.

Additionally, whilst looping over the string with substr in 16-byte chunks was pretty quick, using index to search the whole string in a single pass is substantially quicker: an order of magnitude quicker, in fact, at 4 seconds!

This does mean that if a match is found, you have to test the position returned by index modulo 16 to ensure the match didn't span a 16-byte boundary, but the performance gain makes the housekeeping worthwhile.

#! perl -slw
use strict;

open F, '<', 'e:\100MB' or die $!;
binmode F;

print 'Slurping... ', scalar localtime;
my $file;
{
    local $/ = \(100*1024*1024);
    $file = <F>;
}
print 'Slurped. Searching...', scalar localtime;

print 'Not found' unless index( $file, "\xff" x 16 );

print 'Searched. ', scalar localtime;

__END__
P:\test>junk2
Slurping... Mon Oct 20 00:29:21 2003
Slurped. Searching...Mon Oct 20 00:29:26 2003
Searched. Mon Oct 20 00:29:30 2003

Examine what is said, not who speaks.
"Efficiency is intelligent laziness." -David Dunham
"Think for yourself!" - Abigail
Hooray!

Replies are listed 'Best First'.
Re: Re: is perl the best tool for this job? (emphatically Yes!)
by spurperl (Priest) on Oct 20, 2003 at 07:09 UTC
    Thanks for your very insightful answer! PerlMonks is such a great place because people like you take the time to research questions and give great replies like this :-)

    A few points:

    My machine is Win2k on a P4 at 1.8 GHz with 256 MB of memory.
    The file loading takes 22 seconds. The first search method (the loop with substr) takes 2 minutes; the second takes 1 second. What concerns me, though, is that neither reported that the sequence wasn't found in a completely random binary file (the probability of 0xff x 16 appearing at any given position is 2**-128, so it's quite unlikely to be there).
    I wonder about the speed differences... what makes my program run slower on a far stronger PC? A bad Perl implementation (ActivePerl 5.6)?

    Also, could you please elaborate on the use of the following line:

    $/ = \(100*1024*1024)

    I only used $/ as in undef $/ to read a file whole, or to set a record separator. What does your line do?

    ----------- Update ------------

    I think I figured out the performance problem. My PC usually runs at 190 MB of memory usage, which means that slurping 100MB pushed it into virtual memory, which naturally degrades performance.
    Now I read as follows:

    until( eof(FH) ) {
        ...
        read( FH, $val, 128 );
        ...
    }
    And then just compare to the wanted value. The whole process (which now combines reading & testing) takes 28 seconds.

    By the way, I was wrong about another thing. Your first method (with substr) works correctly - it doesn't find the string. The second method (with index) seems to find a string, or at least thinks it does, which is probably wrong (look at your test output; it's obvious there too).
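    (That is indeed the culprit: index returns -1 when the pattern is absent, and -1 is true, so print 'Not found' unless index(...) can never report a miss. A one-line sketch of an explicit test:)

    print 'Not found' if index( $file, "\xff" x 16 ) == -1;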

      Sorry, I should have explained that.

      When slurping a complete file, I've found that it pays huge dividends to tell perl how big the file is by setting $/ = \(filesize);. This allows perl to pre-allocate the required memory in a single request to the OS, and then read the file in a single call to the OS.

      If you just set $/ = undef;, perl doesn't know how big the file it is about to read is. It therefore allocates a pre-determined lump of memory (about 16KB on my system) and reads to fill it. It then checks whether there is more, extends the buffer by another 16KB, and reads the next 16KB chunk. With a 100MB file, that requires some 6400 allocations/reallocations (with copying) and 6400 reads.

      Needless to say, this is extremely slow with very large files. I'd like to give a comparative figure here, but I've never had the patience to wait long enough for it to complete on my system. I set it going about 5 minutes before starting to type this reply and it still hasn't finished. A crude measure, going by the task manager memory for the process, suggests that after 8 minutes it has read around 25%. However, as the buffer grows by 16KB at each reallocation, the amount of memory copied each time also grows (slowly). The effect is that the next 25% will take considerably longer than the first, the next considerably longer again, and the last 25% longer still.

      This is one of those situations where giving perl a helping hand by supplying a little extra information has huge performance bonus.
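      If you want to see the difference on your own machine, here's a rough sketch (Time::HiRes is core as of 5.8; the file name is the 100MB test file from above):

      use strict;
      use Time::HiRes qw( time );

      for my $mode ( 'sized', 'undef' ) {
          open F, '<', 'e:\100MB' or die $!;
          binmode F;

          my $start = time;
          my $data;
          {
              # \(-s F) hands perl the file size up front, so the buffer
              # is allocated once; undef makes it grow piecemeal, read by read.
              local $/ = $mode eq 'sized' ? \( -s F ) : undef;
              $data = <F>;
          }
          printf "%s slurp took %.2f seconds\n", $mode, time - $start;
          close F;
      }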


      With respect to using index: remember that it is possible for it to find a pattern that doesn't actually exist in any of your 16-byte chunks. If the last n bytes of one chunk combine with the first (16-n) bytes of the next chunk to produce the pattern you are looking for, then index will find that combination and report success.

      It therefore becomes necessary to verify that a 16-byte chunk boundary isn't being straddled. This is as easy as

      my $p = index( $data, $bit_pattern );
      while( $p != -1 and ($p % 16) != 0 ) {
          $p = index( $data, $bit_pattern, $p + 1 );
      }
      print "Chunk with bit_pattern: $bit_pattern found at: $p" unless $p == -1;

      That could be done better, but it shows what I mean.


      Examine what is said, not who speaks.
      "Efficiency is intelligent laziness." -David Dunham
      "Think for yourself!" - Abigail
      Hooray!

      $/ = \$number; (note the reference) tells perl to read that number of bytes at a time instead of reading lines; so 100*1024*1024 = 100 megs.

      read(FH, $val, 128);

      Please keep in mind that read FILEHANDLE,SCALAR,LENGTH

      'Attempts to read LENGTH *bytes* of data ...' (emphasis mine; from perldoc -f read)

      You mentioned that you're working with 128-bit 'frames', so you'll want to do 16-byte reads in order to get one 'frame'.
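      In other words, something like this (a sketch; FH and the pattern being searched for are assumed from the posts above):

      my $want = "\xff" x 16;                      # the 128-bit pattern as 16 bytes
      until( eof(FH) ) {
          read( FH, my $frame, 16 ) == 16 or last; # one 128-bit frame per read
          print 'found at byte offset ', tell(FH) - 16 if $frame eq $want;
      }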

      --
      3dan
