in reply to character-by-character in a huge file
Please see Optimising processing for large data files for some techniques for reducing your runtimes by 96%, if your application doesn't lend itself to using the Bioperl libraries.
Re: character-by-character in a huge file
by mushnik (Acolyte) on Apr 13, 2004 at 05:59 UTC
A quick aside: as you noted, it's been suggested that I use Bioperl to tackle my problem. I use Bioperl nearly every day in many different ways... when its tools fit my need. In this case, Bioperl offers nothing relevant (FASTA format is trivial, and there's no facility in Bioperl to "read character by character, really quickly").

Sadly, the optimizations you've suggested don't work nearly as well for me as they do for you. The first part of the benchmark code (below) deals with this problem. For simplicity, I've removed the bit about the sliding window. It's something I need to deal with, but it's not really at the heart of my problem. In this test, I just check how long it takes to get access to every character.

The heart of the problem is that I need to read the value of each character in the file one at a time (the "why" is directly related to the sliding window: I can either recalculate the character counts of the window each time I slide over one space, or I can just consider the character being left behind by the slide and the new character being added). But accessing each individual character from a file is dreadfully slower in Perl than just slurping the file off disk. This is shown with the second part of the benchmark: ... (running Red Hat Linux 9.0, perl 5.8.0, testing with a 2MB file)
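As an aside, the window bookkeeping I have in mind is roughly the following (a minimal sketch with a made-up window size and sequence, not my actual code):

```perl
use strict;
use warnings;

# Minimal sketch of the incremental window-count idea. The window
# size and the sequence are placeholders, not real benchmark values.
my $seq = 'ACGGTTAACGTACG';   # stand-in for a slurped sequence
my $win = 5;                  # assumed window size
my %count;

# Count the characters in the first window once...
$count{ substr $seq, $_, 1 }++ for 0 .. $win - 1;

# ...then slide one position at a time, touching only two characters:
for my $i ( 1 .. length($seq) - $win ) {
    $count{ substr $seq, $i - 1,        1 }--;   # char leaving on the left
    $count{ substr $seq, $i + $win - 1, 1 }++;   # char entering on the right
    # %count now holds the character counts for the window at offset $i
}
```

Each slide touches only two hash entries instead of recounting the whole window, so the cost per step is constant regardless of window size - which is exactly why I need fast per-character access.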
As you can see from the first test, adding the "raw" option doesn't help, and sysread doesn't seem to help at all (if anything, it slows things down). In general, the best performance I can get is by slurping a bunch of content with the standard $line=<FH> syntax, then indexing into the string using substr.

But as you can see from the 2nd benchmark, the performance of this option is actually pretty terrible. I can read in the entire contents of the file (or even test it with a regex, without assigning individual characters to an accessible variable) in 1/30th the time it takes to look at each character in the file. That's much worse than what I've seen (but haven't tested here) in C.

I still hold out hope that there's a faster option - and based on your complete treatment of the topic in your original post, I hope you'll either a) see what I'm doing wrong, or b) come up with another speedup idea.

Thanks again, Travis
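P.S. To make the pattern concrete, here is a minimal sketch of the "slurp a block, then index with substr" approach described above (the file name and 64KB buffer size are placeholders, not my actual benchmark values):

```perl
use strict;
use warnings;

# Placeholders: the file name and buffer size are assumptions.
open my $fh, '<:raw', 'big.fasta' or die "open: $!";

my $buf;
while ( my $got = sysread $fh, $buf, 64 * 1024 ) {
    for my $i ( 0 .. $got - 1 ) {
        my $c = substr $buf, $i, 1;   # the per-character access that dominates the runtime
        # ... process $c ...
    }
}
close $fh;
```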
by BrowserUk (Patriarch) on Apr 13, 2004 at 12:14 UTC
"a) see what I'm doing wrong"

The first thing you are doing wrong is that you are comparing apples and oranges. Take your 2nd benchmark.
The first rule of benchmarking is that you need to ensure that you are comparing like with like. The second rule is that you need to make sure that what you are benchmarking is useful and relevant to your final program. In these tests, you are doing neither.

"That's much worse than what I've seen (but haven't tested here) in C."

If your point is that Perl is slower than C, you're right.
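To illustrate what comparing like with like means here, a skeleton along these lines would do; both subs must visit every character before the timings mean anything (the bodies are hypothetical stand-ins, not the code posted above):

```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

my $file = 'big.fasta';   # assumed test file

# Both candidates do the same work -- visit every character --
# otherwise the comparison is apples and oranges.
cmpthese( 10, {
    getc_char => sub {
        open my $fh, '<', $file or die $!;
        while ( defined( my $c = getc $fh ) ) { }   # visit every character
        close $fh;
    },
    sysread_substr => sub {
        open my $fh, '<:raw', $file or die $!;
        my $buf;
        while ( my $got = sysread $fh, $buf, 64 * 1024 ) {
            for my $i ( 0 .. $got - 1 ) {
                my $c = substr $buf, $i, 1;         # visit every character
            }
        }
        close $fh;
    },
} );
```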
"Efficiency is intelligent laziness." -David Dunham"Think for yourself!" - Abigail | [reply] [d/l] [select] |
by mushnik (Acolyte) on Apr 13, 2004 at 15:49 UTC
But I'm not sure how this is apples and oranges. There are two benchmark tests here.

In the first, after the bug fix, the raw/sysread buffer approach works best, but is only about 10% better than just slurping in the contents with <FH> (and the raw/sysread_onechar approach is actually worse than getc). In general, your final result shows improvement, but it's not as fantastic as I'd hoped. Perhaps this is a function of the OS in use (such dramatic differences between your results and mine suggest that you may be using Windows (I'm on Linux)... and that getc may be really terrible on Windows - is that right?). I'd be interested in seeing the results you get when you run the same benchmark (after fixing the $i bug you mention).

In the second benchmark, my point is a bit more interesting (to me) than simply saying that Perl is slower than C. I've reposted, comparing my two "fast" approaches to raw_sysread_buffer. The point of &slurp_length and &slurp_simpleregex is to show that the thing that makes &raw_sysread_buffer (and the others) so remarkably slow is not the actual act of reading from disk, but the act of accessing the values one at a time. For example, the regex test simply aims to show that I must have read the entire block from disk (I got the last character). In these tests, I'm not meaning to say that Perl is slower than C; I'm saying that Perl (as I'm using it) is unbearably slower than I expected. This amazing slowness in this one application is surprising to me, because I've generally found Perl to be pretty darned fast, especially in dealing with text (i.e. regexes).

A couple of small notes: I have no interest in writing this in C. Perl is my preferred language, and it was my intention to show the C-lovers I work with that Perl is a perfectly good tool for this sort of task. I'm having a (much) harder time proving that than I'd hoped I would. Perhaps I'm wrong :(

I also have no intention of flaming you with my response. It's clear to me that you've taken a good deal of time to think about my problem, and I'm most appreciative of that time. The intention of my response is simply to show that the benefits I see don't match your expectations, and to see if you can suggest another approach.
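For reference, subs along these lines would serve for &slurp_length and &slurp_simpleregex (hypothetical reconstructions for illustration, not the actual reposted code):

```perl
use strict;
use warnings;

my $file = 'big.fasta';   # assumed test file

# Both subs force the whole file through Perl without touching
# characters one at a time, isolating I/O cost from access cost.
sub slurp_length {
    open my $fh, '<', $file or die $!;
    local $/;                 # slurp mode
    my $data = <$fh>;
    return length $data;      # proves the whole file was read
}

sub slurp_simpleregex {
    open my $fh, '<', $file or die $!;
    local $/;
    my $data = <$fh>;
    my ($last) = $data =~ /(.)\z/s;   # match the final character
    return $last;                     # "I got the last character"
}
```

Neither sub assigns individual characters to a variable, so any gap between these and the per-character readers measures access cost, not I/O.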
by BrowserUk (Patriarch) on Apr 13, 2004 at 16:29 UTC
by mushnik (Acolyte) on Apr 13, 2004 at 17:14 UTC