in reply to Re^2: Using kernel-space threads with Perl
in thread Using kernel-space threads with Perl
Create the threads first and then have each thread load just the data it needs (and don't share it, of course). Then there won't be extra copies of that stuff created.
- tye
Re^4: Using kernel-space threads with Perl (order)
by BrowserUk (Patriarch) on Mar 22, 2011 at 01:38 UTC
Would you care to expand that a little? Say, a little pseudo-code showing how you would manage the threads reading from the same file concurrently?

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by tye (Sage) on Mar 22, 2011 at 02:18 UTC
I didn't see any mentions of files, so I wasn't going to jump to the conclusion that "huge dataset" means some single huge file, or even any files at all.

"The job is embarrassingly parallelizable, so I was going to simply carve it up into pieces. The problem was getting each of the pieces into the threads. If I split up the data beforehand, then all of it still gets copied into the threads."

So the OP already has an idea of how to "carve up" the data, and it would seem that reading in the data isn't unacceptably slow as-is; the problem is running out of memory when creating iThreads after that. So I don't see why you jump to the conclusion of wanting to read the data in parallel either.

If you have your heart set on writing some pseudo-code for loading the data, then you'll need to await the "more information" that you already asked for. In the meantime, the answer I provided may well be enough for the OP to adjust the way he already knows how to load the data so that much less memory is required.

To amplify what JavaFan mentioned, there is a module, forks.pm, that will allow "copy on write" sharing of the loaded data. Exactly how the data is loaded and used might mean that this is an insignificant advantage in the long run, but it is also trivial to try (if you aren't running on MS Windows) and might make a huge difference.

Having the parent read in the data and hand off each piece to the appropriate thread(s) (I'm guessing via Thread::Queue might be a good way) is the most general method that springs to my mind. I'd probably do something similar except using processes and simple pipes, as I've often done.

- tye
by BrowserUk (Patriarch) on Mar 22, 2011 at 03:45 UTC
"Having the parent read in the data and hand off each piece to the appropriate thread(s) (I'm guessing via Thread::Queue might be a good way) is the most general method that springs to my mind."

Something like this maybe?:
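(A minimal sketch of the shape I mean: the parent reads the file named on the command line and pushes each record onto a shared queue, and a handful of worker threads pull records off it. process_record() is just a stand-in for the real per-record work.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use Thread::Queue;

my $THREADS = 4;
my $Q = Thread::Queue->new;

# Stand-in for the real per-record processing.
sub process_record {
    my ( $line ) = @_;
    my @fields = split /\t/, $line;
    # ... do something with @fields ...
}

# Workers pull records from the queue until they see undef.
my @workers = map {
    threads->create( sub {
        while ( defined( my $line = $Q->dequeue ) ) {
            process_record( $line );
        }
    } );
} 1 .. $THREADS;

# Parent reads the file and hands each record to the queue.
open my $fh, '<', $ARGV[0] or die "open: $!";
$Q->enqueue( $_ ) while <$fh>;
close $fh;

# One undef per worker tells them to finish.
$Q->enqueue( (undef) x $THREADS );
$_->join for @workers;
```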
Problem: that will take ~4 hours to process a 3.5 GB file. And that's with the output redirected to nul, so there is no competition for the disk head.

"I'd probably do something similar except using processes and simple pipes, as I've often done."

So something like this, but using processes instead of threads perhaps?:
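(Again only a sketch: fork a few kids, each reading records from its own pipe, with the parent dealing lines out round-robin. process_record() is the same stand-in as above.)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;

my $KIDS = 4;

# Stand-in for the real per-record processing.
sub process_record {
    my ( $line ) = @_;
    my @fields = split /\t/, $line;
    # ... do something with @fields ...
}

# One pipe per child; the parent writes, the child reads.
my @to_kid;
for my $i ( 0 .. $KIDS - 1 ) {
    pipe( my $rd, my $wr ) or die "pipe: $!";
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ( $pid == 0 ) {                    # child
        close $wr;
        close $_ for @to_kid;             # drop inherited write ends of earlier pipes
        process_record( $_ ) while <$rd>;
        exit 0;
    }
    close $rd;                            # parent keeps only the write end
    $wr->autoflush( 1 );
    push @to_kid, $wr;
}

# Deal the records out round-robin.
open my $fh, '<', $ARGV[0] or die "open: $!";
my $n = 0;
while ( my $line = <$fh> ) {
    print { $to_kid[ $n++ % $KIDS ] } $line;
}
close $_ for $fh, @to_kid;                # EOF tells the kids to finish
wait for 1 .. $KIDS;                      # reap the children
```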
This fares better and only takes ~15 minutes to process the 3.5 GB. But all that effort is for naught, as this:
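(i.e. nothing cleverer than the obvious single pass, sketched here with the same stand-in process_record():)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-in for the real per-record processing.
sub process_record {
    my ( $line ) = @_;
    my @fields = split /\t/, $line;
    # ... do something with @fields ...
}

open my $fh, '<', $ARGV[0] or die "open: $!";
process_record( $_ ) while <$fh>;
close $fh;
```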
processes the same 3.5 GB in exactly the same way, but in less than 2 minutes.

Now the "processing" in all these examples is pretty light, just a single pass over each record, but it serves to highlight the scale of the overhead involved in distributing the records, and the scale of processing that would be needed to make either of these distribution mechanisms viable.

The pipes mechanism is more efficient than the shared queues, and should be pretty much the same between processes as it is between threads, so I doubt there is much to be gained by going that route. Maybe you have some mechanism in mind that will radically alter the overheads equation, but there is an awful lot of in-the-mind's-eye expertise around here, and having spent a long time trying to find a good way to do this very thing, I'd really like to be educated by someone who's actually made it work.

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
by tye (Sage) on Mar 22, 2011 at 18:26 UTC
by BrowserUk (Patriarch) on Mar 23, 2011 at 04:06 UTC
by aberman (Initiate) on Mar 22, 2011 at 20:38 UTC
"I didn't see any mentions of files so I wasn't going to jump to the conclusion that "huge dataset" means some single huge file or even any files at all." Sorry, the data is in a single tab-delimited text file. I'm doing something similar to a correlation coefficient on terms assigned to genes in a large spread to generate similarity. For this, I'm using a kappa statistic, which involves pairwise comparisons between all genes and all annotations, it is a very large matrix. Thanks for the suggestions though. I'll try implementing them and get back to the thread | [reply] |