in reply to Re^2: Parallel computing with perl?
in thread Parallel computing with perl?
comparing two huge datasets, operating on one or both or several datasets ... and do some operation on these data.
Which is it? One, two, or, if more, how many more?
Do those operations modify the original data?
If so, do other concurrent processes need to see those changes as they happen?
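That last question matters because, by default, separate Perl processes share nothing: a fork'd child works on its own copy of the parent's data, so its changes are invisible to the parent (and to sibling workers) unless you add explicit IPC or shared memory. A minimal sketch of the pitfall:

```perl
use strict;
use warnings;

my $count = 0;
defined( my $pid = fork ) or die "fork failed: $!";
if ( $pid == 0 ) {      # child: gets its own copy of $count
    $count = 42;        # this change never reaches the parent
    exit 0;
}
waitpid $pid, 0;
print "parent still sees \$count = $count\n";   # prints 0
```

If the workers do need to see each other's updates as they happen, that pushes you toward threads with threads::shared, a shared store such as a database or DBM file, or message passing, each of which carries costs of its own.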
Each file is about 40 million lines (data points)
And how long are those lines? I.e., what is the total size of the file(s)?
Do the algorithms involved require random access to the entire dataset(s)? Or sequential access? Or random or sequential access to just some small subset for each top-level iteration?
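To make that concrete: if sequential access suffices, say because both files are (or can be) sorted on the comparison key, then even two 40-million-line files can be compared in a single streaming pass with constant memory, and parallelism may buy little over the raw I/O cost. A minimal sketch, with hypothetical filenames, comparing whole lines:

```perl
use strict;
use warnings;

# dataset_a.txt / dataset_b.txt are hypothetical; both assumed sorted.
open my $fa, '<', 'dataset_a.txt' or die "dataset_a.txt: $!";
open my $fb, '<', 'dataset_b.txt' or die "dataset_b.txt: $!";

my $la = <$fa>;
my $lb = <$fb>;
while ( defined $la && defined $lb ) {
    my $cmp = $la cmp $lb;
    if    ( $cmp < 0 ) { print "only in A: $la"; $la = <$fa>; }
    elsif ( $cmp > 0 ) { print "only in B: $lb"; $lb = <$fb>; }
    else               { $la = <$fa>; $lb = <$fb>; }   # present in both
}
while ( defined $la ) { print "only in A: $la"; $la = <$fa>; }
while ( defined $lb ) { print "only in B: $lb"; $lb = <$fb>; }
```

If the algorithm instead needs random access to the whole of both datasets, the total-size question above decides whether an in-memory hash, a tied DBM file, or a database is the right tool.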
All of these questions have a direct influence upon what techniques are applicable to your application (regardless of the language used). And each answer will probably lead to further questions.
Your best hope of getting a good answer about your best way forward would be to describe the application in some detail, noting the volumes of data and the times taken by the existing serial processing. Posting an existing serial-processing script (or at least a fairly detailed pseudo-code description, if the application is overly large or proprietary) would be better still.
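To show the kind of answer such a description makes possible: if the serial loop turns out to process independent chunks, it can often be parallelised almost mechanically with the CPAN module Parallel::ForkManager. In this sketch the chunk files and process_chunk() are hypothetical stand-ins for however the real input gets split and processed:

```perl
use strict;
use warnings;
use Parallel::ForkManager;

my @chunks = map "chunk_$_.txt", 1 .. 8;    # hypothetical pre-split inputs

my $pm = Parallel::ForkManager->new( 4 );   # at most 4 concurrent workers
for my $chunk ( @chunks ) {
    $pm->start and next;        # parent: move on to the next chunk
    process_chunk( $chunk );    # child: do the real work
    $pm->finish;                # child exits here
}
$pm->wait_all_children;

sub process_chunk {
    my ( $file ) = @_;
    open my $fh, '<', $file or die "$file: $!";
    my $lines = 0;
    $lines++ while <$fh>;       # stand-in for the real per-line work
    print "$file: $lines lines\n";
}
```

Whether anything like that helps at all depends entirely on the answers to the questions above, which is why the detail matters.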
Re^4: Parallel computing with perl?
by necroshine (Acolyte) on Dec 16, 2008 at 11:52 UTC