in reply to Re^6: Working with large amount of data
in thread Working with large amount of data
If you've done things right, then for large data sets your time is entirely dominated by the time it takes to stream through the data, so 1 file vs 100 files is irrelevant. But splitting directly into 100 files may be a horrible idea for the simple reason that disk drives typically can only stream data at high rates to a small, fixed number of locations at once, say 4 or 16. So you'd probably want to split the data in multiple passes if you went with this design. (That is not to say that this is the right design. Personally I head in the merge sort direction rather than using hashing.)
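The multi-pass split is easy to sketch. Here is a minimal illustration (Python for brevity; the `key_fn` argument, the file naming, and the fan-out of 10 are my assumptions, not anything from the thread): two 10-way passes produce 100 buckets while never holding more than 10 output files open, so the drive only ever streams to 10 locations at a time.

```python
import os

FANOUT = 10  # never more than this many output files open at once


def split_pass(in_paths, out_dir, key_fn, level):
    """One pass of a multi-pass split: route each line of each input
    file into one of FANOUT outputs, chosen by one base-FANOUT 'digit'
    of its bucket number. key_fn(line) must return the bucket number,
    e.g. a hash of the record's key mod 100."""
    os.makedirs(out_dir, exist_ok=True)
    out_paths = []
    for in_path in in_paths:
        base = os.path.basename(in_path)
        outs = [open(os.path.join(out_dir, f"{base}.{i}"), "w")
                for i in range(FANOUT)]
        with open(in_path) as f:
            for line in f:
                digit = (key_fn(line) // FANOUT ** level) % FANOUT
                outs[digit].write(line)
        for o in outs:
            o.close()
        out_paths.extend(o.name for o in outs)
    return out_paths


def two_pass_split(path, work_dir, key_fn):
    """100-way split done as two 10-way passes: pass 1 splits on the
    low digit of the bucket number, pass 2 on the high digit."""
    first = split_pass([path], os.path.join(work_dir, "pass1"), key_fn, level=0)
    return split_pass(first, os.path.join(work_dir, "pass2"), key_fn, level=1)
```

Each pass is a pure streaming read, so the total cost is two sequential scans of the data rather than one scan scattered across 100 write targets.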
As for going to a database, my experience is that when your data sets are near the capacity of the machine, databases often run into resource constraints that they cannot work their way out of. It isn't that the query runs painfully slowly; it is that it grinds away for several hours and then crashes. That is one of the prime reasons that I have needed to do end runs around the database when working with large data sets.
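The merge-sort direction mentioned above is the classic external sort: sort memory-sized chunks, spill each to disk as a sorted run, then stream a k-way merge of the runs. A minimal sketch (Python; the chunk size and the assumption of newline-terminated records are mine), whose memory use is bounded by the chunk size regardless of the input size:

```python
import heapq
import itertools
import os
import tempfile


def external_sort(in_path, out_path, chunk=100_000):
    """Sort a file far larger than memory. Assumes newline-terminated
    records sorted lexically; chunk is the number of lines held in
    memory at once and should be tuned to available RAM."""
    runs = []
    with open(in_path) as f:
        while True:
            block = list(itertools.islice(f, chunk))
            if not block:
                break
            block.sort()
            run = tempfile.NamedTemporaryFile("w+", delete=False)
            run.writelines(block)
            run.seek(0)
            runs.append(run)
    with open(out_path, "w") as out:
        # heapq.merge streams the sorted runs lazily; only one line
        # per run is in memory at any moment.
        out.writelines(heapq.merge(*runs))
    for r in runs:
        r.close()
        os.unlink(r.name)
```

Every phase here is a sequential read or write, which is why this approach tends to degrade gracefully as data grows instead of hitting a resource wall.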