Whether it is fast enough depends, I think, as much on the disks in your system as on the software you use to write to them.
From what you mentioned I suppose the total size is something like 300 GB? It's probably useful, maybe even necessary (for Postgres, or any other RDBMS), to have some criterion (date, perhaps) by which to partition.
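For instance, a minimal sketch of date-based partitioning via Perl/DBI, assuming a PostgreSQL version with declarative partitioning (10 or later; older versions did the same thing with table inheritance). The database name, credentials, and table/column names are made up for illustration:

```perl
use strict;
use warnings;
use DBI;

# Connection details are placeholders -- adjust to your setup.
my $dbh = DBI->connect('dbi:Pg:dbname=hugedata', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# Range-partition by date so each partition stays a manageable size.
# (Declarative partitioning needs PostgreSQL 10+; on older versions
# the same idea is usually done with inheritance and triggers.)
$dbh->do(q{
    CREATE TABLE items (
        id      bigserial,
        created date NOT NULL,
        payload text,
        PRIMARY KEY (id, created)
    ) PARTITION BY RANGE (created)
});

# One partition per month, for example.
$dbh->do(q{
    CREATE TABLE items_2011_10 PARTITION OF items
        FOR VALUES FROM ('2011-10-01') TO ('2011-11-01')
});
```

Queries that filter on the partition key (created) then only have to touch the relevant partitions.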
(FWIW, a 40 GB table that we use intensively, accessed by unique id, gives access times of under 100 ms. The system has 32 GB of RAM and an 8-disk RAID 10 array.)
Btw, PostgreSQL *does* have a limit on text column values: 1 GB, where you need 2 GB. But I suppose that could be worked around by splitting the value across rows, or something like that.
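A minimal sketch of that splitting idea, again in Perl/DBI; the chunk size, table layout, and connection details are assumptions for illustration, not anything from the thread:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical schema: one row per chunk, reassembled in order.
my $dbh = DBI->connect('dbi:Pg:dbname=hugedata', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do(q{
    CREATE TABLE IF NOT EXISTS big_values (
        doc_id  bigint  NOT NULL,
        seq     integer NOT NULL,
        part    text    NOT NULL,
        PRIMARY KEY (doc_id, seq)
    )
});

my $CHUNK = 512 * 1024 * 1024;   # 512 MB, comfortably under the 1 GB limit

# Write a value of any length as a sequence of numbered chunks.
sub store_value {
    my ($doc_id, $value) = @_;
    my $sth = $dbh->prepare(
        'INSERT INTO big_values (doc_id, seq, part) VALUES (?, ?, ?)');
    my $seq = 0;
    for (my $off = 0; $off < length $value; $off += $CHUNK) {
        $sth->execute($doc_id, $seq++, substr($value, $off, $CHUNK));
    }
}

# Read the chunks back in order and join them.
sub fetch_value {
    my ($doc_id) = @_;
    my $parts = $dbh->selectcol_arrayref(
        'SELECT part FROM big_values WHERE doc_id = ? ORDER BY seq',
        undef, $doc_id);
    return join '', @$parts;
}
```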