in reply to A database table size issue

Echoing (and upvoting) Roboticus' suggestions: I have found that there are tremendous improvements to be had when the TCP/IP communication link (no matter how fast it is) is eliminated from the processing picture. If you’ve got a lot of data to process, first get it, in table form (or, if necessary, flat-file form), onto that server, or onto another server connected to it by, say, an optical-fiber link, e.g. a SAN or what-have-you. Then do the processing as “locally” as you possibly can, provided of course that you do not overload a server that has been dedicated to, and configured for, a particular task with a task for which it is not intended.
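For instance, here is a minimal sketch of that idea, assuming MySQL, DBI, and a script run on the database server itself (all table, column, and file names are hypothetical): bulk-load the update file into a staging table in one pass, then apply it with a single set-based statement, so that no per-row traffic ever crosses the wire.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical names throughout; assumes MySQL and a script run *on*
    # the database server, so no per-row TCP/IP round-trips are involved.
    my $dbh = DBI->connect( 'dbi:mysql:database=mydb', 'user', 'pass',
        { RaiseError => 1, AutoCommit => 0 } );

    # 1) Bulk-load the update file into a staging table in one
    #    server-side operation, instead of INSERTing row by row.
    $dbh->do('CREATE TEMPORARY TABLE staging LIKE big_table');
    $dbh->do(q{LOAD DATA INFILE '/var/tmp/updates.dat' INTO TABLE staging});

    # 2) Apply the changes as one set-based statement; the engine does
    #    the join locally rather than shipping rows back to a client.
    $dbh->do(q{
        UPDATE big_table AS b
        JOIN   staging   AS s ON s.id = b.id
        SET    b.value = s.value
    });

    $dbh->commit;
    $dbh->disconnect;

The same pattern carries over to other engines’ bulk loaders (e.g. COPY in PostgreSQL); the point is simply that one bulk load plus one set-based statement replaces millions of round-trips.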

Then, construct your processing around considerations such as these:

Note that in the following, as originally written, I assumed the update file was 30GB. As far as database tables go, 30GB is merely “moderate size.” The basic principles pontificated :-) here still hold.

Re^2: A database table size issue
by sophate (Beadle) on Apr 29, 2012 at 00:14 UTC

    Thanks for your detailed and in-depth reply. I really appreciate the help from you and the other monks.