Thank you all for at least looking
This is a population estimate table with a maximum population (x axis) of 17000, where each individual is assigned a random 6-digit number (among other things) over 8400 years.
I've already done some rearranging of subs (like generating the randoms one row at a time).
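To illustrate the row-at-a-time idea, here's a minimal sketch (names and dimensions are my assumptions, not Dandello's actual code): build and consume one row of 6-digit randoms per year, so only one row of the 17000-wide table is ever in memory.

```perl
use strict;
use warnings;

# Hypothetical helper: one row of 6-digit random numbers (100000..999999).
sub make_row {
    my ($ncols) = @_;
    return [ map { 100000 + int(rand(900000)) } 1 .. $ncols ];
}

# Real dimensions would be 17000 wide x 8400 years; small demo values here.
my ($cols, $years) = (10, 3);
for my $year (1 .. $years) {
    my $row = make_row($cols);
    # ... write or process $row here; it goes out of scope each iteration ...
}
```

The point is that nothing outside the loop body ever holds more than one row, so memory stays flat no matter how many years you run.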
I was thinking some DB management might be helpful simply because I know datasets like this can get huge.
What I will probably do is break the output into interlocking chunks so that each chunk comes in at 20-40 MB instead of one output file at 400 MB.
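A simple way to get that chunking is to rotate the output filehandle every N rows; this is only a sketch under my own assumptions (chunk size, file naming, and plain rotation rather than whatever "interlocking" scheme ends up being needed):

```perl
use strict;
use warnings;

my $rows_per_chunk = 500;   # assumed chunk size; tune so files land at 20-40 MB
my $chunk_idx      = 0;
my $fh;

# Hypothetical helper: open the next numbered chunk file when needed.
sub maybe_rotate {
    my ($row_num) = @_;
    if ( !defined $fh or $row_num % $rows_per_chunk == 0 ) {
        close $fh if defined $fh;
        $chunk_idx++;
        my $name = sprintf "popdata_%03d.txt", $chunk_idx;   # assumed naming
        open $fh, '>', $name or die "can't open $name: $!";
    }
    return $fh;
}

# 8400 rows total, as in the actual data.
for my $row ( 0 .. 8399 ) {
    my $out = maybe_rotate($row);
    print {$out} "row $row data here\n";    # placeholder for the real row
}
close $fh;
```

With 8400 rows at 500 rows per chunk this produces 17 files, and each one stays small enough to open or ship around individually.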
When this project started, I was told it would be 2000 wide and 2000 tall - no problem. Then today I got the actual data - 17000 wide and 8400 tall.
Luckily this is NOT a web app - I spent a week learning Perl/Tk so it could run from a C prompt.
In reply to Re: Handling HUGE amounts of data by Dandello
in thread Handling HUGE amounts of data by Dandello