in reply to Re^2: Speed up file write taking weeks
in thread Speed up file write taking weeks
If the final result is the generation of these 100 million "unique records" (whatever that means), what is your plan for producing them from this humongous flat file of 1.7 trillion records? Boiling 1.7 trillion rows down to 100 million is a reduction by a factor of roughly 17,000 -- that is a lot of data to wade through!
It is plausible to have an SQL DB with 65 + 72 million records. If those two tables combine to produce a smaller table (fewer rows than the sum of the inputs) of 100 million, I suspect there is a much more efficient algorithm to do that. However, I just don't know enough about what you are doing! My gosh, what will you do with this 1.7 trillion record file after you generate it? How will you arrive at the 100 million unique records?
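Just to illustrate the general idea (not your actual schema, which you haven't shown): a minimal Perl/DBI sketch of letting the database itself do the combine-and-deduplicate step, instead of writing every intermediate row to a flat file first. The table and column names (table_a, table_b, key_col, payload) and the use of SQLite are purely hypothetical placeholders.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to a hypothetical SQLite database holding the two input tables.
# Schema names below are made up for illustration only.
my $dbh = DBI->connect( "dbi:SQLite:dbname=records.db", "", "",
    { RaiseError => 1, AutoCommit => 1 } );

# Let the database produce the combined, de-duplicated result directly.
# No trillion-row intermediate file ever hits the disk.
$dbh->do(q{
    CREATE TABLE combined AS
    SELECT DISTINCT a.key_col,
                    a.payload AS payload_a,
                    b.payload AS payload_b
    FROM   table_a a
    JOIN   table_b b ON a.key_col = b.key_col
});

$dbh->disconnect;
```

Whether something like this applies depends entirely on how your 100 million unique records are actually defined -- which is exactly the question above.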