in reply to Re^3: Speed up file write taking weeks
in thread Speed up file write taking weeks

Let us move from here to the original problem

Two tables used as "input" to an SQL:

1. T_1_DAT ( t_1_id* , t_1_data_1 ) - 65 million rows - * = primary key

2. T_2_KEY ( t_2_id@ , t_2_data_2 ) - 72 million rows - @ = part of primary key and an index on this field alone

SQL

insert into T_3_XYZ ( t_3_id , t_3_data_1 , t_3_data_2 ) select distinct t_1_id , t_1_data_1 , t_2_data_2 from T_1_DAT , T_2_KEY where t_1_id = t_2_id

Works for databases that support multi-column DISTINCT. With the DISTINCT removed, 1.7 trillion rows are generated; with it, we are guessing around 100 million.

Now that we are doing it the flat-file way, we are "joining" the two files, generating the 1.7 trillion rows of output in instalments. After this we will sort it and "coalesce" it - in stages, I guess!

Replies are listed 'Best First'.
Re^5: Speed up file write taking weeks
by dave_the_m (Monsignor) on Jul 02, 2019 at 14:12 UTC
    Well, I can't comment on improving the SQL since that's not my expertise, but starting from flat files and using Perl, I suggest:

    1. Sort both input files by key (either when exporting them from the DB or as a post-export step).

    2. Have perl read the first line from each file and extract its key.

    3. If the keys don't match, read another line from the file whose key sorts earlier.

    4. Once you have identical keys from both files, keep reading lines from each file while they still have the same key. At this point you will have extracted two lists of data values; from your original example, for the key 'D' they will be (x, y) and (m, n, o).

    5. De-duplicate each list individually (assuming such duplicates are possible) using a pair of hashes.

    6. Calculate the cartesian product of those two lists, and use a third hash to de-duplicate.

    7. Output to a third file a series of lines containing the same key and the various data pairs.

    Then go back to the step of reading new lines from each file.
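    The merge-join above can be sketched in Perl along these lines. This is a minimal sketch, not dave_the_m's actual code: the tab-separated layout is an assumption, and the two sorted input files are simulated here with in-memory strings (using the thread's 'D' example) so the sketch runs as-is; in real use $fh1 and $fh2 would be opened on the sorted export files, and the print would go to the output file.

```perl
use strict;
use warnings;

# Sample sorted inputs (tab-separated "key value" lines). The 'D' rows are
# the example from this thread; 'A' and 'B' show unmatched keys being skipped.
my $t1_dat = "A\t1\nD\tx\nD\ty\n";
my $t2_key = "B\tp\nD\tm\nD\tn\nD\to\n";
open my $fh1, '<', \$t1_dat or die $!;
open my $fh2, '<', \$t2_key or die $!;

# Read one line and return its (key, value) pair; returns empty list at EOF.
sub read_pair {
    my ($fh) = @_;
    defined( my $line = <$fh> ) or return;
    chomp $line;
    return split /\t/, $line, 2;
}

my ( $k1, $v1 ) = read_pair($fh1);
my ( $k2, $v2 ) = read_pair($fh2);

while ( defined $k1 && defined $k2 ) {
    if    ( $k1 lt $k2 ) { ( $k1, $v1 ) = read_pair($fh1) }    # advance file 1
    elsif ( $k2 lt $k1 ) { ( $k2, $v2 ) = read_pair($fh2) }    # advance file 2
    else {
        my $key = $k1;
        my ( %list1, %list2 );    # the hashes de-duplicate each list
        while ( defined $k1 && $k1 eq $key ) {
            $list1{$v1} = 1;
            ( $k1, $v1 ) = read_pair($fh1);
        }
        while ( defined $k2 && $k2 eq $key ) {
            $list2{$v2} = 1;
            ( $k2, $v2 ) = read_pair($fh2);
        }
        # Cartesian product of the two de-duplicated lists; since each list
        # is already unique, every (key, data_1, data_2) triple is emitted
        # exactly once, so no third de-duplication hash is needed here.
        for my $d1 ( sort keys %list1 ) {
            for my $d2 ( sort keys %list2 ) {
                print "$key\t$d1\t$d2\n";    # would go to the output file
            }
        }
    }
}
```

    Because both files are read strictly forward and only one key group is held in memory at a time, this handles inputs far larger than RAM, provided the pre-sort is done externally (e.g. with the system sort).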

    Dave.

Re^5: Speed up file write taking weeks
by Corion (Patriarch) on Jul 02, 2019 at 13:40 UTC

    Does the performance improve if you change the query to use proper JOINs instead?

    insert into T_3_XYZ ( t_3_id , t_3_data_1 , t_3_data_2 ) select distinct t_1_id , t_1_data_1 , t_2_data_2 from T_1_DAT inner join T_2_KEY on t_1_id = t_2_id

    Ideally the query optimizer would recognize that the original WHERE clause is equivalent to the INNER JOIN condition, but maybe it doesn't?