tachyon has asked for the wisdom of the Perl Monks concerning the following question:

I have what seems like a pretty simple problem. I want to combine table A with table B to get table C

table A          table B          table C (desired)
tok  count       tok  count       tok  count
a    1           b    1           a    1
b    1           c    1           b    2
                                  c    1
Constraints.
  1. The end goal is speed
  2. I have to use MySQL
  3. I have 2GB of RAM but the tables are so big (multiple millions of rows) that reading into a Perl hash and incrementing the count there is not an option. We already do this with 'small' (< 2GB) sets of data, as it drops the runtime from hours to minutes
  4. tok is a primary key

What I want to do is encapsulated in this pseudocode

INSERT INTO all_tok (tok, tok1.count + tok2.count)
    SELECT tok, count FROM tok1
    UNION
    SELECT tok, count FROM tok2

I need a union, not a join, as the final table needs to contain all the toks (at least that is what I think). If a tok exists in one table but not the other, I just want the count from the table it exists in. If it exists in both tables, I need the sum of the counts. I have beaten my head against the proverbial wall for some hours. It didn't help; the only good thing was that it felt good when I stopped ;-).

Any suggestions about approaches to this seemingly simple task gratefully received. Temp tables are fine.
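[Editor's note: a hedged sketch for later readers. MySQL 4.1 and above (newer than the 4.0.x discussed in this thread) support INSERT ... ON DUPLICATE KEY UPDATE, which expresses this merge directly, assuming tok is the primary key of all_tok as stated in the constraints:

    -- Sketch only: requires MySQL 4.1+, which postdates the 4.0.x used here.
    -- Table and column names (all_tok, tok1, tok2, tok, count) follow the question.

    -- Seed the result with everything from the first table.
    INSERT INTO all_tok (tok, count)
        SELECT tok, count FROM tok1;

    -- Merge in the second table: new toks are inserted; for existing toks
    -- the incoming count is added to the stored one.
    INSERT INTO all_tok (tok, count)
        SELECT tok, count FROM tok2
    ON DUPLICATE KEY UPDATE count = count + VALUES(count);
]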

cheers

tachyon

Replies are listed 'Best First'.
Re: OT: MySQL combine 2 tables data when Perl 'fails'
by ChrisR (Hermit) on Feb 10, 2004 at 18:07 UTC
    Can you use a merge table?
    CREATE TABLE merge_table (table definition) TYPE=MERGE UNION=(table1,table2);

    SELECT tok, COUNT(tok) FROM merge_table GROUP BY tok;

      Update

      With the addition of the code I can see exactly what you mean. Similar to my inelegant hack below. Thanks++

      I may be being obtuse, but I don't quite see how. Essentially we have to 'join' the two tables so we can sum the counts (i.e. b occurs with a count of 1 in each of the original tables, so it needs a count of 2 in the final table, whereas a and c retain their counts of 1).

      With a merge we will just get 2 rows of b.

      This works (it is a kind of a merge) but is not very efficient:

      mysql> select * from tok1;
      +-------+-------+
      | token | count |
      +-------+-------+
      | a     |     1 |
      | b     |     2 |
      +-------+-------+
      2 rows in set (0.00 sec)

      mysql> select * from tok2;
      +-------+-------+
      | token | count |
      +-------+-------+
      | b     |     1 |
      | c     |     4 |
      +-------+-------+
      2 rows in set (0.00 sec)

      mysql> insert into all_tok select token, count, 0 from tok1
          -> union select token, 0, count from tok2;
      Query OK, 4 rows affected (0.01 sec)
      Records: 4  Duplicates: 0  Warnings: 0

      mysql> select * from all_tok;
      +-------+--------+--------+
      | token | count1 | count2 |
      +-------+--------+--------+
      | a     |      1 |      0 |
      | b     |      2 |      0 |
      | b     |      0 |      1 |
      | c     |      0 |      4 |
      +-------+--------+--------+
      4 rows in set (0.00 sec)

      mysql> select token, sum(count1+count2) from all_tok group by token;
      +-------+--------------------+
      | token | sum(count1+count2) |
      +-------+--------------------+
      | a     |                  1 |
      | b     |                  3 |
      | c     |                  4 |
      +-------+--------------------+
      3 rows in set (0.00 sec)

      We need the two count columns so that we get b,N,0 and b,0,N, and the union does not obliterate one of the b's.
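      [Editor's note: the obliteration here is plain UNION's duplicate-row removal. With the question's original tables, where b has count 1 in both, a bare UNION collapses the two identical (b,1) rows; the zero-padding keeps them distinct. A small illustration, assuming tables tok1 and tok2 shaped as in the question:

          -- Plain UNION removes duplicate rows: if (b,1) occurs in both
          -- tables it survives only once, and one b is lost.
          SELECT tok, count FROM tok1
          UNION
          SELECT tok, count FROM tok2;

          -- Padding with zeros makes the rows distinct, (b,1,0) vs (b,0,1),
          -- so both survive and the counts can be summed afterwards.
          SELECT tok, count, 0 FROM tok1
          UNION
          SELECT tok, 0, count FROM tok2;
      ]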

      cheers

      tachyon

        create table test (tok char(1) not null, count int(1) not null, key(tok));

        create table test2 (tok char(1) not null, count int(1) not null, key(tok));

        create table test_merge (tok char(1) not null, count int(1) not null, key(tok))
            type=merge union=(test,test2);

        SELECT tok, sum(count) FROM test_merge WHERE 1 GROUP BY tok;
Re: OT: MySQL combine 2 tables data when Perl 'fails'
by hmerrill (Friar) on Feb 10, 2004 at 18:37 UTC
    I can only think of the long-winded way. That would be
    1. Read through Table A - insert a Table C record with 'tok' and 'count'.
    2. Read through Table B - for each record in Table B:
       - find the Table C record using 'tok'
       - if a Table C record is found, add to the existing Table C 'count'
       - if no Table C record is found, create a new Table C record with 'tok' and 'count'
    When that finishes you should have a good Table C.

    HTH.

      That was basically the original logic. Sadly, to find a tok that needs updating (rather than inserting) you need a primary/unique key or index (or you have to iterate the list of tokens every time). BUT - and this is the problem - each time you insert a new tok the index gets redone (and it needs to be redone, too). Bottom line: runtime is hours using this approach, as opposed to seconds using the in-memory hash count approach. We have a lot of memory and spend it freely to gain speed; it is just that in this case we can't.

      FYI, using this approach with a couple of million unique tokens to insert/update, out of a total of around 50 million, has a runtime of 4.5 hours (around 16,000 seconds) on a quad Xeon 2GB server with fast RAID 5 disks. DBI leaks like a sieve as well, and the process blows out to 250MB after 50 million update/insert cycles.

      Using an in-memory hash keyed on token, incrementing the counts in the values, dumping the final hash to file, using native MySQL import to get it into a table, then finally adding the primary key takes 128 seconds. As a bonus, memory use is only 80MB, as you avoid the DBI leaks. Adding the primary key at the end takes about 10 seconds to generate the index. It is the update of this index on every iteration that kills the speed of the pure DB approach.
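      [Editor's note: the database tail end of that pipeline might look like the following sketch. The dump file path and the table definition are hypothetical; LOAD DATA INFILE is the native MySQL import referred to above:

          -- Sketch of the import step described above; file path and
          -- column types are hypothetical.
          CREATE TABLE all_tok (
              tok   VARCHAR(64) NOT NULL,
              count INT         NOT NULL
          );

          -- Native bulk import of the tab-separated hash dump.
          LOAD DATA INFILE '/tmp/tok_counts.tsv'
              INTO TABLE all_tok
              FIELDS TERMINATED BY '\t';

          -- Build the index once, after all rows are in.
          ALTER TABLE all_tok ADD PRIMARY KEY (tok);
      ]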

      cheers

      tachyon

        I had a not dissimilar problem starting back some years ago - 1996 to be precise. We had a series of huge tables in Access - but of course it was severely limited. So in the end we used a Perl script to read all the tables and merge them. As you have found, it is vastly faster.

        Every year or so the problem comes back to haunt us, and this time we tried doing the merge in MySQL, on a machine similar to yours; the job takes about 7 hours. But using the in-memory hash technique - well, about 15 minutes. My task is to take about 1 million personal records that come to us from a variety of sources within a large corporation that has no integrated client management systems. Each record from each source has different data in it! So we use the script to try to build a complete single record for each person and send it back to the individual sources.

        We are using a dual-Xeon system (Intel SE7505VB2 motherboard) with 4GB of RAM. RH9.0 and MySQL 4.0.16 with Perl 5.8.1

        jdtoronto

        I haven't been following the DBI list recently, but have you raised the leak issue as a DBI or DBD::mysql bug? I'm sure Tim Bunce(DBI author) and/or the current DBD::mysql author would be interested to know about that (if they don't already ;-)
Re: OT: MySQL combine 2 tables data when Perl 'fails'
by flounder99 (Friar) on Feb 10, 2004 at 20:13 UTC
    I think you want UNION ALL. This code works on ORACLE, I'm not sure about MySQL.
    INSERT INTO c (
        SELECT tok, sum(count)
        FROM (
            SELECT tok, count FROM a
            UNION ALL
            SELECT tok, count FROM b
        )
        GROUP BY tok
    )

    --

    flounder

      Sadly MySQL is not Oracle and has only basic support for sub-selects. It is pretty annoying at times.

      cheers

      tachyon

        Just use a temporary table
        create table d
            SELECT tok, count FROM a
            UNION ALL
            SELECT tok, count FROM b;

        insert into c
            select tok, sum(count) as count
            from d
            group by tok;

        drop table d;

        --

        flounder