in reply to OT: MySQL combine 2 tables data when Perl 'fails'

I can only think of the long-winded way. That would be:
1. Read through Table A - insert a Table C record with 'tok' and 'count' for each row.
2. Read through Table B - for each record in Table B:
   - find the Table C record using 'tok'
   - if a Table C record is found, add to the existing Table C 'count'
   - if a Table C record is NOT found, create a new Table C record with 'tok' and 'count'
When that finishes you should have a good Table C.
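
In DBI terms, a minimal sketch of that two-pass logic might look something like the following (the connection details and the table/column names a, b, c, tok and count are placeholders, not from the original question):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'DBI:mysql:database=test', 'user', 'pass',
        { RaiseError => 1 } );

    # Pass 1: copy Table A straight into Table C.
    $dbh->do('INSERT INTO c (tok, `count`) SELECT tok, `count` FROM a');

    # Pass 2: walk Table B, updating an existing Table C row or inserting a new one.
    my $sel = $dbh->prepare('SELECT `count` FROM c WHERE tok = ?');
    my $upd = $dbh->prepare('UPDATE c SET `count` = `count` + ? WHERE tok = ?');
    my $ins = $dbh->prepare('INSERT INTO c (tok, `count`) VALUES (?, ?)');

    my $rows = $dbh->prepare('SELECT tok, `count` FROM b');
    $rows->execute;
    while ( my ( $tok, $n ) = $rows->fetchrow_array ) {
        $sel->execute($tok);
        if ( defined $sel->fetchrow_array ) {
            $upd->execute( $n, $tok );    # tok already in Table C - bump its count
        }
        else {
            $ins->execute( $tok, $n );    # new tok - insert a fresh row
        }
    }
    $dbh->disconnect;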

HTH.

Re: Re: OT: MySQL combine 2 tables data when Perl 'fails'
by tachyon (Chancellor) on Feb 10, 2004 at 18:44 UTC

    That was basically the original logic. Sadly, to find a tok that needs updating (rather than inserting) you need a primary/unique index (or you have to iterate over the list of tokens every time). BUT - and this is the problem - each time you insert a new tok the index gets rebuilt (and it does need to be rebuilt). The bottom line is that runtime is hours using this approach as opposed to seconds using the in-memory hash count approach. We have a lot of memory and spend it freely to gain speed; it is just that in this case we can't.

    FYI, using this approach with a couple of million unique tokens to insert/update, out of a total of around 50 million, the runtime is 4.5 hours (roughly 16,000 seconds) on a quad Xeon 2G server with fast RAID 5 disks. DBI leaks like a sieve as well, and the process blows out to 250MB after 50 million update/insert cycles.

    Using an in-memory hash keyed on tokens and incrementing the counts in the values, dumping the final hash to file, using MySQL's native import to get it into a table, then finally adding the primary key takes 128 seconds. As a bonus, memory use is only 80 MB as you avoid the DBI leaks. Adding the primary key at the end takes about 10 seconds to generate the index. It is the update of this index on every iteration that kills the speed of the pure DB approach.
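
    For what it's worth, a minimal sketch of that hash-count-then-bulk-load approach looks roughly like this (connection details, file and table names are placeholders; LOAD DATA INFILE needs a path the MySQL server can read):

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'DBI:mysql:database=test', 'user', 'pass',
            { RaiseError => 1 } );

        # Sum the counts for every token from both source tables in one hash.
        my %count;
        for my $table (qw(a b)) {
            my $sth = $dbh->prepare("SELECT tok, `count` FROM $table");
            $sth->execute;
            while ( my ( $tok, $n ) = $sth->fetchrow_array ) {
                $count{$tok} += $n;
            }
        }

        # Dump the hash as tab-separated pairs for MySQL's bulk loader.
        open my $out, '>', '/tmp/tok_counts.tsv' or die $!;
        print $out "$_\t$count{$_}\n" for keys %count;
        close $out;

        # Bulk load into Table C, then build the index once at the end
        # instead of maintaining it on every insert.
        $dbh->do(q{LOAD DATA INFILE '/tmp/tok_counts.tsv' INTO TABLE c (tok, `count`)});
        $dbh->do('ALTER TABLE c ADD PRIMARY KEY (tok)');
        $dbh->disconnect;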

    cheers

    tachyon

      I had a not dissimilar problem starting back some years ago - 1996 to be precise. We had a series of huge tables in Access - but of course it was severely limited. So in the end we used a Perl script to read all the tables and merge them. As you have found, it is vastly faster.

      Every year or so the problem comes back to haunt us, and this time we tried doing the merge in MySQL, on a machine similar to yours, and the job takes about 7 hours. But using the in-memory hash technique - well, about 15 minutes. My task is to take about 1 million personal records that come to us from a variety of sources within a large corporation which has no integrated client management systems. Each record from each source has different data in it! So we use the script to try to build a complete single record for each person and send it back to the individual sources.
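
      As a very rough sketch (the file names, field layout and naive comma splitting here are invented for illustration, assuming simple CSV files with the person id in the first column), the merge boils down to something like:

          use strict;
          use warnings;

          # One merged record per person, keyed on the shared identifier.
          my %person;

          for my $file (qw(payroll.csv crm.csv benefits.csv)) {
              open my $fh, '<', $file or die "$file: $!";
              chomp( my $header = <$fh> );
              my ( undef, @cols ) = split /,/, $header;   # first column is the person id
              while (<$fh>) {
                  chomp;
                  my ( $id, @vals ) = split /,/;
                  for my $i ( 0 .. $#cols ) {
                      next unless defined $vals[$i] && length $vals[$i];
                      # keep the first non-empty value seen for each field
                      $person{$id}{ $cols[$i] } = $vals[$i]
                          unless defined $person{$id}{ $cols[$i] };
                  }
              }
              close $fh;
          }

          # %person now holds one consolidated record per id, ready to be
          # written back out to each source system.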

      We are using a dual-Xeon system (Intel SE7505VB2 motherboard) with 4GB of RAM. RH9.0 and MySQL 4.0.16 with Perl 5.8.1

      jdtoronto

      I haven't been following the DBI list recently, but have you raised the leak issue as a DBI or DBD::mysql bug? I'm sure Tim Bunce (the DBI author) and/or the current DBD::mysql author would be interested to know about that (if they don't already ;-)

        I have updated to the latest versions of DBI and DBD-mysql and will report it if they still leak.

        cheers

        tachyon