That was basically the original logic. Sadly, to find a tok that needs updating (rather than inserting) you need a primary/unique index, or you have to iterate the whole list of tokens every time. BUT - and this is the problem - each time you insert a new tok the index gets redone (it has to be, or it would be useless for the lookups). The bottom line is that runtime is hours using this approach, as opposed to seconds using the in-memory hash count approach. We have a lot of memory and spend it freely to gain speed; it is just that in this case we can't.
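For reference, the slow loop looked something like this (a minimal sketch only: the DSN, the toks table with its tok/cnt columns, and get_next_token() are placeholder names, not the original code):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( 'DBI:mysql:database=tokdb', 'user', 'pass',
        { RaiseError => 1 } );

    my $sel = $dbh->prepare('SELECT cnt FROM toks WHERE tok = ?');
    my $upd = $dbh->prepare('UPDATE toks SET cnt = cnt + 1 WHERE tok = ?');
    my $ins = $dbh->prepare('INSERT INTO toks (tok, cnt) VALUES (?, 1)');

    while ( defined( my $tok = get_next_token() ) ) {
        $sel->execute($tok);
        if ( $sel->fetchrow_arrayref ) {
            $upd->execute($tok);    # seen before - just bump the count
        }
        else {
            $ins->execute($tok);    # new row - and the index gets redone
        }
        $sel->finish;
    }

Every INSERT of a new token forces MySQL to maintain that unique index, which is what makes this crawl.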
FYI, using this approach with a couple of million unique tokens to insert/update, out of a total of around 50 million, has a runtime of 4.5 hours (16,000 seconds) on a quad Xeon 2G server with fast RAID 5 disks. DBI leaks like a sieve as well, and the process blows out to 250MB after 50 million update/insert cycles.
Using an in-memory hash keyed on tokens and incrementing the counts in the values, dumping the final hash to file, using mysql's native import to get it into a table, and finally adding the primary key takes 128 seconds. As a bonus, memory use is only 80MB as you avoid the DBI leaks. Adding the primary key at the end takes about 10 seconds to generate the index; it is the update of this index on every iteration that kills the speed of the pure DB approach.
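The fast version is roughly this (same placeholder names as above; the tab-delimited dump matches LOAD DATA INFILE's default field/line terminators):

    use strict;
    use warnings;
    use DBI;

    my %count;
    while ( defined( my $tok = get_next_token() ) ) {
        $count{$tok}++;    # all the counting happens in memory
    }

    # dump the hash to a tab-delimited file for mysql's bulk loader
    open my $fh, '>', '/tmp/toks.txt' or die $!;
    print $fh "$_\t$count{$_}\n" for keys %count;
    close $fh or die $!;

    my $dbh = DBI->connect( 'DBI:mysql:database=tokdb', 'user', 'pass',
        { RaiseError => 1 } );
    # use LOAD DATA LOCAL INFILE instead if the file is client-side
    $dbh->do(q{LOAD DATA INFILE '/tmp/toks.txt' INTO TABLE toks (tok, cnt)});
    # build the index once, after all the data is in
    $dbh->do(q{ALTER TABLE toks ADD PRIMARY KEY (tok)});

The design point is simply that the index is built exactly once, at the end, instead of being maintained across millions of inserts.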
cheers
tachyon