jimbus has asked for the wisdom of the Perl Monks concerning the following question:
Here's something I posted at forums.mysql.com. It's more of a MySQL thing, but I am scripting it in Perl, and you guys are infinitely more helpful than they are and a lot cooler, too. :)
I'm processing log files in Perl, using the timestamp as the primary key. The files are segregated by another field, so timestamps can potentially be spread over more than one file. Originally, I used the DBI interface to query on the timestamp: if it didn't exist, I inserted it; if it did, I summed the data and updated the record. That worked, but it was too slow.
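For reference, that check-then-insert-or-update round trip can usually be collapsed into a single statement with MySQL's INSERT ... ON DUPLICATE KEY UPDATE. A minimal sketch, assuming a hypothetical log_summary table with columns ts, src, and hits (the real schema isn't shown in the post):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# One-statement upsert: MySQL updates the existing row when the primary
# key (ts here) already exists, otherwise it inserts a new one.
# Table and column names are made up for illustration.
my $sql = <<'SQL';
INSERT INTO log_summary (ts, src, hits)
VALUES (?, ?, ?)
ON DUPLICATE KEY UPDATE hits = hits + VALUES(hits)
SQL

# Usage shape (needs a live $dbh; not executed here):
#   my $sth = $dbh->prepare($sql);
#   $sth->execute($ts, $src, $hits) for @records;
print $sql;
```

This halves the round trips versus SELECT-then-INSERT/UPDATE, though it is still row-at-a-time and so still slower than a bulk load.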
So I googled a bit on MySQL tuning and performance and found that the fastest way to insert is to write the digested data to a CSV file and bulk load it. What I need help with is how to duplicate my PK-violation logic with this method.
One thought I had was to write the file, run the bulk load, have it put the error-raising records in another file (I believe it will do that), and use my Perl logic to process that significantly smaller second file. But that seems like a hack, so I thought I would post and ask if anyone had a more elegant solution.
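One way to avoid most PK violations before MySQL ever sees the file is to merge duplicate timestamps in a Perl hash while digesting the logs, so the CSV is already unique on the key. A sketch, with a made-up record layout of "timestamp,count":

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical digested records in "timestamp,count" form. Because the
# source is split across files, the same timestamp can appear twice.
my @records = (
    "20050914220300,5",
    "20050914220300,3",
    "20050914220400,7",
);

# Sum duplicates in memory, keyed on the would-be primary key.
my %sum;
for my $rec (@records) {
    my ($ts, $count) = split /,/, $rec;
    $sum{$ts} += $count;
}

# Write the de-duplicated CSV, ready for LOAD DATA INFILE.
open my $out, '>', 'digest.csv' or die "open digest.csv: $!";
print {$out} "$_,$sum{$_}\n" for sort keys %sum;
close $out or die "close: $!";
```

Duplicates against rows already loaded by earlier runs can then be handled set-based rather than row-by-row: LOAD DATA the CSV into an empty staging table and run one INSERT ... SELECT with ON DUPLICATE KEY UPDATE into the real table.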
Thanks,
Jimbus
Replies are listed 'Best First'.

- Re: DBI vs Bulk Loading by LanceDeeply (Chaplain) on Sep 14, 2005 at 22:03 UTC
- Re: DBI vs Bulk Loading by InfiniteLoop (Hermit) on Sep 14, 2005 at 22:06 UTC
- Re: DBI vs Bulk Loading by pboin (Deacon) on Sep 15, 2005 at 12:30 UTC
- Re: DBI vs Bulk Loading by eric256 (Parson) on Sep 15, 2005 at 16:29 UTC
- Re: DBI vs Bulk Loading by jimbus (Friar) on Sep 15, 2005 at 14:51 UTC
- Re: DBI vs Bulk Loading by revdiablo (Prior) on Sep 15, 2005 at 17:34 UTC