in reply to Parse Logfile and Bulk Insert

or do a bulk insert (it should support all the databases)

Whew. As mentioned previously, the servers tend to differ in the details of how their bulk loaders work. But if you're currently supporting just three "brands" of servers, the mechanics are manageable, and with a sensible design involving config files and perhaps brand-specific modules, it should be easy to expand later. If you're supporting the same basic table schema across the different brands, it should be even easier.
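A minimal sketch of that config-driven design, just to show the shape of it (the brand names, config keys, and the `loader_for` helper are all made up for illustration):

```python
# Per-brand settings you might keep in a config file; every name here
# is hypothetical, not something from the original post.
LOADER_CONFIG = {
    "oracle":   {"loader": "sqlldr",      "needs_control_file": True},
    "mysql":    {"loader": "mysqlimport", "needs_control_file": False},
    "postgres": {"loader": "psql",        "needs_control_file": False},
}

def loader_for(brand):
    """Look up the bulk-loader settings for a brand, failing loudly
    on an unsupported one. Adding a fourth brand is just another
    config entry (plus a brand-specific module if it needs one)."""
    try:
        return LOADER_CONFIG[brand]
    except KeyError:
        raise ValueError(f"unsupported brand: {brand}")
```

The point is that the parsing code never needs to know which brand it's feeding; only the thin dispatch layer does.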

One thing you should be able to count on is that all of them will be able to use the same basic CSV format for the actual rows of data, and you'll be fine with that so long as you're working with "basic" (lowest-common-denominator) data types (shouldn't be a problem, if the input comes from logs).
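For instance, sticking to plain strings and numbers, the rows can be emitted with a stock CSV writer (the log fields here are invented for the example):

```python
import csv

# Hypothetical log fields; every value is a plain string or number,
# the lowest-common-denominator types any bulk loader can ingest.
rows = [
    ("2024-01-01 00:00:00", "10.0.0.1", 200, "/index.html"),
    ("2024-01-01 00:00:05", "10.0.0.2", 404, "/missing"),
]

with open("access_log.csv", "w", newline="") as fh:
    csv.writer(fh).writerows(rows)
```

One CSV file, and the same file works as input for each brand's loader.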

The differences involve how you convey field names and other control parameters to the bulk loader, the path/name of the bulk-loader program, what sort of command-line usage it supports, how it handles error conditions, logging, etc. Read each of the bulk loader manuals and practice, practice, practice. I think the payoff will be worth the effort -- I've only had experience with Oracle in this regard, but its native bulk loader really is a lot faster than straight inserts with DBI, no matter how you tweak it.
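To make those differences concrete, here's roughly the shape of the command line for three common loaders -- the credentials, database names, and file names are placeholders, and the real option sets are in each product's manual:

```python
# Rough shapes of three bulk-loader invocations. All credentials,
# database names, and paths below are placeholders for illustration.
def load_command(brand, csv_file, table):
    if brand == "oracle":
        # sqlldr takes field names and formats from a control file.
        return ["sqlldr", "userid=scott/tiger", "control=load.ctl",
                f"data={csv_file}", "log=load.log"]
    if brand == "mysql":
        # mysqlimport derives the target table from the file name.
        return ["mysqlimport", "--local", "--fields-terminated-by=,",
                "mydb", csv_file]
    if brand == "postgres":
        # psql's \copy streams the file from the client side.
        return ["psql", "mydb",
                "-c", f"\\copy {table} from '{csv_file}' csv"]
    raise ValueError(f"unsupported brand: {brand}")
```

Same CSV in every case; only the wrapper around it changes, which is exactly the part worth isolating in config or per-brand modules.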