in reply to Implementing a buffered read-and-insert algorithm

The wise monks have already replied to your question:
1. A bulk insert is the fastest way to load large amounts of data.
2. Remove the indices before inserting and recreate them afterwards.
3. Perl DBI does not support bulk insert itself, so you have to use the mechanism provided by the database (this differs from database to database; see the sketch below).
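
Since the bulk-load mechanism is database-specific, here is a minimal sketch of points 2 and 3 for MySQL. The table name, file path and connection details are invented, and ALTER TABLE ... DISABLE KEYS / LOAD DATA INFILE are MySQL-specific, so other databases need their own equivalent (COPY on PostgreSQL, bcp on Sybase, and so on).

    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection details and table name -- adjust to your setup.
    my $dbh = DBI->connect('dbi:mysql:database=test', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 1 });

    # Point 2: stop paying for index maintenance during the load
    # (MySQL-specific statement).
    $dbh->do('ALTER TABLE big_table DISABLE KEYS');

    # Point 3: use the database's own bulk loader -- for MySQL that is
    # LOAD DATA INFILE, driven here through a plain DBI do().
    $dbh->do(q{
        LOAD DATA LOCAL INFILE '/tmp/big_table.tsv'
        INTO TABLE big_table
        FIELDS TERMINATED BY '\t'
        LINES TERMINATED BY '\n'
    });

    # Rebuild the indices once all the rows are in.
    $dbh->do('ALTER TABLE big_table ENABLE KEYS');

    $dbh->disconnect;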

Also see whether DBD::SQLite can be of help; as per its documentation it is faster than MySQL.
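
If you try the DBD::SQLite route, there is no separate bulk-load API; the usual speed-up is simply to switch AutoCommit off and commit in batches, so the database does not sync to disk after every single INSERT. A minimal sketch, with an invented table and tab-separated input on STDIN:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical file-backed SQLite database and table.
    my $dbh = DBI->connect('dbi:SQLite:dbname=bulk.db', '', '',
                           { RaiseError => 1, AutoCommit => 0 });

    my $sth = $dbh->prepare('INSERT INTO big_table (id, payload) VALUES (?, ?)');

    # Buffered read-and-insert: commit every $batch rows instead of
    # after each statement.
    my $batch = 10_000;
    my $count = 0;
    while (my $line = <STDIN>) {
        chomp $line;
        my ($id, $payload) = split /\t/, $line, 2;
        $sth->execute($id, $payload);
        $dbh->commit if ++$count % $batch == 0;
    }
    $dbh->commit;

    $dbh->disconnect;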

Re^2: Implementing a buffered read-and-insert algorithm
by mpeppler (Vicar) on Dec 14, 2004 at 08:39 UTC
    DBI does not support Bulk Insert
    True, though DBD::Sybase (in its development version, and in 1.05 once it is released) provides experimental access to Sybase's BLK API, which can speed up inserts tremendously.

    Michael

      Yes, I remember reading on the DBI list that you were working on implementing this. Any idea whether the bulk insert option will be done for the other databases as well?
        The problem is that bulk load APIs are very database-specific. For DBD::Sybase I managed to arrange things so that it looks to the DBI user almost like a normal INSERT prepare()/execute() loop, but it might not be so easy for other drivers to make this conform to the standard DBI API.
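
        To make the shape of that concrete, here is a rough sketch of what the experimental path looks like from the caller's side -- not Michael's actual code: the connection string, table and columns are invented, and the syb_bcp_attribs prepare attribute (with its identity_flag/identity_column keys) is my reading of the DBD::Sybase development documentation, so check it against the driver version you actually have.

            use strict;
            use warnings;
            use DBI;

            # Hypothetical Sybase connection; AutoCommit is off because the
            # bulk rows are (as I understand it) only flushed on commit.
            my $dbh = DBI->connect('dbi:Sybase:server=MYSERVER;database=mydb',
                                   'user', 'password',
                                   { RaiseError => 1, AutoCommit => 0 });

            # Assumed driver-specific attribute that routes this INSERT
            # through the BLK API instead of a regular language command.
            my $sth = $dbh->prepare(
                'INSERT INTO big_table (id, payload) VALUES (?, ?)',
                { syb_bcp_attribs => { identity_flag => 0, identity_column => 0 } }
            );

            # From here on it really is a normal execute() loop.
            while (my $line = <STDIN>) {
                chomp $line;
                my ($id, $payload) = split /\t/, $line, 2;
                $sth->execute($id, $payload);
            }

            $dbh->commit;
            $dbh->disconnect;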

        And of course if you happen to need it for a particular database nothing's stopping you from coding the appropriate behavior into the driver :-)

        (yes, yes, I know - it can be tricky to understand how the DBI internals work, and how that particular driver implements things...)

        Michael