in reply to Implementing a buffered read-and-insert algorithm

I went through the same learning curve recently.

Lesson learned the hard way: do not do multiple single-row inserts unless you have all day to wait...

Use "LOAD DATA INFILE". This is an SQL statement specifically for importing CSV data into tables, and it's very, very fast. Example? My line-wise insert program, which strangely resembled yours, was working at ~300 file lines per hour with a complex line-analysis intermediate step, on a ~1.5M-line file (650 MB). Once I had done the intermediate line-wise analysis with a separate program, so the data was pre-formatted and ready for import, LOAD DATA INFILE took about half an hour to import it all.

In your case this is going to be childishly simple, since it appears you have only one table to insert to:

$sql = "LOAD DATA INFILE '$filename.csv' INTO TABLE T_\$;";
(Note the double quotes, so $filename actually interpolates, and the single quotes around the file name, which SQL requires.)
There are options available for field terminators and line terminators. Check out

http://dev.mysql.com/doc/mysql/en/LOAD_DATA.html
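To make that concrete, here's a minimal sketch of running it through DBI. The database name, credentials, table name (my_table), and file name are all placeholders, and the terminator clauses are just the common defaults for comma-separated files; adjust to taste:

use strict;
use warnings;
use DBI;

# Connection details are placeholders -- substitute your own.
my $dbh = DBI->connect('dbi:mysql:database=mydb;host=localhost',
                       'user', 'password', { RaiseError => 1 });

my $filename = 'data';                      # hypothetical base name
my $path     = $dbh->quote("$filename.csv"); # quote() adds the SQL quotes safely

# LOCAL makes the client send the file; without it, the *server*
# must be able to read the file from its own filesystem.
$dbh->do("LOAD DATA LOCAL INFILE $path
          INTO TABLE my_table
          FIELDS TERMINATED BY ','
          OPTIONALLY ENCLOSED BY '\"'
          LINES TERMINATED BY '\\n'");

$dbh->disconnect;

The quote() call is worth the extra line: it sidesteps the interpolation-and-quoting mistakes that are easy to make when pasting a file path straight into an SQL string.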

Forget that fear of gravity, Get a little savagery in your life.

Replies are listed 'Best First'.
Re^2: Implementing a buffered read-and-insert algorithm
by radiantmatrix (Parson) on Dec 13, 2004 at 21:52 UTC

    I'm chugging through 1.6M lines in about 53m. Also, most of my reasons for not using a built-in file loader have to do with portability and expandability. See this reply for more detail on the matter.

    And, of course, part of it is a desire to learn more about neat things I can do with Perl. ;-)

    radiantmatrix
    require General::Disclaimer;
    s//2fde04abe76c036c9074586c1/; while(m/(.)/g){print substr(' ,JPacehklnorstu',hex($1),1)}