This is the approach I use with a different database that can effectively only do bulk loads, because single INSERTs are very slow. The approach is:
1. Read the input data using Text::CSV_XS
2. Perform cleanup on each row
3. Add the relevant extra columns (source_file, source_date)
4. Write an output TSV file in the correct encoding (Text::CSV_XS again, this time writing tab-separated output)
5. Issue the bulk load statement
6. Check that the number of rows read in step 1, written in step 4, and retrieved from the database all match
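The steps above can be sketched roughly as follows. This is a minimal illustration, not my production code: the file names, the trimming cleanup, and the timestamp format are all placeholders you would replace with your own.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Text::CSV_XS;

my $in_file  = 'input.csv';    # hypothetical input file
my $out_file = 'output.tsv';   # staging file for the bulk load

# One parser for reading CSV, one writer configured for TSV output
my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 1 });
my $tsv = Text::CSV_XS->new({
    binary     => 1,
    sep_char   => "\t",
    quote_char => undef,       # plain tab-separated output, no quoting
    eol        => "\n",
    auto_diag  => 1,
});

open my $in,  '<:encoding(UTF-8)', $in_file  or die "$in_file: $!";
open my $out, '>:encoding(UTF-8)', $out_file or die "$out_file: $!";

my ($read, $written) = (0, 0);
while (my $row = $csv->getline($in)) {
    $read++;
    s/^\s+|\s+$//g for @$row;                 # step 2: cleanup (trim whitespace)
    push @$row, $in_file, scalar localtime;   # step 3: add source_file, source_date
    $tsv->print($out, $row);                  # step 4: write TSV row
    $written++;
}
close $in;
close $out or die "$out_file: $!";

# Step 6 (partial): compare rows read vs. rows written; the count
# retrieved from the database is checked after the bulk load runs.
die "row count mismatch: read $read, wrote $written\n" if $read != $written;
print "read $read rows, wrote $written rows\n";
```

After this script runs, the bulk load statement (step 5) ingests `output.tsv`, and a final `SELECT COUNT(*)` against the target table completes the check in step 6.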
Since I don't need to add or delete columns, I guess I can just clean up the input file in place, as I'm doing now, and then use LOAD DATA as you proposed.
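For reference, a bulk load of a cleaned-up TSV file could look something like the statement below. This is a hedged sketch assuming MySQL; the file name, table name, and column list are made up, and `LOCAL` (which reads the file from the client side) may need to be enabled on both client and server.

```sql
-- Hypothetical table and file names; adjust encoding and columns to taste.
LOAD DATA LOCAL INFILE 'output.tsv'
INTO TABLE my_table
CHARACTER SET utf8mb4
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
(col1, col2, source_file, source_date);
```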
Is there any real added value in using Text::CSV_XS, or am I missing something?
Actually, looking at my code, I'm using Text::CSV_XS for no reason :). Maybe I just wanted to be fancy :)