in reply to DBI INTERVAL Error

Question is: is INTERVAL not part of DBI SQL? Any other suggestions?

It's not to do with DBI itself; it's the driver you are using. See the error message: DBD::CSV doesn't support date datatypes.
And as far as I can tell, INTERVAL is only supported by Oracle.

But as the error message also indicates, DBD::CSV uses SQL::Statement as its SQL engine, and you can implement your own SQL functions as explained in the SQL::Statement::Syntax POD, in the section "Extending SQL syntax using SQL".
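For example, here's a minimal sketch of that route using the CREATE FUNCTION ... EXTERNAL mechanism documented in SQL::Statement::Functions. The tasks table, its columns, and the date_add_days function are invented for illustration, and the calling convention (two context objects before the SQL arguments) is as I read those docs:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Time::Piece;
    use Time::Seconds qw(ONE_DAY);

    # hypothetical UDF: add N days to an ISO date string (YYYY-MM-DD);
    # SQL::Statement passes the statement and evaluation objects
    # before the actual SQL arguments
    sub date_add_days {
        my ( $self, $owner, $date, $days ) = @_;
        my $t = Time::Piece->strptime( $date, '%Y-%m-%d' );
        $t += $days * ONE_DAY;
        return $t->ymd;
    }

    my $dbh = DBI->connect( 'dbi:CSV:', undef, undef,
        { f_dir => '.', RaiseError => 1 } );

    # register the sub from the current script so SQL can call it
    $dbh->do('CREATE FUNCTION date_add_days EXTERNAL');

    # ISO dates compare correctly as plain strings
    my $sth = $dbh->prepare(
        q{SELECT id, due FROM tasks WHERE due <= date_add_days('2016-03-20', 30)});
    $sth->execute;
    while ( my $row = $sth->fetchrow_arrayref ) {
        print "$row->[0]\t$row->[1]\n";
    }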

On the other hand, depending on how large your data set is, it might be simpler to implement the filter in Perl and apply it as you fetch rows from the statement handle, or when building the query, as shown by poj.
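A minimal sketch of the fetch-and-filter variant, again against a made-up tasks CSV: the interval arithmetic happens in Perl, so the SQL stays within what DBD::CSV supports.

    use strict;
    use warnings;
    use DBI;
    use Time::Piece;
    use Time::Seconds qw(ONE_MONTH);

    my $dbh = DBI->connect( 'dbi:CSV:', undef, undef,
        { f_dir => '.', RaiseError => 1 } );

    # the "one month ago" boundary, computed in Perl instead of SQL
    my $cutoff = ( localtime() - ONE_MONTH )->ymd;    # e.g. '2016-02-20'

    my $sth = $dbh->prepare('SELECT id, due FROM tasks');
    $sth->execute;
    while ( my $row = $sth->fetchrow_hashref ) {
        next if $row->{due} lt $cutoff;    # ISO dates sort lexically
        print "$row->{id}\t$row->{due}\n";
    }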

Hope this helps!


Edit: correction
The way forward always starts with a minimal test.

Re^2: DBI INTERVAL Error
by Marshall (Canon) on Mar 20, 2016 at 18:03 UTC
    This doesn't help the OP, but it appears that in addition to Oracle, MySQL also has INTERVAL: https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html.

    I like the idea of using Perl as a "filter helper" function. I don't use the DBI often, but when I do, I try to make things SQLite-compatible and "help" it with Perl when necessary. The result will usually work on many DBs.
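    For instance, a minimal sketch of that habit (the tasks table is made up): SQLite has no INTERVAL, but its date() modifiers cover the common cases, and anything fancier can be precomputed in Perl and passed as a bind value.

        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect( 'dbi:SQLite:dbname=tasks.db', '', '',
            { RaiseError => 1 } );

        # SQLite's own date arithmetic stands in for INTERVAL
        my $rows = $dbh->selectall_arrayref(
            q{SELECT id, due FROM tasks WHERE due < date('now', '+30 days')},
            { Slice => {} } );
        print "$_->{id}\t$_->{due}\n" for @$rows;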

    As another thought, it is of course possible for Perl to write and then run the SQL. Read some rows, then construct an SQL statement with dates calculated by Perl, avoiding the INTERVAL gizmo. It is odd to have one program write another program (the SQL statement), but this can work. In that case you wind up with a different kind of Perl code.
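    A minimal sketch of that idea (same made-up tasks table): Perl computes the date boundary, and the SQL that actually runs only ever sees a plain value.

        use strict;
        use warnings;
        use DBI;
        use Time::Piece;
        use Time::Seconds qw(ONE_DAY);

        my $dbh = DBI->connect( 'dbi:CSV:', undef, undef,
            { f_dir => '.', RaiseError => 1 } );

        # instead of "due < CURRENT_DATE + INTERVAL 30 DAY", compute the
        # boundary in Perl and hand it over as a bind value
        my $limit = ( localtime() + 30 * ONE_DAY )->ymd;

        my $rows = $dbh->selectall_arrayref(
            'SELECT id, due FROM tasks WHERE due < ?',
            { Slice => {} }, $limit );
        printf "%s\t%s\n", $_->{id}, $_->{due} for @$rows;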

    As another weird thought, since we are talking about CSV... Spreadsheets work very well with that format. It's been more than a decade, but I did have one project where I automated a spreadsheet with macros, then automated Word with macros to turn the result into a fancy-looking management report. This was of course not "efficient" CPU-wise, but it was an efficient use of my time each week. Mileage varies, of course, but sometimes it is just a matter of getting the result, to heck with the MIPS it takes. How big these .CSV files are, and the scope of the work, is of course unknown to me.