There is no real standard definition of CSV, so it's hard to answer without seeing your data. But DBD::CSV is built to handle almost all varieties as long as you give it the right settings for the end-of-line character ('\012' if the files were created on *nix, '\015' if they were created on a classic Mac, '\015\012' if on Windows, or whatever other character separates the lines). You may also need to set the separator character (usually a comma, but often a tab or semicolon), the quote character (usually a double quote), and the escape character (usually a second double quote or a backslash). Once you have those set for the file, you should be able to use all DBI methods and do all basic SQL operations.
See SQL::Statement::Syntax for a list of the supported SQL syntax, DBD::CSV for how to set the end-of-line and the other csv attributes, and DBI for basic usage.
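For what it's worth, here's a minimal sketch of what that looks like. The directory, table name, and column names are made up, and you should double-check the attribute names (csv_eol, csv_sep_char, and friends) against the docs for your DBD::CSV version:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Treat a directory of CSV files as a database. The csv_* attributes
    # below are the end-of-line, separator, quote, and escape characters
    # discussed above; adjust them to match how your files were written.
    # "/path/to/csvdir" and the table "orders" are placeholders.
    my $dbh = DBI->connect( 'dbi:CSV:', undef, undef, {
        f_dir           => '/path/to/csvdir',
        csv_eol         => "\015\012",   # Windows-style line endings
        csv_sep_char    => ';',          # semicolon-separated fields
        csv_quote_char  => '"',
        csv_escape_char => '"',
        RaiseError      => 1,
    } ) or die $DBI::errstr;

    # From here on it's ordinary DBI and basic SQL.
    my $sth = $dbh->prepare('SELECT * FROM orders WHERE amount > 100');
    $sth->execute;
    while ( my $row = $sth->fetchrow_hashref ) {
        print "$row->{id}: $row->{amount}\n";
    }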
| [reply] |
In general, yes. But given that different applications and people have varying levels of strictness about their CSV-ish-ness, I would do some testing before committing to automating anything in production. Even after that, you should try to think of some (all?) of the ways the creator of the CSV file can screw up the format, and try to detect them.
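One rough way to do that kind of sanity check up front is to run the file through Text::CSV once before letting anything else touch it. A sketch, assuming a consistent field count is one of the things you care about (the filename is a placeholder):

    use strict;
    use warnings;
    use Text::CSV;

    # Parse every record and complain about anything that doesn't parse
    # cleanly or doesn't have the same number of fields as the first row.
    my $csv = Text::CSV->new( { binary => 1, sep_char => ',' } )
        or die Text::CSV->error_diag;

    open my $fh, '<', 'suspect.csv' or die $!;
    my $expected_cols;
    while ( my $row = $csv->getline($fh) ) {
        $expected_cols //= scalar @$row;
        warn sprintf "around input line %d: %d fields, expected %d\n",
            $., scalar @$row, $expected_cols
            if @$row != $expected_cols;
    }
    # getline returns undef at EOF *and* on a parse error; tell them apart.
    $csv->eof or die "parse error: " . $csv->error_diag;
    close $fh;

Note that $. counts physical lines, so the reported position is only approximate if fields contain embedded newlines.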
| [reply] |