in reply to Re^2: DBD::CSV and really bad legacy flat file
in thread DBD::CSV and really bad legacy flat file
To be honest, I've done these conversions a few times, including converting a human-typed table (which was autoconverted into HTML via Lotus Domino) to an RDBMS. I did my development, against direct orders, while the table was still being updated. All I did was write the conversion tool and develop everything around that "sample" data; then, once the switch was made and the "original" was declared frozen, I redid the conversion (it took 10 or 15 minutes) and put my database live.
So, the question is: will this legacy flat file continue to live, or is it eventually going to be replaced by something new?
If it is going to live, and you're going to need to continue to read directly from it, you may be able to subclass DBD::File somehow to fake this - it may not be as fast as working on converted data, but it may still be faster than converting all the data, only to work with a subset of it. Or, at the least, it means you'll only have a single source for data, rather than working from an "unofficial" data source.
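As a rough illustration of reading the legacy file in place through the DBI: rather than a full DBD::File subclass, this sketch just leans on DBD::CSV's own attributes (csv_sep_char, csv_tables, etc.), which may be enough if the format is at least regular enough for Text::CSV_XS to parse. The directory, file name, delimiter, and column names below are all made up for the example; a genuinely ugly format would push you toward overriding the table class in a DBD::File subclass instead.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Sketch only: point DBD::CSV at the legacy file and describe its layout.
# /data/legacy, ORDERS.DAT, the '|' delimiter, and the column names are
# assumptions for illustration, not details from the original post.
my $dbh = DBI->connect( 'dbi:CSV:', undef, undef, {
    f_dir        => '/data/legacy',
    csv_eol      => "\n",
    csv_sep_char => '|',          # whatever the legacy delimiter really is
    RaiseError   => 1,
} ) or die $DBI::errstr;

# Map a friendly table name onto the legacy file and name its columns,
# assuming the file has no header row of its own.
$dbh->{csv_tables}{orders} = {
    file      => 'ORDERS.DAT',
    col_names => [qw( id customer amount status )],
};

# Query only the subset you actually need, straight from the flat file.
my $sth = $dbh->prepare(
    'SELECT id, customer, amount FROM orders WHERE status = ?'
);
$sth->execute('OPEN');
while ( my $row = $sth->fetchrow_hashref ) {
    printf "%s\t%s\t%s\n", $row->{id}, $row->{customer}, $row->{amount};
}
$dbh->disconnect;
```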
Re^4: DBD::CSV and really bad legacy flat file
by harleypig (Monk) on Jul 19, 2005 at 19:17 UTC