I've got some data that are supplied to me in flat file format. I'd like to use them to populate several related fields in a database.
The data are roughly of the form:
    TopicA/TopicB   Title1  Description1    KeywordA/KeywordB/KeywordC
    TopicB          Title2  Description2    KeywordB/KeywordD
    TopicA/TopicC   Title3  Description3    KeywordA/KeywordC
Each item has a title and a description. Each item is associated with one or more topics (separated by a '/') and one or more keywords (also separated by a '/'). Fields are tab separated.
I'm using a Microsoft SQL Server 7 database consisting of tables representing the resources (title and description), the topics, the keywords, and the many-to-many relationships between resources and topics and between resources and keywords.
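For concreteness, here's a minimal sketch of the schema I have in mind; the table and column names are placeholders of my own, and the ODBC DSN is made up:

    use DBI;

    my $dbh = DBI->connect('dbi:ODBC:mydsn', 'user', 'password',
                           { RaiseError => 1 });

    # Resources hold the title and description; topics and keywords get
    # their own tables, with link tables for the many-to-many relationships.
    $dbh->do($_) for (
        'CREATE TABLE resource (id INT IDENTITY PRIMARY KEY,
                                title VARCHAR(255),
                                description VARCHAR(1000))',
        'CREATE TABLE topic   (id INT IDENTITY PRIMARY KEY, name VARCHAR(100))',
        'CREATE TABLE keyword (id INT IDENTITY PRIMARY KEY, name VARCHAR(100))',
        'CREATE TABLE resource_topic   (resource_id INT, topic_id INT)',
        'CREATE TABLE resource_keyword (resource_id INT, keyword_id INT)',
    );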
I've written a quick script that parses the flat file contents into a Perl data structure, using Text::CSV_XS to split the tab-delimited data into fields and then calling split with the '/' delimiter to create arrays of topics and keywords.
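In case it helps, the parsing step looks roughly like this (the file name and hash keys are mine):

    use Text::CSV_XS;

    my $csv = Text::CSV_XS->new({ sep_char => "\t", binary => 1 });
    open my $fh, '<', 'data.txt' or die "data.txt: $!";

    my @records;
    while (my $row = $csv->getline($fh)) {
        my ($topics, $title, $description, $keywords) = @$row;
        push @records, {
            title       => $title,
            description => $description,
            topics      => [ split m{/}, $topics ],
            keywords    => [ split m{/}, $keywords ],
        };
    }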
To populate the database, I could use a series of SQL statements with DBI, but I'd have to look up the generated ID fields for the resources after creating them. I'm wondering if there's a more elegant, succinct way to do this, as I'll likely have to do this again in the future with different data structures.
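To make that clunky version concrete, here's roughly what I mean; the find_or_create helper and all table names are my own placeholders, and the @@IDENTITY query is SQL Server-specific:

    use DBI;

    my $dbh = DBI->connect('dbi:ODBC:mydsn', 'user', 'password',
                           { RaiseError => 1 });

    # @records is the structure built by the parsing script above.
    # Insert each resource, then fetch the generated ID so the link
    # tables can be populated.
    my $ins = $dbh->prepare(
        'INSERT INTO resource (title, description) VALUES (?, ?)');

    for my $rec (@records) {
        $ins->execute($rec->{title}, $rec->{description});
        my ($resource_id) = $dbh->selectrow_array('SELECT @@IDENTITY');

        for my $topic (@{ $rec->{topics} }) {
            my $topic_id = find_or_create($dbh, 'topic', $topic);
            $dbh->do('INSERT INTO resource_topic (resource_id, topic_id)
                      VALUES (?, ?)', undef, $resource_id, $topic_id);
        }
        # ... and the same pattern again for keywords ...
    }

    # Hypothetical helper: return the row's ID, inserting the row
    # first if it doesn't already exist.
    sub find_or_create {
        my ($dbh, $table, $name) = @_;
        my ($id) = $dbh->selectrow_array(
            "SELECT id FROM $table WHERE name = ?", undef, $name);
        return $id if defined $id;
        $dbh->do("INSERT INTO $table (name) VALUES (?)", undef, $name);
        return ($dbh->selectrow_array('SELECT @@IDENTITY'))[0];
    }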
DBIx::Renderer looks like it might help me, but at present the only database supported is PostgreSQL. Alzabo and Tangram both seem to be overkill for a small problem like this - my first impression is that they are more powerful and complex than what I want.
My question is this: Have you solved a similar problem, and if so, how did you go about it? Also, if there's anything I've missed or any of my assumptions above seem wrong, please let me know.