"But now, do lots of people really use this stuff to a moderate degree of complexity in the real world, in production?"

I certainly don't. For that matter, I don't usually embed SQL in my code. I prefer to look at a database as a separate service with its own APIs, whether they're implemented as Perl with DBI or C with PL/SQL. I feel the data tier should be separately testable, and that changes to the database structure should not affect the applications that use it.
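To make that concrete, here's a minimal sketch of what I mean by "a separate service with its own API", in Perl with DBI. The module name (My::UserStore), the DSN, and the users table are all made up for illustration; the point is that callers only ever see the API, never the SQL or the schema.

    package My::UserStore;    # hypothetical module; the data-tier API lives here
    use strict;
    use warnings;
    use DBI;

    sub new {
        my ($class, %args) = @_;
        # RaiseError so callers get exceptions rather than silent failures
        my $dbh = DBI->connect($args{dsn}, $args{user}, $args{password},
                               { RaiseError => 1, AutoCommit => 1 });
        return bless { dbh => $dbh }, $class;
    }

    # Callers ask a question; they never see the table or the SQL.
    sub user_name {
        my ($self, $id) = @_;
        my ($name) = $self->{dbh}->selectrow_array(
            'SELECT name FROM users WHERE id = ?', undef, $id);
        return $name;
    }

    1;

Change the schema and only this module needs to follow; the applications calling user_name() don't. It's also trivially testable against a scratch database.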
"By moderate complexity I mean a set of at least 100 or so tables, queries up to a screen in length, 5+ joins, and the like."

I feel that if you've got 100 tables, you're likely suffering from bad design. If the schema isn't digestible by a single human at a glance, then it's a likely candidate for being broken into multiple co-operative systems.
Similarly, if you're frequently joining over 5 tables, it's probably because you've over-normalized. Normalization is important, but not at the expense of performance and understandability.
There are exceptions: sequencing over archive tables is a good use of many joins/unions. Joining over many static lookup tables (int -> string) is probably not a good idea, though; you likely want to fetch the lookup tables once and cache them as hashes on the client, since it's cheaper to send just the INTs to the client from the main queries.
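A sketch of that client-side lookup cache, assuming a made-up status_codes table (id, label) and an orders table that stores only the integer status_id:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:SQLite:dbname=app.db', '', '',
                           { RaiseError => 1 });

    # Fetch the static lookup table once and keep it as a hash: id => label.
    my %status_label = map { $_->[0] => $_->[1] }
        @{ $dbh->selectall_arrayref('SELECT id, label FROM status_codes') };

    # The main query ships only cheap INTs over the wire, no join needed...
    my $orders = $dbh->selectall_arrayref(
        'SELECT order_id, status_id FROM orders');

    # ...and the client resolves them from the cached hash.
    for my $row (@$orders) {
        my ($order_id, $status_id) = @$row;
        printf "%s: %s\n", $order_id, $status_label{$status_id};
    }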
I have no problem with complicated queries, but I often find that they're mostly unnecessary and/or performance killers. If you treat the database as an API, you can layer a minimal amount of code around it to implement that API. Treated as a separate entity, that API is free to implement appropriate caching and replication semantics where they're most useful.
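For example (again just a sketch, building on the hypothetical My::UserStore above), the API layer can quietly add a per-process cache without any caller changing a line:

    package My::UserStore;
    use strict;
    use warnings;

    my %name_cache;    # per-process cache, keyed by user id

    # Same answer as user_name(), but only hits the database once per id.
    sub user_name_cached {
        my ($self, $id) = @_;
        $name_cache{$id} = $self->user_name($id)
            unless exists $name_cache{$id};
        return $name_cache{$id};
    }

    1;

Whether that cache (or a read replica behind it) is appropriate is a decision the data tier gets to make in one place.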
-David
In reply to Re^3: LINQ/Ambition for Perl? by erroneousBollock, in thread LINQ/Ambition for Perl? by Anonymous Monk