We disagree on what a good working understanding is. By that I don't mean enough to know how to optimize corner cases, identify bugs, or handle truly advanced concepts. By working understanding I mean enough to come up with sensible designs and layouts, and to construct working queries that mean what you think they mean.
After that, if you need to optimize and don't know enough to do it yourself, you can bring in an expert. If you have a sensibly designed database and have shown basic common sense, this is generally a fairly straightforward process.
That's true in general. But it is even more true with databases. Because a good DBA working with a semi-decent database doesn't even need to see your code to find the bad queries! How do they accomplish this miracle? Why, quite simply, the database has profiling tools to tell the DBA where it is spending its time, and bad queries will show up among the top-running statements. Now that the DBA is armed with the actual query that is misbehaving, it is a fairly simple matter for them to come to the application developers and say, "Find me where you are generating this query." A few invocations of grep later, and you're in a position to start analyzing it.
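To make that concrete, here's a minimal sketch of the kind of thing the DBA might run, assuming PostgreSQL with the pg_stat_statements module enabled and Perl's DBI to reach it. The DSN, credentials, and limit are made up for the example, and older PostgreSQL versions call the timing column total_time instead of total_exec_time:

    #!/usr/bin/perl
    # Hypothetical example only: the DSN and credentials are made up,
    # and it assumes PostgreSQL with pg_stat_statements enabled.
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'dba_user', 'secret',
                           { RaiseError => 1 });

    # Pull the ten statements the database has spent the most time on.
    my $top = $dbh->selectall_arrayref(q{
        SELECT query, calls, total_exec_time
        FROM   pg_stat_statements
        ORDER  BY total_exec_time DESC
        LIMIT  10
    }, { Slice => {} });

    # These are the queries worth grepping the application for.
    for my $row (@$top) {
        printf "%12.1f ms  %8d calls  %s\n",
            $row->{total_exec_time}, $row->{calls}, $row->{query};
    }

MySQL's slow query log and the profiling tools that ship with the commercial databases serve the same purpose. The point is simply that the database, not the application code, tells you where the time is going.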
The Pareto principle tells you that you only need to do a little bit of this optimization to clear out the big offenders. Practice bears this theory out.
Incidentally I've found that doing this kind of optimization requires application programmers and DBAs to cooperate. DBAs and application programmers each bring something to the table. A DBA might notice, for instance, that a better index would help. Alternatively the application programmer might add a caching layer to reduce how often they are querying the database. Optimization is an argument for cooperation, not an argument for making one side or the other responsible for the problem.
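As a purely illustrative sketch (the orders table, the customer_id column, and the query itself are all invented for the example), the DBA's half of such a fix might be an index, and the application programmer's half a small in-process cache so the query fires less often:

    # DBA's side, run once against the database:
    #   CREATE INDEX idx_orders_customer_id ON orders (customer_id);

    # Application programmer's side: cache a hot lookup per process so
    # the same query isn't sent over and over.
    use strict;
    use warnings;

    my %order_count_cache;

    sub order_count_for {
        my ($dbh, $customer_id) = @_;
        unless (exists $order_count_cache{$customer_id}) {
            ($order_count_cache{$customer_id}) = $dbh->selectrow_array(
                'SELECT COUNT(*) FROM orders WHERE customer_id = ?',
                undef, $customer_id,
            );
        }
        return $order_count_cache{$customer_id};
    }

Neither side can tell on their own whether the index, the cache, or both is the right answer, which is exactly why the two need to talk.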
Now the next point is that you are big on having multiple applications accessing the same database. I pointed out one example of an application that can reasonably assume it is the only one accessing its database, namely RT. It can assume that because it installs a database as part of its installation. You dismiss that as "trivial". Well most programming is actually trivial. And there are a lot of programmers who are employed in producing "trivial" applications.
Don't believe me? Look at the popularity of MS Access as a development platform! Just because something is trivial doesn't mean that it is unimportant. (History note: back in the '90s, O'Reilly had a frustrating problem. Everyone was producing bad CGI books and making a fortune. O'Reilly had all of the top Perl expertise, and nobody wanted to do a CGI book! O'Reilly knew it would sell well, but to all of the good Perl programmers, the web was "trivial". Take in parameters, spit out text. Yeah, as a programming exercise this is trivial, but it is still important...)
But your claim that only trivial applications can assume they are the only ones accessing the database is wrong in more ways than that. Because I only named RT as an example that you are likely to be familiar with. There are lots of other situations where only a single application will ever access the database.
Another example is a high-performance website. A high-performance website, by the nature of what it does, stresses databases to their limits. In order for it not to fall over, you wind up using every trick you can to make things go smoothly. You said that "DB design transcends applications for all but the most trivial." Well then, high-performance websites must be among the "most trivial" applications, because any serious mismatch between the application design and the database layout will cause a ton of extra load, and that will be a huge problem.
In fact not only do high-performance websites routinely assume they are the only ones touching the database, but as you scale them you often need multiple databases just to keep up with the site. I don't think I'm giving away anything shocking to reveal that, say, eBay does not keep all of its transactional data in one database. Instead they have multiple parallel databases, and requests are routed to the correct one. Furthermore each database is replicated multiple times, with some queries directed to read-only copies and others to the copies you can write to. They do this because there is simply no way, with current hardware, for a single database to keep up with a site of that size. (And if you scale further still, a little company called Google found that databases couldn't do what they wanted to do. Period.)
So not only does that application assume it is the only thing that will touch the database, but the stress the application puts on the database forced a massive redesign just to scale. And now nothing but that application can be expected to figure out what data lives where.
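To give a feel for why nothing else can be expected to find the data, here is a toy sketch of the sort of routing layer such an application carries around. Every name in it - the shard hosts, user_id as the shard key, the credentials - is invented for illustration, and it has no relation to how eBay actually does it:

    # Toy illustration only: hosts, credentials, and the sharding
    # scheme are all made up.
    use strict;
    use warnings;
    use DBI;

    my @shards = (
        { master   => 'dbi:Pg:dbname=app;host=db0-master',
          replicas => [ 'dbi:Pg:dbname=app;host=db0-replica1',
                        'dbi:Pg:dbname=app;host=db0-replica2' ] },
        { master   => 'dbi:Pg:dbname=app;host=db1-master',
          replicas => [ 'dbi:Pg:dbname=app;host=db1-replica1' ] },
    );

    # Pick the shard from the key, then a writable master or a
    # read-only replica depending on what the caller needs.
    sub dbh_for {
        my ($user_id, $want_write) = @_;
        my $shard = $shards[ $user_id % @shards ];
        my $dsn   = $want_write
                  ? $shard->{master}
                  : $shard->{replicas}[ int rand @{ $shard->{replicas} } ];
        return DBI->connect($dsn, 'app_user', 'secret',
                            { RaiseError => 1 });
    }

    # Reads can go to a replica; writes must go to that user's master.
    my $read_dbh  = dbh_for(42, 0);
    my $write_dbh = dbh_for(42, 1);

Once data lives behind a scheme like this, any outside application that wants to "just query the database" first has to reimplement the routing logic, which is rather the point.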
Incidentally I suspect (though admittedly I don't know for sure) that eBay does not have any single database which has all of the data from the company. Just keeping such a database up to date would be a nightmare.
Rent.com is smaller - a lot smaller. But we're big enough and growing fast enough that we are starting down the same path already.
Now our backgrounds are relevant here. Yes, in an ideal world arguments should just stand on their own. But sometimes whether an argument makes sense depends on whether you have practical experience with a case where it doesn't hold. I happen to have that experience. I strongly suspect that you don't. Which is why, in addition to pointing out why your assumptions may break in a particular environment, I also pointed out that my role in that environment gives me a lot of practical experience to speak from. You can choose to disregard that experience, but I think it is relevant to the point at hand.
Incidentally this is not the only environment I know of where applications can legitimately assume they are the only ones accessing their data. There are plenty of others; I just don't have direct experience with them. To name just one, many types of data processing businesses regularly manipulate large amounts of customer data. By contract they need to guarantee that this data doesn't go anywhere the customer didn't ask for it to go. That legal requirement is much easier to meet if each customer's data stays in its own database, in a locked-down environment where only the custom application for that customer can touch it...
And a final note. My comment about your cheap shot was based on this paragraph: "No web programmer worth his salt would advocate the mixing of Perl and HTML. Why advocate the mixing of Perl and SQL?" You make it sound as if I was advocating the mixing of Perl and SQL. That I wasn't doing, and it seemed to me a cheap shot to dismiss what I had to say in such a way. Upon reflection I realized that I should have given you the benefit of the doubt - you might not have understood that I wasn't advocating that. But hopefully if you didn't then, you do now.