Purist theory often doesn't match reality. I have a story that illustrates that.
Way back when I was a C programmer we hired a well-meaning QA head. One of her jobs was to bring in some code analysis tools, and the engineering staff was tasked with evaluating them. We immediately set out on a realism test: take a few pieces of heavily used common code that had few bugs in their source code revision history and throw those at the tool, then take some of our worst code -- mostly code we had outsourced -- that had pages and pages of revision history related to bug fixes and throw that at it. The bad code won. When we probed into why, a few things became obvious:
- The code analyzer liked the run-on initialization code in the bad source that set everything to hard-coded values. It liked that code because there were few branches (no conditionals).
- It disliked the complexity of our common code. Most of that code was complex because it was hiding complex problems from the rest of the system, so it had a lot of branches, etc. to handle that complexity and present it to the rest of the code in a simpler manner. That code was factored much better than the bad code (smaller, well-defined functions, etc.), but the analyzer, when looking at it as a whole, seemed to disregard that.
- It disliked some of the code constructs we had found resulted in fewer bugs, such as allowing more than one exit point when forcing a single one made the code harder to maintain (see the sketch after this list).
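To make that concrete, here's a rough sketch of the kind of contrast we were looking at (not the actual code -- the function names and values are made up for illustration):

    #include <stddef.h>

    /* Hypothetical example. A metric that only counts branches rates this
     * long, hard-coded initializer as trivially simple (one straight-line
     * path), no matter how many pages of it there are. */
    static int g_retry_limit, g_timeout_ms, g_buffer_len, g_max_conns;

    void init_defaults(void)
    {
        g_retry_limit = 3;
        g_timeout_ms  = 5000;
        g_buffer_len  = 4096;
        g_max_conns   = 64;
        /* ...pages more of the same in the real thing... */
    }

    /* Early exits: each precondition is checked and rejected immediately.
     * Tools that flag "more than one return statement" penalize this form. */
    int send_frame(const char *buf, size_t len, int fd)
    {
        if (buf == NULL)
            return -1;
        if (len == 0 || len > 4096)
            return -1;
        if (fd < 0)
            return -1;
        /* ...do the actual work... */
        return 0;
    }

    /* The same checks forced through a single exit point: the logic nests,
     * and a status code has to be threaded through the whole function. */
    int send_frame_single_exit(const char *buf, size_t len, int fd)
    {
        int rc = -1;
        if (buf != NULL) {
            if (len > 0 && len <= 4096) {
                if (fd >= 0) {
                    /* ...do the actual work... */
                    rc = 0;
                }
            }
        }
        return rc;
    }

The early-return style was one of the constructs our own bug history favored, but the metrics only saw the extra return statements.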
The QA head was not pleased, which we found even more amusing. She started to insinuate that we were wrong since the analyzer used "standard metrics" that "don't lie." Our argument was that history is even less of a liar. She quit not long afterward. Score one for engineering. 8-)
This was over 10 years ago, so I'm guessing a lot of those tools may be better now. But I think it's still true that a lot of theory doesn't match reality. The best gauge of whether a given style is good is to look back historically at different coding practices and styles and see how they have held up in the real world.