The first and simplest is that having a batch process update pages is less maintainable. Someone can accidentally remove or break the process without knowing what it does, and six weeks later you discover there is a problem with the page but have no clue how it was supposed to work, and no obvious trail back to how it was supposed to be generated. As the mantra goes, premature optimization is usually wrong.
A closely related second issue is the ease of testing and development. It is inherently a faster and more accurate development cycle to edit a script and see feedback immediately than it is to edit a batch process, run it, and then get feedback. Much experience indicates that developing with rapid feedback leads to faster and better development than developing without it. (All else being equal, of course.)
Third, we have the issue of the amount of storage. Often, to replace a single dynamic page, you have to create multiple static pages. This can quickly result in a combinatorial explosion of pages, eventually leading to storage problems that make it much saner just to have a dynamic page in the first place.
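To make the explosion concrete, here is a minimal sketch of the arithmetic. The customization names and counts are hypothetical; the point is that pre-generating every combination multiplies the page count.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical independent customizations: a static-page scheme needs
    # one pre-generated page for every combination, so the counts multiply.
    my %choices = (
        language => 3,   # e.g. en, fr, de
        theme    => 4,
        layout   => 5,
    );

    my $pages = 1;
    $pages *= $_ for values %choices;
    print "Static pages required: $pages\n";   # 3 * 4 * 5 = 60

Add a sixth theme or a fourth language and the total jumps again, while a dynamic page stays a single script.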
And closely related to the ease of storage is the ease of adding further customizations. If you have a dynamic page and want another customization, it is easily done: just add it. If you have a batch process, you have to decide whether to rewrite it as a dynamic script to avoid the pain, or whether to keep adding more pages.
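A minimal CGI sketch of "just add it" (the parameter names and markup are assumptions, not anything from a particular site): the second param() lookup below is the entire cost of a new customization, where a batch process would instead have to regenerate a whole new family of static files.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use CGI qw(:standard);

    my $theme = param('theme') || 'plain';
    my $lang  = param('lang')  || 'en';    # the newly added customization

    print header('text/html');
    print qq{<html><body class="$theme" lang="$lang">...</body></html>\n};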
In short, there are a lot of issues. But there are partial solutions out there, and several of the more advanced templating systems (e.g. Template::Toolkit and HTML::Mason) have put a fair amount of work into figuring out how to reasonably trade off static caching, dynamic caching, and on-the-fly pages.
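For one example of that trade-off, Template::Toolkit's COMPILE_DIR and COMPILE_EXT options cache compiled templates on disk, so repeat requests skip the parse step while the page content itself stays dynamic. A minimal sketch follows; the paths, template name, and variables are hypothetical.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use Template;

    # Cache compiled templates under /tmp/tt_cache; only the first request
    # pays the compilation cost, later ones reuse the .ttc files.
    my $tt = Template->new({
        INCLUDE_PATH => '/path/to/templates',   # assumed layout
        COMPILE_DIR  => '/tmp/tt_cache',
        COMPILE_EXT  => '.ttc',
    }) or die Template->error;

    my $output;
    $tt->process('page.tt', { user => 'Superlman' }, \$output)
        or die $tt->error;
    print $output;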