This scheme is substantially more complex than just hitting the database. There are more files. If you ever need to reproduce the installation, it is not always obvious where to get them from. With things scattered around it will take longer for people to learn the system.
Plus you have all sorts of possible synchronization errors. For instance, you change where the files are dumped, but somewhere you still have a reference to the old location that appears to work because the file is still there...then a month later you delete that file and have to figure out how it ever worked. Or you lose a cron job and only find out a couple of months later what it was for. Or someone decides to refresh a file from a long-running daemon, and the daemon doesn't survive a reboot six months later.
In short there are a lot of possible failure modes. Most of them won't happen to you, and they won't happen often. But why let them happen at all?
That said, this can work, and there are good reasons to go with a design like that. Certainly if I needed to run a high-performance website, I would look at something like that to offload work onto static content that can be readily distributed across a cluster. But I wouldn't make my life more complex up front when I had a choice.
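For concreteness, here is a minimal sketch of the kind of static-dump piece I mean. The connection details, schema, and paths are made up for illustration. A cron-driven script like this regenerates a static page from the database so the web servers never have to hit the database per request -- and it is exactly the sort of extra moving piece whose cron entry or output path can quietly drift, as described above.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;
use File::Temp qw(tempfile);

# Hypothetical connection info and schema, purely for illustration.
my $dbh = DBI->connect(
    "dbi:mysql:database=site", "www", "secret",
    { RaiseError => 1, AutoCommit => 1 },
);

my $rows = $dbh->selectall_arrayref(
    "SELECT title, body FROM articles ORDER BY posted_at DESC LIMIT 20"
);

# Build the page in a temp file and rename it into place, so readers
# never see a half-written file if the dump dies partway through.
# (Real code would also HTML-escape the data.)
my ($fh, $tmp) = tempfile( DIR => "/var/www/static" );
print $fh "<html><body>\n";
printf $fh "<h2>%s</h2>\n<p>%s</p>\n", @$_ for @$rows;
print $fh "</body></html>\n";
close $fh or die "close failed: $!";
rename $tmp, "/var/www/static/front_page.html"
    or die "rename failed: $!";

$dbh->disconnect;

Every name in there -- the cron schedule that runs it, the directory it writes into, the file the web servers expect to find -- is a piece of state that lives outside the database, which is exactly where the failure modes above come from.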