The best advice is the kind you don't want to hear. :-)
I wanted honest advice, hippo, and I am grateful that you provided it 😊
Having a separate test server is not something I had even considered; now, it is firmly on the radar. But a server change isn't happening just yet, so in the short term, I am looking at a better way of dealing with global variables...unless, of course, the way I'm doing it already is as good as the alternatives.
| [reply] |
Well, nothing beats separate servers IMHO but that's not always an option. For example, we provide shared hosting for some customers and obviously they need to have their data isolated from each other while being on the same server. We achieve this with strictly-enforced permissions on the users' files. Each user only has read access to their own files, not those of any other user. You could set up the same, at least for the credential-filled file. Just ensure that the untrusted dev user has no permissions to access the live file (ideally the whole live tree) and that's all you need.
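To make the permissions side concrete, here is a minimal Perl sketch of a belt-and-braces check on top of the filesystem permissions: it refuses to load a credentials file unless that file is private to its owner. The path, file format and expected 0600 mode are illustrative assumptions, not anything from this thread.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical location of the live credentials file - adjust to taste.
    my $cred_file = '/site/live/conf/credentials.conf';

    sub load_credentials {
        my ($file) = @_;
        my @st = stat $file or die "Cannot stat $file: $!\n";
        my $mode = $st[2] & 07777;

        # Refuse to continue if group or others have any access,
        # i.e. if an untrusted dev account could read the file.
        die sprintf("%s has unsafe permissions %04o (expected 0600)\n", $file, $mode)
            if $mode & 0077;

        open my $fh, '<', $file or die "Cannot open $file: $!\n";
        my %cred;
        while (my $line = <$fh>) {
            next if $line =~ /^\s*(?:#|$)/;                        # skip comments and blanks
            my ($key, $value) = $line =~ /^(\w+)\s*=\s*(.*?)\s*$/ or next;
            $cred{$key} = $value;
        }
        close $fh;
        return \%cred;
    }

    my $cred = load_credentials($cred_file);

The chown/chmod on the live file still does the real work; a check like this just catches a mis-deployed file early.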
For configuration variables in our own sites (not customers) we tend to use environment variables declared within the webserver conf (which is itself unreadable by the normal users). This keeps the per-site filesystem clean and means that we don't need to take care with that when deploying in-site code between dev and prod. It's a bit more of a faff to do this for the customers and they have less need - most don't bother with a dev environment hosted with us.
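For the environment-variable route, the Perl side can stay very small. A minimal sketch, assuming the vhost config sets the variables with Apache's SetEnv directive; the MYSITE_* names are invented for this example:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Assumes the vhost config (unreadable by normal users) contains something like:
    #   SetEnv MYSITE_DB_DSN  "dbi:mysql:database=example;host=localhost"
    #   SetEnv MYSITE_DB_USER "example_user"
    #   SetEnv MYSITE_DB_PASS "s3cret"

    my $dsn = $ENV{MYSITE_DB_DSN}
        or die "MYSITE_DB_DSN is not set - is the webserver config in place?\n";
    my $user = $ENV{MYSITE_DB_USER} // '';
    my $pass = $ENV{MYSITE_DB_PASS} // '';

    # The same script runs unchanged on dev and prod;
    # only the per-vhost SetEnv lines differ.
    my $dbh = DBI->connect($dsn, $user, $pass, { RaiseError => 1, AutoCommit => 1 });

(Exactly how those variables reach %ENV depends on how the Perl code runs: plain CGI picks up SetEnv automatically, while mod_perl setups may need PerlSetEnv instead, so treat this as a sketch to adapt.)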
HTH.
| [reply] |
The dev accounts only have access to /site/test/ so they don't have any access to the live branch.
My question was more about the methodology of defining the site-wide variables.
Is putting them in a module and declaring them with 'our' a sensible way to do it? | [reply] |
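For illustration only, here is a minimal sketch of the kind of module that question describes; the package and variable names are invented:

    package MySite::Config;

    use strict;
    use warnings;
    use Exporter 'import';

    our @EXPORT_OK = qw($SITE_NAME $BASE_URL %DB);

    # Site-wide settings, declared with 'our' so they are package
    # variables that callers can import, or reach fully qualified
    # as $MySite::Config::SITE_NAME and so on.
    our $SITE_NAME = 'Example Site';
    our $BASE_URL  = 'https://www.example.com';
    our %DB = (
        dsn  => 'dbi:mysql:database=example;host=localhost',
        user => 'example_user',
    );

    1;

A calling script would then import just what it needs:

    use MySite::Config qw($BASE_URL %DB);
    print "Connecting to $DB{dsn} for $BASE_URL\n";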
The best advice is the kind you don't want to hear. :-)
Well...you might be surprised...
Don't host test and prod on the same server
We've been revising our infrastructure plan, and we've reached agreement on the next stage of growth. This will be a server for production and test environments plus a smaller server for dev. Test will be for making relatively small changes and releasing new features limited to our codebase, whereas dev will be for testing system-wide changes such as database versions, OS upgrades and Apache configuration.
It's not happening immediately as there is not the revenue or traffic to warrant it right now, but it is agreed as a forward plan.
Thanks hippo. Before this thread, the idea of a separate server hadn't entered our thought process.
| [reply] |
We've been revising our infrastructure plan, and we've reached agreement on the next stage of growth. This will be a server for production and test environments plus a smaller server for dev.
Consider using virtual machines for the development server, so you can easily create several test servers (one test server, one staging server, one victim server for each developer) in the future. There are several good solutions for running VMs that I know of:
- VMware
- I haven't used it for a long time. Several variants, some running on bare metal, some on top of Windows or Linux. The current owner is trying to squeeze every last cent out of VMware users, no matter how much the brand is damaged by that behaviour.
- VirtualBox
- Runs on top of Windows, Linux, MacOS X, Solaris. Mostly GPL software, some nice features (IIRC, USB 3.0 and Remote Desktop) are free as in beer, but not GPL. Very nice on a desktop. Management by native application.
- Proxmox
- Runs on top of Debian Linux (it can come bundled with Debian if you want), provides not only VMs but also containers. Open source, based on many existing open source packages (LXC, qemu, novnc, Perl and tons of others). Can be clustered. Management via web browser. Support costs money, but if you can live with just the wiki and the forums, it's free as in beer. Highly recommended for servers.
Real-world Proxmox:
- Home Setup
- 2x HP N54L (dual-core AMD, 2.2 GHz), each with 8 GByte RAM, software RAIDs on SATA hard disks for the root and data filesystems, running seven and three LXC containers respectively.
- Old Office Server
- Core i5-6400 (4x 2.7 GHz) on a Gigabyte desktop board, 32 GByte RAM, root and some data on a RAID-5 of 3x 2 TB HDD, other data on a second RAID-5 of 3x 2 TB HDD, currently running seven of the 18 configured Linux VMs.
- New Office Server
- Ryzen 7 2700X (8x 3.7 GHz) on a Gigabyte desktop board, 64 GByte RAM, root and some data on a RAID-5 of 3x 2 TB SSD, other data on a second RAID-5 of 3x 2 TB SSD, currently running 10 of the 15 configured VMs, most of them run Windows (XP, 7 or 10), the other ones Linux.
Neither the home setup nor the office servers run in a cluster, as you need at least three servers for a cluster, and you should have a dedicated storage system. The home servers really need more RAM, but work well enough for two users. The two office machines serve about 15 users. Both setups run file and mail servers, SVN and databases. At home, Urbackup also runs in an LXC. At work, Urbackup runs on a separate, non-virtual server. At work, there are also several Jira instances, a Samba domain controller, and some test servers running in VMs.
Some lessons learned:
- Running one software RAID-5/6 with a lot of disks (six) really sucks, as each write access is amplified by a factor of six, so the machine is severely limited by disk I/O. I've changed that to two software RAID-5 arrays with three disks each, which significantly reduces the I/O load. The obvious disadvantage is that only one disk per RAID may fail and it needs to be replaced ASAP; a RAID-6 would degrade to a fully working RAID-5 if one disk failed.
- SMR hard disks are a real pain in the backside. They are just garbage. Don't buy them. Even when they happen to work, their performance sucks. The new server has seen five of them fail and be replaced over the last five years.
- SATA SSDs (Samsung 870 EVO) replacing SMR disks are a huge performance gain. Highly recommended when used with a working backup.
- (The CMR hard disks in the old office server just work: fast enough and with no failures.)
- Nothing beats RAM and CPU cores except for more RAM and more CPU cores. Buy as many cores and as much RAM as possible.
- Proxmox recommends a hardware RAID, but software RAID-1 with two disks and RAID-5 with three disks are just fine. There will be some I/O load peaks, but they rarely matter in our setup.
- You don't need server grade hardware to run servers 24/7. Desktop hardware in a 19 inch case with hot swap disk cages works just fine.
- Proper server hardware has remote management and redundant power supplies; both can be handy at times, but you can do without them.
- RAID recovery after a power outage takes a day at full I/O load and makes the server unusable. If your machines serve more than just two home users, you want one dedicated UPS per server and one dedicated line breaker per UPS.
Alexander
--
Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)
| [reply] |