in reply to Re: Holding site variables
in thread Holding site variables

The best advice is the kind you don't want to hear. :-)

Well...you might be surprised...

Don't host test and prod on the same server

We've been revising our infrastructure plan, and we've reached agreement on the next stage of growth. This will be a server for production and test environments plus a smaller server for dev. Test will be for making relatively small changes and releasing new features limited to our codebase, whereas dev will be for testing system-wide changes such as database version upgrades, OS upgrades and Apache configuration changes.

It's not happening immediately, as there isn't the revenue or traffic to warrant it right now, but it is agreed as a forward plan.
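
Concretely, the aim is for the same codebase to run unchanged in all three environments, with a single variable telling it which settings to load. A minimal sketch of the kind of thing I have in mind; the module name, settings and DSNs below are just placeholders, and the environment would be set per vhost with Apache's SetEnv directive:

    # MySite/Config.pm -- placeholder names throughout
    package MySite::Config;

    use strict;
    use warnings;

    my %settings = (
        prod => { db_dsn => 'dbi:mysql:mysite',      base_url => 'https://www.example.com'  },
        test => { db_dsn => 'dbi:mysql:mysite_test', base_url => 'https://test.example.com' },
        dev  => { db_dsn => 'dbi:mysql:mysite_dev',  base_url => 'http://dev.example.com'   },
    );

    # Pick the environment from a variable set in the Apache vhost,
    # e.g. "SetEnv MYSITE_ENV test"; default to prod if it is unset.
    sub settings {
        my $env = $ENV{MYSITE_ENV} // 'prod';
        die "Unknown environment '$env'\n" unless exists $settings{$env};
        return $settings{$env};
    }

    1;

A script then just calls MySite::Config::settings() and never needs to know which box it is running on.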

Thanks hippo. Before this thread, the idea of a separate server hadn't entered our thought process.


Re: An update (was: Re^2: Holding site variables)
by afoken (Chancellor) on Apr 04, 2024 at 22:18 UTC
    We've been revising our infrastructure plan, and we've reached agreement on the next stage of growth. This will be a server for production and test environments plus a smaller server for dev.

    Consider using virtual machines for the development server, so that you can easily create several test servers (one test server, one staging server, one victim server for each developer) in the future. There are several good solutions for running VMs that I know of:

    VMware
    I haven't used it in a long time. Several variants, some running on bare metal, some on top of Windows or Linux. The current owner is trying to squeeze every cent out of VMware users, no matter how much the brand is damaged by that behaviour.
    VirtualBox
    Runs on top of Windows, Linux, Mac OS X and Solaris. Mostly GPL software; some nice features (IIRC, USB 3.0 and Remote Desktop) are free as in beer, but not GPL. Very nice on a desktop. Managed via a native application.
    Proxmox
    Runs on top of Debian Linux (or comes bundled with Debian if you want) and provides not only VMs but also containers. Open source, based on many existing open source packages (LXC, qemu, noVNC, Perl and tons of others). Can be clustered. Management via web browser, plus a REST API (see the sketch after this list). Support costs money; if you can live with just the wiki and the forums, it's free as in beer. Highly recommended for servers.
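
    Besides the web GUI, Proxmox exposes its management functions through a REST API, which comes in handy once you want to script the creation of those per-developer victim servers. Here is a minimal Perl sketch that merely lists the VMs on one node; the host, node name and token value are placeholders, and the real token string comes from an API token you create in the Proxmox GUI first:

        #!/usr/bin/perl
        # List the VMs on one Proxmox node via its REST API.
        # Host, node name and API token below are placeholders.
        # Needs LWP::Protocol::https for the TLS connection.
        use strict;
        use warnings;
        use LWP::UserAgent;
        use JSON::PP qw(decode_json);

        my $host  = 'proxmox.example.com';
        my $node  = 'pve1';
        my $token = 'PVEAPIToken=root@pam!monitoring=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';

        # Home and office boxes usually run with a self-signed certificate,
        # so certificate verification is switched off here.
        my $ua = LWP::UserAgent->new(
            ssl_opts => { verify_hostname => 0, SSL_verify_mode => 0 },
        );

        my $res = $ua->get(
            "https://$host:8006/api2/json/nodes/$node/qemu",
            'Authorization' => $token,
        );
        die 'API request failed: ' . $res->status_line . "\n" unless $res->is_success;

        # The API wraps its result in a "data" element.
        for my $vm (@{ decode_json($res->decoded_content)->{data} }) {
            printf "%5d  %-20s %s\n", $vm->{vmid}, $vm->{name} // '-', $vm->{status} // '-';
        }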

    Real-world Proxmox:

    Home Setup
    2x HP N54L (dual-core AMD, 2.2 GHz), each with 8 GByte RAM, software RAIDs on SATA hard disks for the root and data filesystems, running seven and three LXC containers respectively.
    Old Office Server
    Core i5-6400 (4x 2.7 GHz) on a Gigabyte desktop board, 32 GByte RAM, root and some data on a RAID-5 of 3x 2 TB HDDs, other data on a second RAID-5 of 3x 2 TB HDDs, currently running seven of the 18 configured Linux VMs.
    New Office Server
    Ryzen 7 2700X (8x 3.7 GHz) on a Gigabyte desktop board, 64 GByte RAM, root and some data on a RAID-5 of 3x 2 TB SSDs, other data on a second RAID-5 of 3x 2 TB SSDs, currently running 10 of the 15 configured VMs; most of them run Windows (XP, 7 or 10), the others Linux.

    Neither the home setup nor the office servers run in a cluster, as you need at least three servers for a cluster and you should have a dedicated storage system. The home servers could really use more RAM, but they work well enough for two users. The two office machines serve about 15 users. Both setups run file and mail servers, SVN and databases. At home, Urbackup also runs in an LXC container; at work, Urbackup runs on a separate, non-virtual server. At work there are also several Jira instances, a Samba domain controller and some test servers running in VMs.

    Some lessons learned:

    • Running one software RAID-5/6 with a lot of disks (six) really sucks: each small write is amplified by roughly a factor of six, so the machine is severely limited by disk I/O. I've changed that to two software RAID-5 arrays with three disks each, which significantly reduces the I/O load (a back-of-envelope comparison follows this list). The obvious disadvantage is that only one disk per array may fail, and a failed disk needs to be replaced ASAP; a RAID-6, by contrast, degrades to a still fully working RAID-5 when one disk fails.
    • SMR hard disks are a real pain in the backside. They are just garbage; don't buy them. Even when they happen to work, their performance sucks. The new server has seen five of them fail and be replaced by new ones over the last five years.
    • SATA SSDs (Samsung 870 EVO) replacing SMR disks are a huge performance gain. Highly recommended when used with a working backup.
    • (The CMR harddisks in the old office server just work. Fast enough and with no fails.)
    • Nothing beats RAM and CPU cores except for more RAM and more CPU cores. Buy as many cores and as much RAM as possible.
    • Proxmox recommends hardware RAID, but software RAID-1 with two disks and software RAID-5 with three disks are just fine. There will be some I/O load peaks, but they rarely matter in our setup.
    • You don't need server-grade hardware to run servers 24/7. Desktop hardware in a 19-inch case with hot-swap disk cages works just fine.
    • Proper server hardware has remote management and redundant power supplies; both can be handy at times, but you can do without them.
    • RAID recovery after a power outage takes a day at full I/O load and makes the server unusable. If your machines serve more than just two home users, you want one dedicated UPS per server and one dedicated circuit breaker per UPS.
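
    The factor of six in the first point above is roughly what you get when a small write forces the rest of the stripe to be read before the data and parity chunks can be rewritten. A quick Perl back-of-envelope comparison, using the 2 TB disks from the office servers and that simple stripe-rewrite model; real md behaviour depends on caching and write size, so treat the numbers as rough estimates rather than a benchmark:

        #!/usr/bin/perl
        # Compare usable capacity, fault tolerance and small-write I/O cost
        # of the RAID layouts discussed above (simple stripe-rewrite model).
        use strict;
        use warnings;

        my $disk_tb = 2;    # disk size in TB, as in the office servers

        my @layouts = (
            { name => '1x RAID-6 of 6 disks',      disks => 6, parity => 2, arrays => 1, fail => 'any 2'       },
            { name => '1x RAID-5 of 6 disks',      disks => 6, parity => 1, arrays => 1, fail => 'any 1'       },
            { name => '2x RAID-5 of 3 disks each', disks => 3, parity => 1, arrays => 2, fail => '1 per array' },
        );

        printf "%-28s %10s %14s %16s\n", 'layout', 'usable TB', 'may fail', 'I/Os per write';
        for my $l (@layouts) {
            my $usable = $l->{arrays} * ($l->{disks} - $l->{parity}) * $disk_tb;
            my $reads  = $l->{disks} - $l->{parity} - 1;    # other data chunks in the stripe
            my $writes = 1 + $l->{parity};                  # new data chunk plus parity chunk(s)
            printf "%-28s %10d %14s %16d\n", $l->{name}, $usable, $l->{fail}, $reads + $writes;
        }

    It prints six I/Os per small write for either six-disk layout and three for the split layout, at the cost of one more disk's worth of capacity compared to a single six-disk RAID-5.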

    Alexander

    --
    Today I will gladly share my knowledge and experience, for there are no sweeter words than "I told you so". ;-)