Funtoo Compute Initiative
For many years, the Funtoo project has been using Funtoo Linux for its entire infrastructure. A few years ago, we began to allow Funtoo Linux users to use our OpenVZ-based infrastructure for hosting, development and other projects, which you can learn more about at Funtoo Hosting.
The Funtoo Compute Initiative is an effort to document how Funtoo sets up servers and its container infrastructure, covering everything from ordering bare metal to deployment, operation and maintenance. In short, it's our effort to share all our tricks with you, so you can use Funtoo Linux to quickly and inexpensively deploy very powerful hosting and container-based compute solutions.
The platform that we will deploy uses Funtoo Linux in conjunction with OpenVZ, which allows the creation of hundreds of native-speed containers that run Funtoo Linux or other Linux-based operating systems. Containers are like virtual machines, except that they run at native speed and there is a single kernel per server, rather than a kernel for each virtual machine. Containers can run any operating system -- as long as it's Linux. In addition, OpenVZ containers are now capable of running Docker.
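To give a taste of what this looks like in practice, here is a sketch of creating and entering a container with OpenVZ's vzctl tool. The container ID, hostname and OS template name below are illustrative assumptions, not part of this guide's configuration:

```console
root # vzctl create 101 --ostemplate funtoo-x86_64 --hostname testct
root # vzctl start 101
root # vzctl enter 101
```

Installing vzctl and the OpenVZ kernel is covered later in this document.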
The hardware that the Funtoo Project has used for its last two server deployments is documented below:
|Hardware|Purpose|Approx. Cost|Notes/Alternatives|
|HP ProLiant DL160 Generation 6 (G6), with 48GB RAM and two 6-core Intel Xeon X5650 processors|1U HP server, Intel Westmere CPUs, 24 CPU threads total.|$750 USD (used, off-lease, eBay)|HP DL360 G7|
|Crucial MX200 1TB SSD|Root filesystem and container storage|Approx. $330 USD|Consider a 256GB SSD for boot, root and swap, plus a second 1TB SSD for dedicated OpenVZ container use|
The above hardware allows you to build a 1U, 12-core, 24-thread, 48GB compute platform with 1TB of SSD storage for right around $1100.
After receiving the off-lease server, it's recommended that you remove the CPU heat sinks, clean them and the CPU contact surfaces with alcohol cleaning pads, and re-apply high-quality thermal grease. In my experience, the OEM thermal grease on off-lease servers often needs re-application, and doing this will help keep core temperatures well within a safe range when the server is deployed.
Hardware Deployment and Initial Setup
- I typically allocate 1GB for the /boot filesystem
- It's a good idea to have 24-48GB for swap, for emergencies
- Use ext4 for the root filesystem. OpenVZ is optimized for and tested on ext4. Don't use any other filesystem for container-related applications.
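As a sketch, a disk layout following the guidelines above might look like this in /etc/fstab. The device names and exact sizes are illustrative assumptions; adjust them to your hardware:

```
# /etc/fstab -- illustrative layout, not a definitive configuration
# <device>     <mountpoint>   <fs>    <options>        <dump> <pass>
/dev/sda1      /boot          ext2    noauto,noatime   1      2    # ~1GB /boot
/dev/sda2      none           swap    sw               0      0    # 24-48GB swap for emergencies
/dev/sda3      /              ext4    noatime          0      1    # ext4 root; OpenVZ is tested on ext4
```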
- Rather than using debian-sources, use the openvz-rhel6-stable kernel with the binary USE flag set.
- Emerge sys-cluster/vzctl and add the vz service to the default runlevel (this is covered below.)
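Assuming the kernel ebuild is named sys-kernel/openvz-rhel6-stable (check your Portage tree for the exact package name), the steps above might look like this:

```console
root # echo "sys-kernel/openvz-rhel6-stable binary" >> /etc/portage/package.use
root # emerge -av sys-kernel/openvz-rhel6-stable
root # emerge -av sys-cluster/vzctl
root # rc-update add vz default
```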
- net.eth0 will be configured using Funtoo Networking as interface-noip, and will be connected to a WAN switch.
- A bridge will be configured with net.eth0 as slave, and will be given a routable IPv4 address.
- net.eth1 will be configured using Funtoo Networking as interface-noip, and will be connected to a fast private LAN switch.
- A second bridge will be configured with net.eth1 as slave, and will be given a non-routable static IPv4 address.
The network and initial server configuration will be covered in more detail below.
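As a minimal sketch of the WAN side of this setup, Funtoo Networking reads per-interface configuration files under /etc/conf.d. The bridge name (br0) and IP addresses below are illustrative assumptions; substitute your own interface names and routable addresses:

```
# /etc/conf.d/netif.eth0 -- physical WAN NIC, carries no IP of its own
template="interface-noip"

# /etc/conf.d/netif.br0 -- bridge with eth0 as slave, holds the routable IP
template="bridge"
ipaddr="198.51.100.10/24"      # illustrative address from documentation range
gateway="198.51.100.1"
slaves="netif.eth0"
```

The LAN side (eth1 and its bridge) follows the same pattern with a non-routable static address.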
The following ebuilds are recommended as part of a Funtoo server deployment. First, sys-apps/haveged is recommended. For what purpose? Well, the Linux kernel maintains its own internal entropy (randomness) source, which is an essential component of encryption. This entropy pool is kept viable by injecting it with a lot of random timing information from user input -- but on a headless server, this entropy injection doesn't happen nearly as often as it needs to. In addition, we are going to potentially be running hundreds of OpenSSH daemons and other entropy-hungry apps. The solution is to run haveged, which will boost the available entropy on our headless server:
root # emerge -av sys-apps/haveged
root # rc-update add haveged default
Mcelog is essential for detecting ECC memory failure conditions. Any such conditions will be logged to the mcelog log file.
root # emerge -av app-admin/mcelog
root # rc-update add mcelog default
Smartmontools should be configured to monitor all your disks for signs of impending failure:
Ensure lines similar to the following appear in your smartd configuration file:

# -M test also ensures that a test alert email is sent when smartd is started
# or restarted, in addition to regular monitoring
DEVICESCAN -M test -m firstname.lastname@example.org
# Remember to put a valid email address, above ^^