Linux Containers

Linux Containers, or LXC, is a Linux feature that allows Linux to run one or more isolated virtual systems (with their own network interfaces, process namespace, user namespace, and power state) using a single Linux kernel on a single server.

== Status ==
As of Linux kernel 3.1.5, LXC is usable for isolating your own private workloads from one another. It is not yet ready to isolate potentially malicious users from one another or the host system. For a more mature containers solution that is appropriate for hosting environments, see [[OpenVZ]].


LXC containers don't yet have their own system uptime, and they see everything that's in the host's {{c|dmesg}} output, among other things. But in general, the technology works.
 
== Basic Info ==
 
 
* Linux Containers are based on:
** Kernel namespaces for resource isolation
** CGroups for resource limitation and accounting
 
{{Package|app-emulation/lxc}} is the userspace tool for Linux containers.


== Control groups ==
== Control groups ==


* Control groups (cgroups) have been in the kernel since 2.6.24, implemented as a virtual filesystem
** Allows aggregation of tasks and their children
** Subsystems (cpuset, memory, blkio, ...)
** accounting - to measure how much resources certain systems use
** resource limiting - groups can be set to not exceed a set memory limit
** prioritization - some groups may get a larger share of CPU
** control - freezing/unfreezing of cgroups, checkpointing and restarting
** No disk quota limitation (use an image file, LVM, XFS, or directory tree quota instead)


== Subsystems ==

{{console|body=
###i## cat /proc/cgroups
#subsys_name	hierarchy	num_cgroups	enabled
cpuset
cpu
cpuacct
memory
devices
freezer
blkio
perf_event
hugetlb
}}


#cpuset -> limits tasks to specific CPU/CPUs
#cpu -> CPU shares
#cpuacct -> CPU accounting
#memory -> memory and swap limitation and accounting
#devices -> device allow/deny list
#freezer -> suspend/resume tasks
#blkio -> I/O prioritization (weight, throttle, ...)
#perf_event -> support for per-cpu per-cgroup monitoring of perf_events
#hugetlb -> cgroup resource controller for HugeTLB pages

== Configuring the Funtoo Host System ==


=== Install LXC kernel ===
Any kernel beyond 3.1.5 will probably work. Personally I prefer {{Package|sys-kernel/gentoo-sources}}, as these support all the namespaces without sacrificing xfs, FUSE, or NFS support, for example. These checks were introduced starting from kernel 3.5; this could also mean that the user namespace is not working optimally.
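If you still need to install the kernel sources, the usual Gentoo/Funtoo flow is roughly as follows (a sketch; the exact version and eselect slot number will differ on your system):

```shell
# Install the kernel sources and point the /usr/src/linux symlink at them
emerge -av sys-kernel/gentoo-sources
eselect kernel list          # shows the available source trees
eselect kernel set 1         # select the tree you just installed
cd /usr/src/linux
make menuconfig              # enable the namespace and cgroup options for LXC
```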


* User namespace (EXPERIMENTAL) depends on EXPERIMENTAL and on UIDGID_CONVERTED
** config UIDGID_CONVERTED
*** True if all of the selected software components are known to have uid_t and gid_t converted to kuid_t and kgid_t where appropriate and are otherwise safe to use with the user namespace.
**** Networking - depends on NET_9P = n
**** Filesystems - 9P_FS = n, AFS_FS = n, AUTOFS4_FS = n, CEPH_FS = n, CIFS = n, CODA_FS = n, FUSE_FS = n, GFS2_FS = n, NCP_FS = n, NFSD = n, NFS_FS = n, OCFS2_FS = n, XFS_FS = n
**** Security options - Grsecurity - GRKERNSEC = n (if applicable)


** As of kernel 3.10.xx, all of the above options are safe to use with user namespaces except for XFS_FS; therefore, with kernel >=3.10.xx you should answer XFS_FS = n if you want user namespace support.
** In your kernel source directory, check init/Kconfig to find out what UIDGID_CONVERTED depends on.


==== Kernel configuration ====

These options should be enabled in your kernel to be able to take full advantage of LXC:

* General setup
** CONFIG_NAMESPACES
*** CONFIG_UTS_NS
*** CONFIG_IPC_NS
*** CONFIG_PID_NS
*** CONFIG_NET_NS
*** CONFIG_USER_NS
** CONFIG_CGROUPS
*** CONFIG_CGROUP_DEVICE
*** CONFIG_CGROUP_SCHED
*** CONFIG_CGROUP_CPUACCT
*** CONFIG_CGROUP_MEM_RES_CTLR (in 3.6+ kernels it's called CONFIG_MEMCG)
*** CONFIG_CGROUP_MEM_RES_CTLR_SWAP (in 3.6+ kernels it's called CONFIG_MEMCG_SWAP)
*** CONFIG_CPUSETS (on multiprocessor hosts)
* Networking support
** Networking options
*** CONFIG_VLAN_8021Q
* Device Drivers
** Character devices
*** Unix98 PTY support
**** CONFIG_DEVPTS_MULTIPLE_INSTANCES
** Network device support
*** Network core driver support
**** CONFIG_VETH
**** CONFIG_MACVLAN


Once you have lxc installed, you can then check your kernel config with:
{{console|body=
# ##i##CONFIG=/path/to/config /usr/sbin/lxc-checkconfig
}}


=== Emerge lxc ===
{{console|body=
# ##i##emerge app-emulation/lxc
}}
 
=== Configure Networking For Container ===

Typically, one uses a bridge to allow containers to connect to the network. This is how to do it under Funtoo Linux:

# Create a bridge using the Funtoo network configuration scripts. Name the bridge something like {{c|brwan}} (using {{c|/etc/init.d/netif.brwan}}). Configure your bridge to have an IP address.
# Make your physical interface, such as {{c|eth0}}, an interface with no IP address (use the Funtoo {{c|interface-noip}} template).
# Make {{c|netif.eth0}} a slave of {{c|netif.brwan}} in {{c|/etc/conf.d/netif.brwan}}.
# Enable your new bridged network and make sure it is functioning properly on the host.


You will now be able to configure LXC to automatically add your container's virtual ethernet interface to the bridge when it starts, which will connect it to your network.
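As a concrete sketch, the bridge setup above might look like this with Funtoo's netif scripts (interface names and addresses here are examples; substitute your own network settings):

```shell
# Funtoo convention: each netif init script is a symlink to the netif template
ln -s /etc/init.d/netif.tmpl /etc/init.d/netif.brwan
ln -s /etc/init.d/netif.tmpl /etc/init.d/netif.eth0

# /etc/conf.d/netif.brwan -- the bridge carries the IP and enslaves eth0
cat > /etc/conf.d/netif.brwan <<'EOF'
template="bridge"
ipaddr="192.168.1.10/24"
gateway="192.168.1.1"
slaves="netif.eth0"
EOF

# /etc/conf.d/netif.eth0 -- physical interface with no IP of its own
cat > /etc/conf.d/netif.eth0 <<'EOF'
template="interface-noip"
EOF

rc-update add netif.brwan default
```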
== Setting up a Funtoo Linux LXC Container ==

Here are the steps required to get Funtoo Linux running <i>inside</i> a container. The steps below show you how to set up a container using an existing Funtoo Linux OpenVZ template. It is now also possible to use [[Metro]] to build an lxc container tarball directly, which will save you manual configuration steps and will provide an {{c|/etc/fstab.lxc}} file that you can use for your host container config. See [[Metro Recipes]] for info on how to use Metro to generate an lxc container.


=== Create and Configure Container Filesystem ===

# Start with a Funtoo LXC template, and unpack it to a directory such as {{c|/lxc/funtoo0/rootfs/}}.
# Ensure the {{c|c1}} line is uncommented (enabled) and the {{c|c2}} through {{c|c6}} lines are disabled in {{c|/lxc/funtoo0/rootfs/etc/inittab}}.


That's almost all you need to get the container filesystem ready to start.
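A minimal sketch of those two steps (the template filename here is illustrative; use whatever Funtoo template tarball you downloaded):

```shell
mkdir -p /lxc/funtoo0/rootfs
tar xpf funtoo-lxc-template.tar.xz -C /lxc/funtoo0/rootfs/

# Comment out the c2..c6 getty respawn lines, leaving c1 enabled
sed -i -e 's/^c\([2-6]\):/#c\1:/' /lxc/funtoo0/rootfs/etc/inittab
```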
=== Create Container Configuration Files ===
Create the following files:


==== {{c|/var/lib/lxc/funtoo0.config}} ====

Read {{c|man 5 lxc.conf}} to get more information about the Linux container configuration file.
<pre>
## Container
lxc.utsname = funtoo0
lxc.rootfs = /lxc/funtoo0/rootfs/
lxc.arch = x86_64
lxc.console = /var/log/lxc/funtoo0.console  # enable if you want to log the container's console
#lxc.tty = 6  # if you plan to use the container with physical terminals (eg F1..F6)
lxc.tty = 0   # set to 0 if you don't plan to use the container with a physical terminal; also comment out the c1 to c6 respawn lines in the container's /etc/inittab (e.g. c1:12345:respawn:/sbin/agetty 38400 tty1 linux)
lxc.pts = 1024

## Capabilities (man 7 capabilities)
lxc.cap.drop = sys_module mac_admin mac_override sys_time audit_control audit_write syslog sys_admin sys_rawio
# note: dropping the sys_resource capability causes failure to ssh into the funtoo LXC container.

## Devices
lxc.cgroup.devices.deny = a # Deny access to all devices

# Allow mknod for all devices (but not using them)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m

lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero
lxc.cgroup.devices.allow = c 1:7 rwm # /dev/full
lxc.cgroup.devices.allow = c 1:8 rwm # /dev/random
lxc.cgroup.devices.allow = c 1:9 rwm # /dev/urandom
#lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0 # ttys not required if you have lxc.tty = 0
#lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1
#lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2
#lxc.cgroup.devices.allow = c 4:3 rwm # /dev/tty3
lxc.cgroup.devices.allow = c 5:0 rwm # /dev/tty
lxc.cgroup.devices.allow = c 5:1 rwm # /dev/console
lxc.cgroup.devices.allow = c 5:2 rwm # /dev/ptmx
lxc.cgroup.devices.allow = c 10:229 rwm # /dev/fuse
lxc.cgroup.devices.allow = c 136:* rwm # /dev/pts/*
lxc.cgroup.devices.allow = c 254:0 rwm # /dev/rtc0

## Limits
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.cpuset.cpus = 0         # limits container to CPU0
lxc.cgroup.memory.limit_in_bytes = 1024M
lxc.cgroup.memory.memsw.limit_in_bytes = 2048M
lxc.cgroup.blkio.weight = 500      # requires cfq block scheduler

## Filesystems
lxc.mount.entry = proc proc proc nosuid,nodev,noexec 0 0
lxc.mount.entry = sysfs sys sysfs nosuid,nodev,noexec,ro 0 0
lxc.mount.entry = shm dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,mode=1777,size=256m,create=dir 0 0  # /dev/shm size should be less than half of your container memory limit
lxc.mount.entry = tmpfs run tmpfs nosuid,nodev,noexec,mode=0755,size=128m 0 0
lxc.mount.entry = tmpfs tmp tmpfs nosuid,nodev,noexec,mode=1777,size=128m 0 0

## Example of having /var/tmp/portage as tmpfs in the container
#lxc.mount.entry = tmpfs var/tmp/portage tmpfs defaults,size=8g,uid=250,gid=250,mode=0775 0 0
## Example of a bind mount
#lxc.mount.entry = /srv/funtoo0 /lxc/funtoo0/rootfs/srv/funtoo0 none defaults,bind 0 0

## Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.2/24
lxc.network.ipv4.gateway = 192.168.1.1
lxc.network.hwaddr = #put your LXC container MAC address here, otherwise you will get a random one
lxc.network.name = eth0
</pre>


Read {{c|man 7 capabilities}} to get more information about Linux capabilities.


Above, use the following command to generate a random MAC for {{c|lxc.network.hwaddr}}:


{{console|body=
###i## openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//'
}}


It is a very good idea to assign a static MAC address to your container using {{c|lxc.network.hwaddr}}. If you don't, LXC will auto-generate a new random MAC every time your container starts, which may confuse network equipment that expects MAC addresses to remain constant.


It may happen that you aren't able to start your LXC container with a MAC address generated as above. For those who run into that problem, here is a little script that derives the container's MAC address from its IP address. Save the following code as {{c|/etc/lxc/hwaddr.sh}}, make it executable, and run it as {{c|/etc/lxc/hwaddr.sh xxx.xxx.xxx.xxx}}, where xxx.xxx.xxx.xxx represents your container's IP. <br>{{c|/etc/lxc/hwaddr.sh}}:


<pre>
#!/bin/bash
IP=$*
HA=`printf "02:00:%x:%x:%x:%x" ${IP//./ }`
echo $HA
</pre>
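For example, here is what the script computes for a container IP of 192.168.1.2 (note that the ${IP//./ } expansion is a bashism, so the script must run under bash, not plain sh):

```shell
IP=192.168.1.2
# Split the IP on dots and render each octet as hex, under a locally-administered 02:00 prefix
HA=$(printf "02:00:%x:%x:%x:%x" ${IP//./ })
echo "$HA"    # prints 02:00:c0:a8:1:2
```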


== LXC Networking ==

* veth - Virtual Ethernet (bridge)
* vlan - vlan interface (requires a device able to do vlan tagging)
* macvlan (MAC-address based virtual lan tagging) has 3 modes:
** private
** vepa (Virtual Ethernet Port Aggregator)
** bridge
* phys - dedicated host NIC

[https://blog.flameeyes.eu/2010/09/linux-containers-and-networking Linux Containers and Networking]

Enable routing on the host. By default, Linux workstations and servers have IPv4 forwarding disabled:
{{console|body=
###i## echo "1" > /proc/sys/net/ipv4/ip_forward
###i## cat /proc/sys/net/ipv4/ip_forward
1
}}
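The echo above only lasts until reboot. To make forwarding persistent, set it in {{c|/etc/sysctl.conf}} (the standard sysctl configuration location):

```shell
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p    # apply the setting now
```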


== Initializing and Starting the Container ==
You will probably need to set the root password for the container before you can log in. You can use chroot to do this quickly:


{{console|body=
###i## chroot /lxc/funtoo0/rootfs
(chroot) ###i## passwd
New password: XXXXXXXX
Retype new password: XXXXXXXX
passwd: password updated successfully
(chroot) ###i## exit
}}


Now that the root password is set, run:


{{console|body=
###i## lxc-start -n funtoo0 -d
}}


The {{c|-d}} option will cause it to run in the background.


To attach to the console:


{{console|body=
###i## lxc-console -n funtoo0
}}


You should now be able to log in and use the container. In addition, the container should now be accessible on the network.

To attach directly to the container:
{{console|body=
###i## lxc-attach -n funtoo0
}}


To stop the container:


{{console|body=
###i## lxc-stop -n funtoo0
}}


Ensure that networking is working from within the container while it is running, and you're good to go!
== Starting LXC container during host boot ==

Create a symlink in {{c|/etc/init.d/}} to {{c|/etc/init.d/lxc}} that reflects your container's name, then add it to the default runlevel:

{{console|body=
###i## ln -s /etc/init.d/lxc /etc/init.d/lxc.funtoo0
###i## rc-update add lxc.funtoo0 default
###i## rc
 * Starting funtoo0 ...                  [ ok ]
}}


== LXC Bugs/Missing Features ==

This section is devoted to documenting issues with the current implementation of LXC and its associated tools. We will be gradually expanding this section with detailed descriptions of problems, their status, and proposed solutions.

=== reboot ===

By default, LXC does not support rebooting a container from within: the container will simply stop, and the host will not know to restart it.


=== PID namespaces ===

Process ID namespaces are functional, but the container can still see the CPU utilization of the host via the system load (i.e. in {{c|top}}).


=== /dev/pts newinstance ===

* Some changes may be required to the host to properly implement "newinstance" {{c|/dev/pts}}. See [https://bugzilla.redhat.com/show_bug.cgi?id=501718 this Red Hat bug].


=== lxc-create and lxc-destroy ===
* Re-starting a container can result in a failure as network resources are tied up from the already-defunct instance: [http://www.mail-archive.com/lxc-devel@lists.sourceforge.net/msg00824.html]


=== graceful shutdown ===

* To gracefully shut down a container, its init system needs to properly handle the SIGPWR signal ({{c|kill -PWR}}).
* For Funtoo/Gentoo, make sure your container's /etc/inittab contains:
** pf:12345:powerwait:/sbin/halt
* For Debian/Ubuntu, make sure your container's /etc/inittab contains:
** pf::powerwait:/sbin/shutdown -t1 -a -h now
** and also comment out any other line starting with pf:powerfail (such as pf::powerwait:/etc/init.d/powerfail start); these are used if you have a UPS monitoring daemon installed!


=== funtoo ===

* Our udev should be updated to contain {{c|-lxc}} in scripts. (This has been done as of 02-Nov-2011, so it should be resolved. But it is not fixed in our openvz templates, so we need to regen them in a few days.)
* Our openrc should be patched to handle the case where it cannot mount tmpfs, and gracefully handle this situation somehow. (A work-around is in our docs above: mount tmpfs to {{c|/libexec/rc/init.d}} using the container-specific {{c|fstab}} file on the host.)
* Emerging udev within a container can/will fail when realdev is run if a device node (such as /dev/console) cannot be created because there are no mknod capabilities within the container. This should be fixed.


== References ==

* {{c|man 7 capabilities}}
* {{c|man 5 lxc.conf}}


== Links ==

* [[LXC_Fun|Fun stuff with LXC]]
* [[LXD|Try LXD, which brings more features to LXC]]
* There are a number of additional lxc features that can be enabled via patches: [http://lxc.sourceforge.net/patches/linux/3.0.0/3.0.0-lxc1/]
* [https://wiki.ubuntu.com/UserNamespace Ubuntu User Namespaces page]
[[Category:HOWTO]]
[[Category:Virtualization]]
[[Category:pt_BR]]
Latest revision as of 13:17, September 9, 2017

Linux Containers, or LXC, is a Linux feature that allows Linux to run one or more isolated virtual systems (with their own network interfaces, process namespace, user namespace, and power state) using a single Linux kernel on a single server.

Status

As of Linux kernel 3.1.5, LXC is usable for isolating your own private workloads from one another. It is not yet ready to isolate potentially malicious users from one another or the host system. For a more mature containers solution that is appropriate for hosting environments, see OpenVZ.

LXC containers don't yet have their own system uptime, and they see everything that's in the host's dmesg output, among other things. But in general, the technology works.

Basic Info

  • Linux Containers are based on:
    • Kernel namespaces for resource isolation
    • CGroups for resource limitation and accounting

app-emulation/lxc is the userspace tool for Linux containers

Control groups

  • Control groups (cgroups) in kernel since 2.6.24
    • Allows aggregation of tasks and their children
    • Subsystems (cpuset, memory, blkio,...)
    • accounting - to measure how much resources certain systems use
    • resource limiting - groups can be set to not exceed a set memory limit
    • prioritization - some groups may get a larger share of CPU
    • control - freezing/unfreezing of cgroups, checkpointing and restarting
    • No disk quota limitation ( -> image file, LVM, XFS, directory tree quota,...)

Subsystems


root # cat /proc/cgroups 
subsys_name	hierarchy	num_cgroups	enabled
cpuset	
cpu	
cpuacct	
memory	
devices	
freezer	
blkio	
perf_event
hugetlb
  1. cpuset -> limits tasks to specific CPU/CPUs
  2. cpu -> CPU shares
  3. cpuacct -> CPU accounting
  4. memory -> memory and swap limitation and accounting
  5. devices -> device allow deny list
  6. freezer -> suspend/resume tasks
  7. blkio -> I/O priorization (weight, throttle, ...)
  8. perf_event -> support for per-cpu per-cgroup monitoring perf_events
  9. hugetlb -> cgroup resource controller for HugeTLB pages hugetlb

Configuring the Funtoo Host System

Install LXC kernel

Any kernel beyond 3.1.5 will probably work. Personally I prefer No results as these have support for all the namespaces without sacrificing the xfs, FUSE or NFS support for example. These checks were introduced later starting from kernel 3.5, this could also mean that the user namespace is not working optimally.

  • User namespace (EXPERIMENTAL) depends on EXPERIMENTAL and on UIDGID_CONVERTED
    • config UIDGID_CONVERTED
      • True if all of the selected software components are known to have uid_t and gid_t converted to kuid_t and kgid_t where appropriate and are otherwise safe to use with the user namespace.
        • Networking - depends on NET_9P = n
        • Filesystems - 9P_FS = n, AFS_FS = n, AUTOFS4_FS = n, CEPH_FS = n, CIFS = n, CODA_FS = n, FUSE_FS = n, GFS2_FS = n, NCP_FS = n, NFSD = n, NFS_FS = n, OCFS2_FS = n, XFS_FS = n
        • Security options - Grsecurity - GRKERNSEC = n (if applicable)
    • As of 3.10.xx kernel, all of the above options are safe to use with User namespaces, except for XFS_FS, therefore with kernel >=3.10.xx, you should answer XFS_FS = n, if you want User namespaces support.
    • in your kernel source directory, you should check init/Kconfig and find out what UIDGID_CONVERTED depends on

Kernel configuration

These options should be enable in your kernel to be able to take full advantage of LXC.

  • General setup
    • CONFIG_NAMESPACES
      • CONFIG_UTS_NS
      • CONFIG_IPC_NS
      • CONFIG_PID_NS
      • CONFIG_NET_NS
      • CONFIG_USER_NS
    • CONFIG_CGROUPS
      • CONFIG_CGROUP_DEVICE
      • CONFIG_CGROUP_SCHED
      • CONFIG_CGROUP_CPUACCT
      • CONFIG_CGROUP_MEM_RES_CTLR (in 3.6+ kernels it's called CONFIG_MEMCG)
      • CONFIG_CGROUP_MEM_RES_CTLR_SWAP (in 3.6+ kernels it's called CONFIG_MEMCG_SWAP)
      • CONFIG_CPUSETS (on multiprocessor hosts)
  • Networking support
    • Networking options
      • CONFIG_VLAN_8021Q
  • Device Drivers
    • Character devices
      • Unix98 PTY support
        • CONFIG_DEVPTS_MULTIPLE_INSTANCES
    • Network device support
      • Network core driver support
        • CONFIG_VETH
        • CONFIG_MACVLAN

Once you have lxc installed, you can then check your kernel config with:

root # CONFIG=/path/to/config /usr/sbin/lxc-checkconfig

Emerge lxc

root # emerge app-emulation/lxc

Configure Networking For Container

Typically, one uses a bridge to allow containers to connect to the network. This is how to do it under Funtoo Linux:

  1. create a bridge using the Funtoo network configuration scripts. Name the bridge something like brwan (using /etc/init.d/netif.brwan). Configure your bridge to have an IP address.
  2. Make your physical interface, such as eth0, an interface with no IP address (use the Funtoo interface-noip template.)
  3. Make netif.eth0 a slave of netif.brwan in /etc/conf.d/netif.brwan.
  4. Enable your new bridged network and make sure it is functioning properly on the host.

You will now be able to configure LXC to automatically add your container's virtual ethernet interface to the bridge when it starts, which will connect it to your network.

Setting up a Funtoo Linux LXC Container

Here are the steps required to get Funtoo Linux running inside a container. The steps below show you how to set up a container using an existing Funtoo Linux OpenVZ template. It is now also possible to use Metro to build an lxc container tarball directly, which will save you manual configuration steps and will provide an /etc/fstab.lxc file that you can use for your host container config. See Metro Recipes for info on how to use Metro to generate an lxc container.

Create and Configure Container Filesystem

  1. Start with a Funtoo LXC template, and unpack it to a directory such as /lxc/funtoo0/rootfs/
  2. Ensure c1 line is uncommented (enabled) and c2 through c6 lines are disabled in /lxc/funtoo0/rootfs/etc/inittab

That's almost all you need to get the container filesystem ready to start.

Create Container Configuration Files

Create the following files:

/var/lib/lxc/funtoo0.config

Read "man 5 lxc.conf" , to get more information about linux container configuration file.

## Container
lxc.utsname = funtoo0
lxc.rootfs = /lxc/funtoo0/rootfs/
lxc.arch = x86_64
lxc.console = /var/log/lxc/funtoo0.console  # uncomment if you want to log containers console
#lxc.tty = 6  # if you plan to use container with physical terminals (eg F1..F6)
lxc.tty = 0  # set to 0 if you dont plan to use the container with physical terminal, also comment out in your containers /etc/inittab  c1 to c6 respawns (e.g. c1:12345:respawn:/sbin/agetty 38400 tty1 linux)
lxc.pts = 1024

## Capabilities (man 7 capabilities)
lxc.cap.drop = sys_module mac_admin mac_override sys_time audit_control audit_write syslog sys_admin sys_rawio
# note: dropping capability sys_resource, causes failure to ssh into funtoo LXC container.

## Devices
lxc.cgroup.devices.deny = a # Deny access to all devices

# Allow to mknod all devices (but not using them)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m

lxc.cgroup.devices.allow = c 1:3 rwm # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm # /dev/zero
lxc.cgroup.devices.allow = c 1:7 rwm # /dev/full
lxc.cgroup.devices.allow = c 1:8 rwm # /dev/random
lxc.cgroup.devices.allow = c 1:9 rwm # /dev/urandom
#lxc.cgroup.devices.allow = c 4:0 rwm # /dev/tty0 # ttys not required if you have lxc.tty = 0
#lxc.cgroup.devices.allow = c 4:1 rwm # /dev/tty1
#lxc.cgroup.devices.allow = c 4:2 rwm # /dev/tty2
#lxc.cgroup.devices.allow = c 4:3 rwm # /dev/tty3
lxc.cgroup.devices.allow = c 5:0 rwm # /dev/tty
lxc.cgroup.devices.allow = c 5:1 rwm # /dev/console
lxc.cgroup.devices.allow = c 5:2 rwm # /dev/ptmx
lxc.cgroup.devices.allow = c 10:229 rwm # /dev/fuse
lxc.cgroup.devices.allow = c 136:* rwm # /dev/pts/*
lxc.cgroup.devices.allow = c 254:0 rwm # /dev/rtc0

## Limits
lxc.cgroup.cpu.shares = 1024
lxc.cgroup.cpuset.cpus = 0        # limits container to CPU0
lxc.cgroup.memory.limit_in_bytes = 1024M
lxc.cgroup.memory.memsw.limit_in_bytes = 2048M
lxc.cgroup.blkio.weight = 500      # requires cfq block scheduler

## Filesystems
lxc.mount.entry = proc proc proc nosuid,nodev,noexec  0 0
lxc.mount.entry = sysfs sys sysfs nosuid,nodev,noexec,ro 0 0
lxc.mount.entry = shm dev/shm tmpfs rw,nosuid,nodev,noexec,relatime,mode=1777,size=256m,create=dir 0 0   # /dev/shm size should be less then half of your container memory limit
lxc.mount.entry = tmpfs run tmpfs nosuid,nodev,noexec,mode=0755,size=128m 0 0
lxc.mount.entry = tmpfs tmp tmpfs nosuid,nodev,noexec,mode=1777,size=128m 0 0

##Example of having /var/tmp/portage as tmpfs in container 
#lxc.mount.entry = tmpfs var/tmp/portage tmpfs defaults,size=8g,uid=250,gid=250,mode=0775 0 0
##Example of bind mount
#lxc.mount.entry = /srv/funtoo0 /lxc/funtoo0/rootfs/srv/funtoo0 none defaults,bind 0 0

## Network configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.ipv4 = 192.168.1.2/24
lxc.network.ipv4.gateway = 192.168.1.1
lxc.network.hwaddr = # put your LXC container's MAC address here; otherwise you will get a random one
lxc.network.name = eth0

Read "man 7 capabilities" for more information about Linux capabilities.

Use the following command to generate a random MAC address for lxc.network.hwaddr:

root # openssl rand -hex 6

It is a very good idea to assign a static MAC address to your container using lxc.network.hwaddr. If you don't, LXC will auto-generate a new random MAC every time your container starts, which may confuse network equipment that expects MAC addresses to remain constant.

In some cases the container may fail to start with a MAC address generated this way. If you run into this problem, the following script derives a stable MAC address from the container's IP address. Save it as /etc/lxc/hwaddr.sh, make it executable, and run it as /etc/lxc/hwaddr.sh xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is your container's IP address.
/etc/lxc/hwaddr.sh:

#!/bin/bash
# Derive a locally-administered MAC address from an IPv4 address.
# Note: bash is required for the ${IP//./ } substitution.
IP=$*
HA=$(printf "02:00:%02x:%02x:%02x:%02x" ${IP//./ })
echo "$HA"
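As a sketch of what the script computes, each octet of a container IP address is converted to two zero-padded hex digits and embedded in a locally-administered MAC address (the IP 192.168.1.2 below is an assumed example):

```shell
#!/bin/bash
IP="192.168.1.2"                                     # example container IP (an assumption)
HA=$(printf "02:00:%02x:%02x:%02x:%02x" ${IP//./ })  # 192->c0, 168->a8, 1->01, 2->02
echo "$HA"                                           # prints 02:00:c0:a8:01:02
```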

== LXC Networking ==

* veth - Virtual Ethernet (bridge)
* vlan - VLAN interface (requires a device capable of VLAN tagging)
* macvlan (MAC address-based virtual LAN tagging) has 3 modes:
** private
** vepa (Virtual Ethernet Port Aggregator)
** bridge
* phys - dedicated host NIC
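The veth type used in the example configuration above expects a bridge (br0) on the host. A minimal sketch of creating one with iproute2, assuming the host NIC is named eth0; use your network configuration scripts to make this persistent:

```shell
# create the bridge, attach the host NIC to it, and bring it up
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev br0 up
```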

== Linux Containers and Networking ==

Enable routing on the host. By default, Linux workstations and servers have IPv4 forwarding disabled:

root # echo "1" > /proc/sys/net/ipv4/ip_forward
root # cat /proc/sys/net/ipv4/ip_forward
1
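Writing to /proc does not survive a reboot. To make forwarding persistent, you can set the corresponding key in /etc/sysctl.conf (a sketch; the path is standard on Funtoo):

```shell
# persist IPv4 forwarding across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p    # apply the settings immediately
```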

== Initializing and Starting the Container ==

You will probably need to set the root password for the container before you can log in. You can use chroot to do this quickly:

root # chroot /lxc/funtoo0/rootfs
(chroot) # passwd
New password: XXXXXXXX
Retype new password: XXXXXXXX
passwd: password updated successfully
(chroot) # exit

Now that the root password is set, run:

root # lxc-start -n funtoo0 -d

The -d option causes the container to run in the background (daemonized).

To attach to the console:

root # lxc-console -n funtoo0

You should now be able to log in and use the container. In addition, the container should now be accessible on the network.

To attach directly to the container:

root # lxc-attach -n funtoo0

To stop the container:

root # lxc-stop -n funtoo0

Ensure that networking is working from within the container while it is running, and you're good to go!

== Starting LXC container during host boot ==

To start a container automatically when the host boots, create a symlink to the /etc/init.d/lxc init script named after your container, then add it to the default runlevel:

root # ln -s /etc/init.d/lxc /etc/init.d/lxc.funtoo0
root # rc-update add lxc.funtoo0 default
root # rc
 * Starting funtoo0 ...                  [ ok ]

== LXC Bugs/Missing Features ==

This section is devoted to documenting issues with the current implementation of LXC and its associated tools. We will be gradually expanding this section with detailed descriptions of problems, their status, and proposed solutions.

=== PID namespaces ===

Process ID namespaces are functional, but the container can still see the CPU utilization of the host via the system load average (e.g. in top).

=== /dev/pts newinstance ===

* Some changes may be required on the host to properly implement "newinstance" /dev/pts. See this Red Hat bug.

=== lxc-create and lxc-destroy ===

* LXC's helper shell scripts are poorly designed and can easily destroy data; avoid using lxc-create and lxc-destroy.

=== network initialization and cleanup ===

* Restarting a container can fail because network resources are still tied up by the already-defunct instance: [1]

=== graceful shutdown ===

* To gracefully shut down a container, its init system needs to handle the SIGPWR signal (as sent by kill -PWR).
* For Funtoo/Gentoo, make sure your container's /etc/inittab contains:
** pf:12345:powerwait:/sbin/halt
* For Debian/Ubuntu, make sure your container's /etc/inittab contains:
** pf::powerwait:/sbin/shutdown -t1 -a -h now
** and also comment out any other line starting with pf:powerfail (such as pf::powerwait:/etc/init.d/powerfail start); these are used if you have a UPS monitoring daemon installed.

=== funtoo ===

* Our udev should be updated to contain -lxc in scripts. (This has been done as of 02-Nov-2011, so it should be resolved; it is not yet fixed in our OpenVZ templates, which will need to be regenerated in a few days.)
* Our OpenRC should be patched to gracefully handle the case where it cannot mount tmpfs. (A work-around appears in our docs above: mount a tmpfs at /libexec/rc/init.d using the container-specific fstab file on the host.)
* Emerging udev within a container can fail when realdev is run, if a device node (such as /dev/console) cannot be created because the container lacks mknod capabilities. This should be fixed.

== References ==

* man 7 capabilities
* man 5 lxc.conf

== Links ==