<languages/>
{{Subpages|Laptop Network Setup,GPU Acceleration,GPU Acceleration (NVIDIA),What are subuids and subgids?,Administration Tutorial,Features and Concepts,Container Migration,Storage Pools}}
<translate>
== Introduction == <!--T:1-->


<!--T:122-->
{{Important|Please note that if you plan to use LXD on a laptop, you are likely using WiFi and NetworkManager, and the steps below will ''not'' work for your bridge setup. Please see [[LXD/Laptop Network Setup]] for important differences that allow you to use LXD in 'dev mode' for local use of containers for development.}}

<!--T:2-->
LXD is a container "hypervisor" designed to provide an easy set of tools to manage Linux containers, and its development is currently being led by employees at Canonical. You can learn more about the project in general at https://linuxcontainers.org/lxd/.  


<!--T:3-->
LXD is currently used as the container infrastructure for [[Special:MyLanguage/Funtoo Containers|Funtoo Containers]] and is also very well-supported under Funtoo Linux. For this reason, it's recommended that you check out LXD and see what it can do for you.


== Basic Setup on Funtoo == <!--T:4-->


<!--T:5-->
The following steps will show you how to set up a basic LXD environment under Funtoo Linux. This environment will essentially use the default LXD setup -- a bridge named {{c|lxdbr0}} will be created, which will use NAT to provide Internet access to your containers. In addition, a default storage pool will be created that will simply use your existing filesystem's storage, creating a directory at {{f|/var/lib/lxd/storage-pools/default}} to store any containers you create. More sophisticated configurations are possible that use dedicated network bridges connected to physical interfaces without NAT, as well as dedicated storage pools that use [[Special:MyLanguage/ZFS|ZFS]] and [[Special:MyLanguage/btrfs|btrfs]] -- however, these types of configurations are generally overkill for a developer workstation and should only be attempted by advanced users, so we won't cover them here.


=== Requirements === <!--T:6-->


<!--T:123-->
This section will guide you through setting up the basic requirements for creating an LXD environment.

<!--T:8-->
The first step is to emerge LXD and its dependencies. Perform the following:
</translate>
{{console|body=
# ##i##emerge -a lxd
}}
<translate>


<!--T:9-->
Once LXD is done emerging, we will want to enable it to start by default:
</translate>
{{console|body=
# ##i##rc-update add lxd default
}}
<translate>
<!--T:10-->
In addition, we will want to set up the following files. {{f|/etc/security/limits.conf}} should be modified to have the following lines in it:
</translate>
{{file|name=/etc/security/limits.conf|body=
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock unlimited
*       hard    memlock unlimited
# End of file
}}
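
You can verify that the new limits are in effect after your next login. This is just a quick sanity check; the values shown assume the configuration above:

{{console|body=
# ##i##ulimit -n
1048576
# ##i##ulimit -l
unlimited
}}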
<translate>


<!--T:11-->
Next, we come to the concept of "subuid" and "subgid". Typically, a user will get one user id and one group id. Subids and subgids allow us to assign additional UIDs and GIDs to a user for their own uses. Per the documentation:

:If some but not all of /etc/subuid, /etc/subgid, newuidmap (path lookup) and newgidmap (path lookup) can be found on the system, LXD will fail the startup of any container until this is corrected as this shows a broken shadow setup.<ref>[https://documentation.ubuntu.com/lxd/en/latest/userns-idmap/#allowed-ranges]</ref>

As noted above, it is no longer true that LXD will allocate subuids for the root user in all cases. A good default configuration (and what would be used if the conditions above were not met) is that given by the following files on the root filesystem:
</translate>
{{file|name=/etc/subuid|body=
root:1000000:1000000000
}}
{{file|name=/etc/subgid|body=
root:1000000:1000000000
}}
<translate>


<!--T:12-->
The format of both of these files is "user":"start":"count", meaning that the {{c|root}} user will be allocated "count" IDs starting at the position "start". LXD does this because these extra IDs will be used to isolate containers from the host processes and optionally from each other, by using different offsets so that their UIDs and GIDs will not overlap. For some systems (those lacking `newuidmap` and `newgidmap`, according to the documentation), LXD 5 now has these settings "baked in". For more information on subids and subgids, see [[LXD/What are subuids and subgids?|What are subuids and subgids?]].
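
For example, with the default map above, UID 0 (root) inside a container corresponds to UID 1000000 on the host, and UID 1000 inside the container corresponds to host UID 1001000. A quick way to inspect the allocation on your own system (the output shown assumes the default configuration above):

{{console|body=
# ##i##grep ^root /etc/subuid /etc/subgid
/etc/subuid:root:1000000:1000000000
/etc/subgid:root:1000000:1000000000
}}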


==== LXD-in-LXD ==== <!--T:142-->


<!--T:143-->
After the initial setup, the only time you will need to edit {{f|/etc/subuid}} and {{f|/etc/subgid}} is if you are running "LXD-in-LXD". In this case, the inner LXD (within the container) will need to reduce these subuid and subgid mappings, as the full range will not be available. This should be possible by simply using the following settings within your containerized LXD instance:


</translate>
{{file|name=/etc/subuid|body=
root:65537:70000
}}
{{file|name=/etc/subgid|body=
root:65537:70000
}}
<translate>


<!--T:144-->
If you are not using advanced features of LXD, your LXD-in-LXD instance should now have sufficient id mappings to isolate container-containers from the host-container. The only remaining step for LXD-in-LXD would be to allow the host-container to nest:

<!--T:145-->
{{console|body=
# ##i##lxc config set host-container security.nesting true
}}
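
You can confirm that the setting took effect with {{c|lxc config get}} (a quick check; {{c|host-container}} is the example container name used above):

{{console|body=
# ##i##lxc config get host-container security.nesting
true
}}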


<!--T:146-->
This will allow the host-container to contain containers itself. :)


=== Initialization === <!--T:13-->


<!--T:14-->
To configure LXD, first we will need to start LXD. This can be done as follows:


</translate>
{{console|body=
# ##i##/etc/init.d/lxd start
}}
<translate>


<!--T:15-->
At this point, we can run {{c|lxd init}} to run a configuration wizard to set up LXD:
</translate>
{{console|body=
# ##i##lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: ##i##↵
Do you want to configure a new storage pool? (yes/no) [default=yes]: ##i##↵
Name of the new storage pool [default=default]: ##i##↵
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: ##i##dir ↵
Would you like to connect to a MAAS server? (yes/no) [default=no]: ##i##↵
Would you like to create a new local network bridge? (yes/no) [default=yes]: ##i##↵
What should the new bridge be called? [default=lxdbr0]: ##i##↵
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: ##i##↵
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: ##i##none ↵
Would you like LXD to be available over the network? (yes/no) [default=no]: ##i##↵
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] ##i##↵
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: ##i##↵
#
}}
<translate>


<!--T:16-->
As you can see, we chose all the defaults ''except'' for:
;storage pool: We opted for using directory-based container storage rather than [[Special:MyLanguage/btrfs|btrfs]] volumes. Directory-based may be the default option during LXD configuration -- it depends on whether you have btrfs-tools installed or not.
;IPv6 address: It is recommended that you turn this off unless you specifically want to play with IPv6 in your containers. Leaving it enabled may cause {{c|dhcpcd}} in your container to retrieve only an IPv6 address. This is great if you have IPv6 working -- otherwise, you'll get a dud IPv6 address and no IPv4 address, and thus no network.


<!--T:125-->
{{Warning|As explained above, turn off IPv6 NAT in LXD unless you specifically intend to use it! It can confuse {{c|dhcpcd}}.}}


<!--T:126-->
If you choose to output the ''YAML lxd init preseed'' configuration from the {{c|lxd init}} command above, here is a config example:


<!--T:127-->
{{file|name=lxc_init_preseed.yaml|lang=YAML|desc=lxc init preseed config example|body=
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: funtoo
      type: disk
  name: default
projects: []
cluster: null
}}
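
If you save a preseed file like this, it can also be replayed to configure a fresh LXD host non-interactively. This is optional; the file name below is just the example above:

{{console|body=
# ##i##lxd init --preseed < lxc_init_preseed.yaml
}}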


<!--T:128-->
Now, we should be able to run {{c|lxc image list}} and get a response from the LXD daemon:

</translate>
{{console|body=
# ##i##lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
{{!}} ALIAS {{!}} FINGERPRINT {{!}} PUBLIC {{!}} DESCRIPTION {{!}} ARCH {{!}} SIZE {{!}} UPLOAD DATE {{!}}
+-------+-------------+--------+-------------+------+------+-------------+
#
}}
<translate>
<!--T:18-->
If you are able to do this, you have successfully set up the core parts of LXD! Note that we used the command {{c|lxc}} and not {{c|lxd}} like we did for {{c|lxd init}} -- from this point forward, you will use the {{c|lxc}} command. Don't let this
confuse you -- the {{c|lxc}} command is the primary command-line tool for working with LXD containers.


<!--T:19-->
Above, you can see that no images are installed. Images are installable snapshots of containers that we can use to create new containers ourselves. So, as a first step, let's go ahead and grab an image we can use. You will want to browse https://build.funtoo.org for an LXD image that will work on your computer hardware. For example, I was able to download
the following file using {{c|wget}}:


</translate>
{{console|body=
# ##i##wget https://build.funtoo.org/1.4-release-std/x86-64bit/amd64-zen2/2022-04-13/lxd-amd64-zen2-1.4-release-std-2022-04-13.tar.xz
}}
<translate>


<!--T:20-->
Once downloaded, this image can be installed using the following command:
</translate>
{{console|body=
# ##i##lxc image import lxd-amd64-zen2-1.4-release-std-2022-04-13.tar.xz --alias funtoo
Image imported with fingerprint: fe4d27fb31bfaf3bd4f470e0ea43d26a6c05991de2a504b9e0a3b1a266dddc69
}}
<translate>


<!--T:21-->
Now you will see the image available in our image list:


</translate>
{{console|body=
# ##i##lxc image list
+--------+--------------+--------+-----------------------------------------+--------------+-----------+----------+------------------------------+
{{!}} ALIAS  {{!}} FINGERPRINT  {{!}} PUBLIC {{!}}              DESCRIPTION              {{!}} ARCHITECTURE {{!}}  TYPE     {{!}}  SIZE    {{!}}        UPLOAD DATE           {{!}}
+--------+--------------+--------+-----------------------------------------+--------------+-----------+----------+------------------------------+
{{!}} funtoo {{!}} b8eaa7e30c14 {{!}} no     {{!}} 1.4 Release Zen2 64bit [std] 2022-04-13 {{!}} x86_64       {{!}} CONTAINER {{!}} 342.13MB {{!}} Apr 29, 2022 at 9:36pm (UTC) {{!}}
+--------+--------------+--------+-----------------------------------------+--------------+-----------+----------+------------------------------+
#
}}
<translate>
=== First Container === <!--T:24-->


<!--T:25-->
It is now time to launch our first container. This can be done as follows:


</translate>
{{console|body=
# ##i##lxc launch funtoo testcontainer
Creating testcontainer
Starting testcontainer
}}
<translate>


<!--T:26-->
We can now see the container running via {{c|lxc list}}:


</translate>
{{console|body=
# ##i##lxc list
+---------------+---------+------+-----------------------------------------------+------------+-----------+
{{!}} NAME          {{!}}  STATE  {{!}} IPV4 {{!}}                    IPV6                      {{!}}    TYPE    {{!}} SNAPSHOTS {{!}}
+---------------+---------+------+-----------------------------------------------+------------+-----------+
{{!}} testcontainer {{!}} RUNNING {{!}}      {{!}} fd42:8063:81cb:988c:216:3eff:fe2a:f901 (eth0) {{!}} PERSISTENT {{!}}           {{!}}
+---------------+---------+------+-----------------------------------------------+------------+-----------+
#
}}
<translate>
<!--T:29-->
By default, our new container {{c|testcontainer}} will use the default profile, which will connect an {{c|eth0}} interface in the container to NAT, and will also use our directory-based LXD storage pool. We can now enter the container as follows:


</translate>
 
{{console|body=
# ##i##lxc exec testcontainer -- su --login
%testcontainer%
}}
<translate>


<!--T:30-->
As you might have noticed, we do not yet have any IPv4 networking configured. While LXD has set up a bridge and NAT for us, along with a DHCP server to query, we actually need to use {{c|dhcpcd}} to query for an IP address, so let's get that set up:


</translate>
{{console|body=
%testcontainer% ##i##echo "template=dhcpcd" > /etc/conf.d/netif.eth0
%testcontainer% ##i##cd /etc/init.d
%testcontainer% ##i##ln -s netif.tmpl netif.eth0
%testcontainer% ##i##rc-update add netif.eth0 default
 * service netif.eth0 added to runlevel default
%testcontainer% ##i##rc
 * rc is deprecated, please use openrc instead.
 * Caching service dependencies ...                            [ ##g##ok ##!g##]
 * Starting DHCP Client Daemon ...                             [ ##g##ok ##!g##]
 * Network dhcpcd eth0 up ...                                  [ ##g##ok ##!g##]
%testcontainer% ##i##
}}
<translate>


<!--T:31-->
You can now see that {{c|eth0}} has a valid IPv4 address:
</translate>
{{console|body=
%testcontainer% ##i##ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 10.212.194.17  netmask 255.255.255.0  broadcast 10.212.194.255
        inet6 fd42:8063:81cb:988c:25ea:b5bd:603d:8b0d  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::216:3eff:fe2a:f901  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:2a:f9:01  txqueuelen 1000  (Ethernet)
        RX packets 45  bytes 5385 (5.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 2232 (2.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
}}
<translate>


<!--T:129-->
What happened is that LXD set up a DHCP server for us (dnsmasq) running on our private container network, and automatically offers IP addresses to our containers. It also configured iptables for us to NAT the connection so that outbound Internet access should magically work. You should also be able to see this IPv4 address listed in the container list when you type {{c|lxc list}} on your host system.
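
If you are curious, you can inspect the bridge that LXD manages, including its address and NAT settings, from the host (a quick peek; the addresses on your system will differ):

{{console|body=
# ##i##lxc network show lxdbr0
}}
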
=== Network Troubleshooting === <!--T:130-->


<!--T:131-->
{{warning|Proceed with caution. These are advanced LXC commands that can modify the state of your LXD local network setup. In extreme scenarios, NAT routing for all of your LXD containers can easily break if the wrong LXC config key value is changed.}}


<!--T:132-->
Note that if you are having issues with your container getting an IPv4 address via DHCP, make sure that you turn IPv6 off in LXD. Do this by running:

<!--T:133-->
{{console|body=
###i## lxc network edit lxdbr0
}}


<!--T:134-->
Then, change the {{c|ipv6.nat}} YAML key value to {{c|"false"}} and restart LXD and the test container:

<!--T:135-->
{{console|body=
###i## /etc/init.d/lxd restart
###i## lxc restart testcontainer
}}
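
Alternatively, the same change can be made without opening an editor by setting the key directly; this uses the standard {{c|lxc network set}} and {{c|lxc network get}} subcommands:

{{console|body=
###i## lxc network set lxdbr0 ipv6.nat false
###i## lxc network get lxdbr0 ipv6.nat
false
}}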


<!--T:136-->
This should resolve the issue.
 
<!--T:137-->
{{important|If you have initialized your LXD cluster by turning off IPv6 with the ''What IPv6 address should be used?'' option set to {{c|none}}, then the {{c|ipv6.nat}} key will not even be present in our LXC local network's {{c|lxdbr0}} bridge interface.


<!--T:138-->
Be careful not to tamper with the {{c|ipv4.nat}} setting or all LXD container NAT routing will break, meaning no network traffic within running and new LXD containers will be able to route externally to the Internet!}}


<!--T:139-->
Here is some example YAML of a default {{c|lxdbr0}} LXC local network bridge device for reference:


<!--T:140-->
{{file|name=lxc_lxdbr0.yaml|lang=YAML|desc=lxc network edit lxdbr0|body=
config:
  ipv4.address: 10.239.139.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/profiles/default
- /1.0/instances/funtoo-livestream
managed: true
status: Created
locations:
- none
}}


=== Container Console setup === <!--T:130-->

LXD containers are created with a special `/dev` in which there is only one tty console, `/dev/console`, which can be used to watch the bootstrap phase or to enter the container from the container console.


This can be done in two ways:

* at launch time:


<!--T:133-->
{{console|body=
###i## lxc launch -p default -p net macaroni:funtoo/next-stage3 testcontainer -e  --console
}}

* through the `lxc console` command:


<!--T:133-->
{{console|body=
###i## lxc console testcontainer
}}


To use this feature correctly, you need to set up the `/etc/inittab` file and disable the standard c* entries:


<!--T:133-->
{{console|body=
###i## sed -i /etc/inittab <nowiki>-e 's|^#x1|x1|g' -e '/^c[0-9].*/d'</nowiki>
}}


The images exposed over the Simplestreams Server of Funtoo Macaroni are already configured correctly and ready to use.


=== Finishing Steps === <!--T:141-->


<!--T:32-->
Assuming your network is now working, you are ready to start using your new Funtoo container. Time to have some fun! Go ahead and run {{c|ego sync}} and then emerge your favorite things:
</translate>
{{console|body=
%testcontainer% ##i##ego sync
##g##Syncing meta-repo
Cloning into '/var/git/meta-repo'...
}}
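
Once the sync completes, you can confirm that the container can build and install software by emerging a small package (the package chosen here is arbitrary):

{{console|body=
%testcontainer% ##i##emerge -a htop
}}
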
<translate>
<!--T:121-->
[[Category:Containers]]
[[Category:LXD]]
[[Category:Official Documentation]]
[[Category:First Steps]]
</translate>
 
Latest revision as of 20:56, February 17, 2024

Other languages:

Introduction

   Important

Please note that if you plan to use LXD on a laptop, you are likely using WiFi and NetworkManager, and the steps below will not work for you bridge setup. Please see LXD/Laptop Network Setup for important differences to allow you to use LXD in 'dev mode' for local use of containers for development.

LXD is a container "hypervisor" designed to provide an easy set of tools to manage Linux containers, and its development is currently being led by employees at Canonical. You can learn more about the project in general at https://linuxcontainers.org/lxd/.

LXD is currently used for container infrastructure for Funtoo Containers and is also very well-supported under Funtoo Linux. For this reason, it's recommended that you check out LXD and see what it can do for you.

Basic Setup on Funtoo

The following steps will show you how to set up a basic LXD environment under Funtoo Linux. This environment will essentially use the default LXD setup -- a will be created called lxdbr0 which will use NAT to provide Internet access to your containers. In addition, a default storage pool will be created that will simply use your existing filesystem's storage, creating a directory at /var/lib/lxd/storage-pools/default to store any containers you create. More sophisticated configurations are possible that use dedicated network bridges connected to physical interfaces without NAT, as well as dedicated storage pools that use ZFS and btrfs -- however, these types of configurations are generally overkill for a developer workstation and should only be attempted by advanced users. So we won't cover them here.

Requirements

This section will guide you through setting up the basic requirements for creating an LXD environment.

The first step is to emerge LXD and its dependencies. Perform the following:

root # emerge -a lxd

Once LXD is done emerging, we will want to enable it to start by default:

root # rc-update add lxd default

In addition, we will want to set up the following files. /etc/security/limits.conf should be modified to have the following lines in it:

   /etc/security/limits.conf
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock unlimited
*       hard    memlock unlimited
# End of file

Next, we come to the concept of "subuid" and "subgid". Typically, a user will get one user id and one group id. Subids and subgids allow us to assign additional UIDs and GIDs to a user for their own uses. Per the documentation:

If some but not all of /etc/subuid, /etc/subgid, newuidmap (path lookup) and newgidmap (path lookup) can be found on the system, LXD will fail the startup of any container until this is corrected as this shows a broken shadow setup.[1]

As noted above, it is no longer true that LXD will allocate subuids for the root user in all cases. A good default configuration (and what would be used if the conditions above were not met) is that given by the following files on the root filesystem:

   /etc/subuid
root:1000000:1000000000
   /etc/subgid
root:1000000:1000000000

The format of both of these files is "user":"start":"count", meaning that the named user will be allocated "count" IDs beginning at position "start". LXD uses these extra IDs to isolate containers from host processes and, optionally, from each other, by using different offsets so that their UIDs and GIDs do not overlap. For some systems (those lacking `newuidmap` and `newgidmap`, according to the documentation), LXD 5 now has these settings "baked in". For more information on subids and subgids, see What are subuids and subgids?.
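
As a quick worked example of the format: with the entry shown above, root's subordinate UIDs begin at 1000000 and run through 1000999999 (start + count - 1), one billion IDs in total:

root # cat /etc/subuid
root:1000000:1000000000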

LXD-in-LXD

After the initial setup, the only time you will need to edit /etc/subuid and /etc/subgid is if you are running "LXD-in-LXD". In this case, the inner LXD (within the container) will need reduced subuid and subgid mappings, as the full range will not be available. This should be possible by simply using the following settings within your containerized LXD instance:

   /etc/subuid
root:65537:70000
   /etc/subgid
root:65537:70000

If you are not using advanced features of LXD, your LXD-in-LXD instance should now have sufficient id mappings to isolate container-containers from the host-container. The only remaining step for LXD-in-LXD would be to allow the host-container to nest:

root # lxc config set host-container security.nesting true

This will allow host-container to contain containers itself. :)
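
You can confirm the flag took effect with lxc config get (using the same example container name as above):

root # lxc config get host-container security.nesting
true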

Initialization

To configure LXD, first we will need to start LXD. This can be done as follows:

root # /etc/init.d/lxd start

At this point, we can run lxd init to run a configuration wizard to set up LXD:

root # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir ↵
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none ↵
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
root #

As you can see, we chose all the defaults except for:

storage pool
We opted to use directory-based container storage rather than btrfs volumes. Directory-based storage may be the default option during LXD configuration -- it depends on whether you have btrfs-tools installed.
IPv6 address
It is recommended that you turn this off unless you specifically want to experiment with IPv6 in your containers. If you leave it enabled, dhcpcd in your container may retrieve only an IPv6 address. This is great if you have IPv6 working -- otherwise, you'll get a dud IPv6 address and no IPv4 address, and thus no network.
   Warning

As explained above, turn off IPv6 NAT in LXD unless you specifically intend to use it! It can confuse dhcpcd.

If you chose to have lxd init print the YAML preseed configuration at the end of the wizard above, here is an example of what it looks like:

   lxc_init_preseed.yaml (YAML source code) - lxc init preseed config example
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
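
If you save a preseed like the one above to a file, you can replay it non-interactively on a fresh LXD instance -- lxd init accepts preseed YAML on standard input via its --preseed flag. The filename below is simply the example name from the listing above:

root # lxd init --preseed < lxc_init_preseed.yaml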

Now, we should be able to run lxc image list and get a response from the LXD daemon:

root # lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
root #

If you are able to do this, you have successfully set up the core parts of LXD! Note that we used the command lxc and not lxd like we did for lxd init -- from this point forward, you will use the lxc command. Don't let this confuse you -- the lxc command is the primary command-line tool for working with LXD containers.
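
As another quick sanity check, running lxc info with no arguments queries the daemon for its server configuration and environment; if it prints YAML details rather than an error, the client and daemon are communicating:

root # lxc info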

Above, you can see that no images are installed. Images are installable snapshots of containers that we can use to create new containers ourselves. So, as a first step, let's go ahead and grab an image we can use. You will want to browse https://build.funtoo.org for an LXD image that will work on your computer hardware. For example, I was able to download the following file using wget:

root # wget https://build.funtoo.org/1.4-release-std/x86-64bit/amd64-zen2/2022-04-13/lxd-amd64-zen2-1.4-release-std-2022-04-13.tar.xz

Once downloaded, this image can be installed using the following command:

root # lxc image import lxd-amd64-zen2-1.4-release-std-2022-04-13.tar.xz --alias funtoo
Image imported with fingerprint: fe4d27fb31bfaf3bd4f470e0ea43d26a6c05991de2a504b9e0a3b1a266dddc69

Now you will see the image available in your image list:

root # lxc image list
+--------+--------------+--------+-----------------------------------------+--------------+-----------+----------+------------------------------+

First Container

It is now time to launch our first container. This can be done as follows:

root # lxc launch funtoo testcontainer
Creating testcontainer
Starting testcontainer

We can now see the container running via lxc list:

root # lxc list
+---------------+---------+------+-----------------------------------------------+------------+-----------+
| NAME          |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |
+---------------+---------+------+-----------------------------------------------+------------+-----------+
| testcontainer | RUNNING |      | fd42:8063:81cb:988c:216:3eff:fe2a:f901 (eth0) | PERSISTENT |           |
+---------------+---------+------+-----------------------------------------------+------------+-----------+
root #

By default, our new container testcontainer will use the default profile, which will connect an eth0 interface in the container to NAT, and will also use our directory-based LXD storage pool. We can now enter the container as follows:

root # lxc exec testcontainer -- su --login
testcontainer #

As you might have noticed, we do not yet have any IPv4 networking configured. While LXD has set up a bridge and NAT for us, along with a DHCP server to query, we actually need to use dhcpcd to query for an IP address, so let's get that set up:

testcontainer # echo "template=dhcpcd" > /etc/conf.d/netif.eth0
testcontainer # cd /etc/init.d
testcontainer # ln -s netif.tmpl netif.eth0
testcontainer # rc-update add netif.eth0 default
 * service netif.eth0 added to runlevel default
testcontainer # rc
 * rc is deprecated, please use openrc instead.
 * Caching service dependencies ...                             [ ok ]
 * Starting DHCP Client Daemon ...                              [ ok ]
 * Network dhcpcd eth0 up ...                                   [ ok ]
testcontainer # 

You can now see that eth0 has a valid IPv4 address:

testcontainer # ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.212.194.17  netmask 255.255.255.0  broadcast 10.212.194.255
        inet6 fd42:8063:81cb:988c:25ea:b5bd:603d:8b0d  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::216:3eff:fe2a:f901  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:2a:f9:01  txqueuelen 1000  (Ethernet)
        RX packets 45  bytes 5385 (5.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 2232 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

What happened is that LXD set up a DHCP server for us (dnsmasq) running on our private container network, and automatically offers IP addresses to our containers. It also configured iptables for us to NAT the connection so that outbound Internet access should magically work. You should also be able to see this IPv4 address listed in the container list when you type lxc list on your host system.
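
If you are curious how the NAT is implemented, you can list the rules LXD added to the nat table. This is a hypothetical example -- the subnet will match whatever was assigned to lxdbr0 on your system, and the exact rule text varies by LXD version (newer releases may use nftables instead of iptables):

root # iptables -t nat -S POSTROUTING
-P POSTROUTING ACCEPT
-A POSTROUTING -s 10.212.194.0/24 ! -d 10.212.194.0/24 -j MASQUERADE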

Network Troubleshooting

   Warning

Proceed with caution. These are advanced LXC commands that can modify the state of your local LXD network setup. In extreme scenarios, NAT routing for all of your LXD containers can easily break if the wrong config key value is changed.

Note that if you are having issues with your container getting an IPv4 address via DHCP, make sure that you turn IPv6 off in LXD. Do this by running:

root # lxc network edit lxdbr0

Then, change the ipv6.nat YAML key's value to "false" and restart LXD and the test container:

root # /etc/init.d/lxd restart
root # lxc restart testcontainer

This should resolve the issue.
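
You can also confirm the current value of the key without opening an editor, using lxc network get; after the change above, it should print false:

root # lxc network get lxdbr0 ipv6.nat
false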

   Important

If you initialized LXD with IPv6 turned off (answering none to the What IPv6 address should be used? question), then the ipv6.nat key will not even be present in the lxdbr0 bridge's configuration.

Be careful not to tamper with the ipv4.nat setting, or all LXD container NAT routing will break, meaning no network traffic from running or new LXD containers will be able to route out to the Internet!

Here is some example YAML of a default lxdbr0 LXC local network bridge device for reference:

   lxc_lxdbr0.yaml (YAML source code) - lxc network edit lxdbr0
config:
  ipv4.address: 10.239.139.1/24
  ipv4.nat: "true"
  ipv6.address: none
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/profiles/default
- /1.0/instances/funtoo-livestream
managed: true
status: Created
locations:
- none

Container Console Setup

LXD containers are created with a special `/dev` containing only a single tty, `/dev/console`, which can be used to watch the bootstrap phase or to enter the container from its console.

This can be done in two ways:

  • at launch time, using the --console flag:
root # lxc launch -p default -p net macaroni:funtoo/next-stage3 testcontainer -e --console
  • afterwards, through the `lxc console` command:
root # lxc console testcontainer
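
Once attached with lxc console, you can detach again without stopping the container by pressing Ctrl+a q.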

To use this feature correctly, you need to set up the `/etc/inittab` file appropriately and disable the standard c* entries:

root # sed -i /etc/inittab -e 's|^#x1|x1|g' -e '/^c[0-9].*/d'

The images exposed via the Funtoo Macaroni Simplestreams server are already configured this way and ready to use.

Finishing Steps

Assuming your network is now working, you are ready to start using your new Funtoo container. Time to have some fun! Go ahead and run ego sync and then emerge your favorite things:

testcontainer # ego sync
Syncing meta-repo
Cloning into '/var/git/meta-repo'...