LXD

From Funtoo
Latest revision as of 00:59, October 22, 2019


Introduction

LXD is a container "hypervisor" designed to provide an easy set of tools to manage Linux containers, and its development is currently being led by employees at Canonical. You can learn more about the project in general at https://linuxcontainers.org/lxd/ .

LXD is currently used for container infrastructure for Funtoo Containers and is also very well-supported under Funtoo Linux. For this reason, it's recommended that you check out LXD and see what it can do for you.

Basic Setup on Funtoo

The following steps will show you how to set up a basic LXD environment under Funtoo Linux. This environment will essentially use the default LXD setup -- a bridge called lxdbr0 will be created, which will use NAT to provide Internet access to your containers. In addition, a default storage pool will be created that simply uses your existing filesystem's storage, creating a directory at /var/lib/lxd/storage-pools/default to store any containers you create. More sophisticated configurations are possible, using dedicated network bridges connected to physical interfaces without NAT, as well as dedicated storage pools backed by ZFS or btrfs -- however, these configurations are generally overkill for a developer workstation and should only be attempted by advanced users, so we won't cover them here.

Requirements

This section will guide you through setting up the basic requirements for creating an LXD environment.

The first step is to emerge LXD and its dependencies. Perform the following:

root # emerge -a lxd

Once LXD is done emerging, we will want to enable it to start by default:

root # rc-update add lxd default

In addition, we will want to set up the following files. /etc/security/limits.conf should be modified to have the following lines in it:

   /etc/security/limits.conf
*       soft    nofile  1048576
*       hard    nofile  1048576
root    soft    nofile  1048576
root    hard    nofile  1048576
*       soft    memlock unlimited
*       hard    memlock unlimited
# End of file

In addition, we will want to map a set of user ids and group ids to the root user so they are available for its use. Do this by creating the /etc/subuid and /etc/subgid files with the following identical contents:

   /etc/subuid
root:100000:1000000000
   /etc/subgid
root:100000:1000000000
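Since the two files are identical, one way to create them both in a single step (a sketch, assuming you are running as root and that neither file already contains mappings you want to keep) is:

```shell
# Write the same root mapping to both files at once.
# WARNING: this overwrites any existing /etc/subuid and /etc/subgid.
echo "root:100000:1000000000" | tee /etc/subuid /etc/subgid
```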

At this point we are ready to initialize and start LXD.

Initialization

To configure LXD, we first need to start the daemon. This can be done as follows:

root # /etc/init.d/lxd start

At this point, we can run lxd init to run a configuration wizard to set up LXD:

root # lxd init
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]: dir ↵
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, “auto” or “none”) [default=auto]: none ↵
Would you like LXD to be available over the network? (yes/no) [default=no]: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: 
root #
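If you need to automate this setup (for example, when provisioning several hosts), newer LXD releases can read the same answers from a preseed document instead of the interactive wizard. The sketch below illustrates that mechanism; the key names follow the LXD preseed schema, and you should check them against the documentation for your installed version:

```shell
# Non-interactive variant of the dialog above (assumes your LXD
# version supports "lxd init --preseed").
lxd init --preseed <<'EOF'
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: none
storage_pools:
- name: default
  driver: dir
profiles:
- name: default
  devices:
    root:
      path: /
      pool: default
      type: disk
    eth0:
      name: eth0
      nictype: bridged
      parent: lxdbr0
      type: nic
EOF
```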

As you can see, we chose all the defaults except for:

storage pool
We opted for directory-based container storage rather than btrfs volumes. Directory-based storage may already be the default option during LXD configuration -- it depends on whether you have btrfs-tools installed.
IPv6 address
It is recommended that you turn this off unless you specifically want to experiment with IPv6 in your containers. If you leave it enabled, dhcpcd in your container may retrieve only an IPv6 address. This is fine if you have working IPv6 -- otherwise, you'll get a dud IPv6 address and no IPv4 address, and thus no network.
   Warning

As explained above, turn off IPv6 NAT in LXD unless you specifically intend to use it! It can confuse dhcpcd.

Now, we should be able to run lxc image list and get a response from the LXD daemon:

root # lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
root #

If you are able to do this, you have successfully set up the core parts of LXD! Note that we used the command lxc and not lxd like we did for lxd init -- from this point forward, you will use the lxc command. Don't let this confuse you -- the lxc command is the primary command-line tool for working with LXD containers.

Above, you can see that no images are installed. Images are installable snapshots of containers that we can use to create new containers ourselves. So, as a first step, let's go ahead and grab an image we can use. You will want to browse https://build.funtoo.org for an LXD image that will work on your computer hardware. For example, I was able to download the following file using wget:

root # wget https://build.funtoo.org/1.3-release-std/x86-64bit/intel64-skylake/lxd-intel64-skylake-1.3-release-std-2019-06-11.tar.xz

Once downloaded, this image can be installed using the following command:

root # lxc image import lxd-intel64-skylake-1.3-release-std-2019-06-11.tar.xz --alias funtoo
Image imported with fingerprint: fe4d27fb31bfaf3bd4f470e0ea43d26a6c05991de2a504b9e0a3b1a266dddc69

Now you will see the image available in our image list:

root # lxc image list
+--------+--------------+--------+--------------------------------------------+--------+----------+------------------------------+
| ALIAS  | FINGERPRINT  | PUBLIC |                DESCRIPTION                 |  ARCH  |   SIZE   |         UPLOAD DATE          |
+--------+--------------+--------+--------------------------------------------+--------+----------+------------------------------+
| funtoo | fe4d27fb31bf | no     | 1.3 Release Skylake 64bit [std] 2019-06-14 | x86_64 | 279.35MB | Jun 15, 2019 at 3:09am (UTC) |
+--------+--------------+--------+--------------------------------------------+--------+----------+------------------------------+
root #

First Container

It is now time to launch our first container. This can be done as follows:

root # lxc launch funtoo testcontainer
Creating testcontainer
Starting testcontainer

We can now see the container running via lxc list:

root # lxc list
+---------------+---------+------+-----------------------------------------------+------------+-----------+
| NAME          |  STATE  | IPV4 |                     IPV6                      |    TYPE    | SNAPSHOTS |
+---------------+---------+------+-----------------------------------------------+------------+-----------+
| testcontainer | RUNNING |      | fd42:8063:81cb:988c:216:3eff:fe2a:f901 (eth0) | PERSISTENT |           |
+---------------+---------+------+-----------------------------------------------+------------+-----------+
root #

By default, our new container testcontainer will use the default profile, which will connect an eth0 interface in the container to NAT, and will also use our directory-based LXD storage pool. We can now enter the container as follows:

root # lxc exec testcontainer -- su --login
testcontainer #

As you might have noticed, we do not yet have any IPv4 networking configured. While LXD has set up a bridge and NAT for us, along with a DHCP server to query, we actually need to use dhcpcd to query for an IP address, so let's get that set up:

testcontainer # echo "template=dhcpcd" > /etc/conf.d/netif.eth0
testcontainer # cd /etc/init.d
testcontainer # ln -s netif.tmpl netif.eth0
testcontainer # rc-update add netif.eth0 default
 * service netif.eth0 added to runlevel default
testcontainer # rc
 * rc is deprecated, please use openrc instead.
 * Caching service dependencies ...                             [ ok ]
 * Starting DHCP Client Daemon ...                              [ ok ]
 * Network dhcpcd eth0 up ...                                   [ ok ]
testcontainer # 
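The same container-side steps can also be scripted from the host with lxc exec, which may be convenient if you create containers often. This is a sketch: it assumes the container is named testcontainer and uses Funtoo's netif.tmpl OpenRC template exactly as above:

```shell
# Run the network setup inside the container from the host.
lxc exec testcontainer -- sh -c '
    echo "template=dhcpcd" > /etc/conf.d/netif.eth0
    ln -s netif.tmpl /etc/init.d/netif.eth0
    rc-update add netif.eth0 default
    /etc/init.d/netif.eth0 start
'
```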

You can now see that eth0 has a valid IPv4 address:

testcontainer # ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.212.194.17  netmask 255.255.255.0  broadcast 10.212.194.255
        inet6 fd42:8063:81cb:988c:25ea:b5bd:603d:8b0d  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::216:3eff:fe2a:f901  prefixlen 64  scopeid 0x20<link>
        ether 00:16:3e:2a:f9:01  txqueuelen 1000  (Ethernet)
        RX packets 45  bytes 5385 (5.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 2232 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

What happened is that LXD set up a DHCP server for us (dnsmasq) running on our private container network, which automatically offers IP addresses to our containers. LXD also configured iptables for us to NAT the connection, so outbound Internet access should magically work. You should also be able to see this IPv4 address listed in the container list when you type lxc list on your host system.
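A quick way to confirm that the NAT setup is actually working is to test outbound connectivity from inside the container (the target host here is just an example):

```shell
# From the host: ping an outside host from within the container.
lxc exec testcontainer -- ping -c 3 www.funtoo.org
```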

Network Troubleshooting

Note that if you are having issues with your container getting an IPv4 address via DHCP, make sure that you turn IPv6 off in LXD. Do this by running:

root # lxc network edit lxdbr0

Then, change ipv6.nat to "false" and restart lxd and the container:

root # /etc/init.d/lxd restart
root # lxc restart testcontainer

This should resolve the issue.
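If you prefer not to open an interactive editor, the same key can be changed directly with lxc network set (assuming your LXD version provides this subcommand):

```shell
# Disable IPv6 NAT on the bridge, then restart LXD and the
# container so the change takes effect.
lxc network set lxdbr0 ipv6.nat false
/etc/init.d/lxd restart
lxc restart testcontainer
```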

Finishing Steps

Assuming your network is now working, you are ready to start using your new Funtoo container. Time to have some fun! Go ahead and run ego sync and then emerge your favorite things:

testcontainer # ego sync
Syncing meta-repo
Cloning into '/var/git/meta-repo'...