From Funtoo
Revision as of 16:09, February 14, 2017

How to create a Funtooimage for OpenStack

   Warning

OpenStack providers typically bill their customers by CPU time. While some providers cap this amount per billing period, many do not. Pay close attention to your terms of service and billing agreement before using Funtoo in any OpenStack cloud environment; compiling with Portage can incur significant billing. --Nimbius (talk) 15:45, February 12, 2017 (MST)

OpenStack and the cloud

OpenStack is a popular, powerful open-source virtualisation environment supported by numerous providers. In OpenStack, machines are provisioned from images that are pre-built and uploaded to the hosting provider. Attributes of these images, such as CPU count, RAM, and total allocatable disk space, can be increased or decreased. OpenStack hosting providers differ from traditional shared or KVM hosting providers in several key ways:

  • OpenStack images are expected to arrive pre-built; you cannot build a Gentoo image from System Rescue CD at the provider's panel.
  • Static assignment of IP addresses is often not supported; however, the IP your host receives by DHCP generally does not change.
  • Resources such as extra disks, NICs, switchports, and even routers can be granularly controlled with some providers.
  • VIRTIO drivers replace conventional hardware.
  • EFI is generally not supported, although OpenStack provides this support should you choose to create your own image.

creating the build environment

The following packages should be emerged to create a KVM guest.

emerge libvirt virt-manager virt-viewer libguestfs

Pay close attention to the USE flags for libguestfs, as it requires OCaml support in order to emerge the full toolset and will silently omit tools without it.
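Assuming the relevant USE flag is named ocaml (check the output of emerge -pv libguestfs on your system; the exact flag name may differ), enabling it might look like the following entry in /etc/portage/package.use:

app-emulation/libguestfs ocaml

After adding the entry, re-run the emerge command above so the full toolset is built.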

Configure your kernel to support VIRTIO devices. Many references are vague as to the specific options and encourage selecting all of them. Search for these options in menuconfig to determine which ones you'll use and which you do not need; debug logging from a KVM guest generates substantial overhead. The following are the minimum VIRTIO options to ensure a successful boot.

CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_INPUT=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
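To confirm the options took effect, you can grep the generated kernel config before building (the path assumes the usual /usr/src/linux build tree):

grep -E '^CONFIG_VIRTIO' /usr/src/linux/.config

Each of the options above should appear with =y; a missing line means the option was not enabled.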

creating the VM

KVM images are composed of pools, volumes, and domains (virtual machines). A pool is designated storage for a group of volumes, and a volume is the disk image backing a virtual machine; the domain holds the machine's configuration. To begin, select a suitable location for your pool and, inside a virsh session, run:

virsh # pool-define-as poolname dir - - - - /home/username/.local/libvirt/images
virsh # pool-build poolname
virsh # pool-start poolname
virsh # pool-autostart poolname

This creates a pool for storage in /home/username under the .local directory. The only boundary for this pool is the size of the mount point, so be careful. Continue, and create the volume to be used in the pool for your VM:

virsh # vol-create-as poolname volumename 10GiB --format qcow2

Remember: your volume format must match your provider's supported formats. For example, a qcow2 volume will not run with a provider that only accepts RAW images. You're now ready to create a VM. Exit the virsh session and execute:
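If your provider only accepts RAW images, the qcow2 volume can be converted with qemu-img (part of the qemu package; the file names here are illustrative):

qemu-img convert -f qcow2 -O raw volumename volumename.raw

The reverse direction (-f raw -O qcow2) works the same way.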

virt-install --name funtoo-test --memory 2048 --vcpus=4 --cpu host --cdrom /usr/share/systemrescuecd/systemrescuecd-x86-newest.iso  --disk vol=poolname/volumename --graphics=vnc --os-variant=virtio26 --network bridge=virbr0

With this command the VM funtoo-test is created. You'll need to specify the location of a SystemRescueCD ISO to boot. Remember: images must arrive at the hosting provider pre-built, but we can use a CD-ROM boot to build any images we need. You'll also need to specify the network bridge to use in order to pull down stages and the Portage tree.

Installing Funtoo

A VNC console with your booted SystemRescueCD should be loaded. Begin installing Funtoo normally, with the following exceptions:

  • The VM should consist of a single partition. Most cloud providers back their hosts with an object store, so the concept of partitions is largely inapplicable anyhow.
  • The kernel for the VM must include VIRTIO drivers. This goes without saying; otherwise nothing will find the disks or network.
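Under the assumption that the disk appears as /dev/vda inside the guest, a single-partition layout can be created from the SystemRescueCD shell with parted (the device name and filesystem choice are illustrative; adjust for your installation):

parted -s /dev/vda mklabel msdos
parted -s /dev/vda mkpart primary ext4 1MiB 100%
mkfs.ext4 /dev/vda1

Continue the installation into /dev/vda1 as usual.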

wrapping things up

Once your VM is completely built, it needs to be sparsified for efficiency while transferring it to a cloud provider.

virt-sparsify volumename volumename-shrunk.img

This will reduce the total footprint of your VM to around 5 gigabytes or less. If you convert your VM image to another format, be sure to take this into account in virsh by editing the domain and changing the driver format, or else the VM will not boot. You can test your VM by editing its configuration to use volumename-shrunk.img as the volume instead.
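Testing the shrunk image amounts to pointing the domain's disk at the new file. With virsh edit funtoo-test, the relevant disk element would look roughly like this (the path and format shown are illustrative; keep whatever format your image actually uses):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/home/username/.local/libvirt/images/volumename-shrunk.img'/>
  <target dev='vda' bus='virtio'/>
</disk>

If the type attribute does not match the image's real format, the guest will fail to find a bootable disk.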

automatic resizing with initramfs

In order for a provider's profiles to take effect against your image, you'll need to allow for expanding the vda partition you created in the installation step. It is this author's opinion that such resizing should not take place without the consent of the administrator, and as such should be completed with Ansible, Chef, or another configuration management system where the result and action can be properly audited. Remember: this configuration cannot auto-resize its partitions based on the OpenStack configuration that is applied to it. Whatever the slider on the website says for disk is not applied until you grow the partition; that job is up to you!
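As a sketch of what such an audited step might run, assuming a single ext4 root on /dev/vda1 and the growpart tool from cloud-utils (both are assumptions; adjust for your actual layout):

growpart /dev/vda 1
resize2fs /dev/vda1

growpart extends partition 1 to fill the enlarged disk, and resize2fs then grows the ext4 filesystem online to match the new partition size.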