'''Funtoo Xen Fun'''

We are talking about Xen on Funtoo Linux and how to set up Xen virtualization properly.
In particular, we are going to show you how much fun it is to work with Xen hosts and domUs, and how to set up a Funtoo Xen server without clicky GUIs or other frontends. This is a true hardcore Xen setup, especially suited for NOC server systems, headless servers and so on.

= Funtoo Xen Server with paravirt funtoo domU =
'''Assumptions'''

''We build a 64-bit headless Xen hypervisor, rock-stable and rocket fast, with a headless 64-bit paravirtualized Funtoo domU.''

We are '''not''' building Xen with pvgrub or HVM (which adds overhead and is slower, and is only needed if you want to install Windows).

== Building Funtoo Xen Host Dom0 ==
Most of the necessary steps are covered in the Installation Tutorial.
Here we only outline the steps that are necessary for an easy and successful dom0 setup, or where something differs from the normal installation tutorial.

Please open the [[Installation (Tutorial)|Installation Tutorial]] in a second tab and follow the next steps carefully in both.

=== Basic Funtoo Xen Host Dom0 setup ===

I recommend you use only stable packages for the host dom0!

Please consider this decision carefully. I can't stress it enough: you will avoid a lot of problems by using the stable distribution for the dom0.
The domU guests can be either unstable or hardened, as you wish! That is where the true fun begins ;-)

That's why I first edit my make.conf before building anything!

Here is how I set up the system basics:
The disk is <tt>/dev/sda</tt>:
<pre>
/dev/sda1 is our / partition, ca. 20GB, ext4
/dev/sda2 is our swap partition, ca. 4GB
/dev/sda3 holds the LVM volume group vgxen
</pre>

I am using volume groups over RAID - which I strongly advise to everybody.

Layout of the Xen-related directories:
 
<pre>
/etc/xen/      --> xend configuration files
/xen/configs/  --> my xen domU configuration files folder
/xen/kernel/   --> my xen domU kernel folder
/xen/disks/    --> my xen domU image files folder
</pre>
 
 
Edit <tt>/etc/rc.conf</tt> and uncomment the line at the bottom for rc_sys
 
<pre>rc_sys="xen0"</pre>
 
 
== Configure and Build Xen Dom0 Kernel ==
 
 
<console>
###i## emerge gentoo-sources
###i## cd /usr/src/linux
###i## make menuconfig
</console>
  
These settings are current as of 3.2.1-gentoo-r2; other versions may vary:

{{kernelop
|title=
|desc=
General setup  --->
  <*> Kernel .config support
      [*]  Enable access to .config through /proc/config.gz

Processor type and features  --->
  [*] Paravirtualized guest support  --->
      [*]  Xen guest support

Bus options (PCI etc.) --->
  [*]  Xen PCI Frontend

[*] Networking support  --->
  Networking options  --->
      <*> 802.1d Ethernet Bridging

Device Drivers  --->
  [*] Block devices (NEW)  --->
      <M>  DRBD Distributed Replicated Block Device support
      < >  Xen virtual block device support
      <*>  Xen block-device backend driver

Device Drivers  --->
  [*] Network device support  --->
      < >  Xen network device frontend driver
      <*>  Xen backend network device

Device Drivers  --->
  Graphics support  --->
      -*- Support for frame buffer devices  ---
        < >  Xen virtual frame buffer support

Device Drivers  --->
  Xen driver support  --->
      [*] Xen memory balloon driver (NEW)
      [*]  Scrub pages before returning them to system (NEW)
      <*> Xen /dev/xen/evtchn device (NEW)
      [*] Backend driver support (NEW)
      <*> Xen filesystem (NEW)
      [*]  Create compatibility mount point /proc/xen (NEW)
      [*] Create xen entries under /sys/hypervisor (NEW)
      <M> userspace grant access device driver (NEW)
      <M> User-space grant reference allocator driver (NEW)
      <M> xen platform pci device driver (NEW)

File systems  --->
  < > Ext3 journalling file system support
  <*> The Extended 4 (ext4) filesystem
  [*]  Use ext4 for ext2/ext3 file systems (NEW)
  [*]  Ext4 extended attributes (NEW)
}}
{{Fancyimportant|Don't forget to add the required drivers for your networking and SATA cards. If you use RAID, make sure to add the correct CONFIG_MD_RAID* entries to your config.}}
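If your dom0 boots from Linux software RAID, the relevant options live under ''Multiple devices driver support''; here is a sketch (the exact menu labels may vary slightly between kernel versions):

{{kernelop
|title=
|desc=
Device Drivers  --->
  [*] Multiple devices driver support (RAID and LVM)  --->
      <*>  RAID support
      <*>    RAID-1 (mirroring) mode
      <*>    RAID-4/RAID-5/RAID-6 mode
}}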

<console>
###i## make
###i## make modules_install
</console>
  
{{Fancynote| If you experience issues with connecting to the console ensure the module "xen_gntdev" (userspace grant access device driver) is loaded before the xenconsoled process is started (you may have to restart it after loading the module).}}

== Configuring Grub ==
Work has been completed to automatically enable Xen Grub entries, so after you copy your dom0 kernel, edit your /etc/boot.conf as follows:

<pre>
"Funtoo on Xen" {
+
--sql_connection=mysql://nova:foobar@localhost/nova
  type xen
+
  xenkernel xen.gz
+
  xenparams loglvl=all guest_loglvl=all xsave=1 iommu=1 iommu_inclusive_mapping=1 dom0_max_vcpus=2 dom0_vcpus_pin dom0_mem=4096M
+
  kernel kernel[-v]
+
  params += quiet
+
}
+
 
</pre>
 
</pre>
  
{{Fancynote| iommu enables the hardware IOMMU (VT-d); if your motherboard or CPU does not support VT-d, do not enable it. xsave enables saving of the extended CPU instruction state -- without it your dom0 kernel may not boot. dom0_vcpus_pin permanently assigns CPUs to dom0, increasing performance.}}
  
== Basic Networking with the Dom0 ==
 
Funtoo Linux offers its own modular, template-based network configuration system. This system offers a lot of flexibility for configuring network interfaces, essentially serving as a "network interface construction kit."
 
 
We are going to set eth0 as the default interface to the outside world for now. eth1 will be part of a bridge (xenbr0) that is going to be used by various domU guests.
 
 
Construct the interfaces:
 
 
<console>
###i## cd /etc/init.d/
###i## ln -s netif.tmpl netif.xenbr0
###i## ln -s netif.tmpl netif.extbr0
###i## ln -s netif.tmpl netif.eth0
###i## ln -s netif.tmpl netif.eth1
###i## rc-update add netif.xenbr0 sysinit
###i## rc-update add netif.extbr0 sysinit
</console>
  
Make sure dhcpcd, eth0 and eth1 don't start at boot:
<console>
###i## rc-update del dhcpcd sysinit
###i## rc-update del netif.eth0 sysinit
###i## rc-update del netif.eth1 sysinit
</console>

Configure the slave interfaces:

<console>
###i## cd /etc/conf.d/
###i## echo 'template="interface-noip"' > netif.eth0
###i## echo 'template="interface-noip"' > netif.eth1
</console>

Now, we prepare the bridges:
<console>
###i## nano netif.xenbr0
</console>
Here we set up the internal Xen bridge by editing <tt>/etc/conf.d/netif.xenbr0</tt>:

<pre>
template="bridge"
ipaddr="10.0.1.200/24"
gateway="10.0.1.1"
nameservers="10.0.1.1 10.0.1.2"
domain="funtoo.org"
slaves="netif.eth0"
</pre>

Then, we set up the external interface:
<console>
###i## nano netif.extbr0
</console>
{{Fancynote| This will look quite similar. Please watch out for the correct slave setting!}}

Now, edit <tt>/etc/conf.d/netif.extbr0</tt>:

<pre>
template="bridge"
ipaddr="10.0.1.201/24"
gateway="10.0.1.1"
nameservers="10.0.1.1 10.0.1.2"
domain="funtoo.org"
slaves="netif.eth1"
</pre>

This gives us the possibility to play around with various setups later; it's modular and easy to tweak and change.

{{Fancytip| It is probably a good idea to try starting the interfaces with rc before rebooting.}}
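For example, you can bring the bridges up by hand with their OpenRC init scripts (a quick sketch):

<console>
###i## /etc/init.d/netif.xenbr0 start
###i## /etc/init.d/netif.extbr0 start
</console>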

== Basic Networking with domU ==

The easiest way is to let Xen set up the networking. But once everything is up and running, it is no longer possible to change the routing, etc.
Letting Xen create the bridges will be obsolete in the near future, so this is not the recommended way anymore. As we already set up the bridges in the previous section, it may be enough to comment out everything network-related. If not, just un-comment the last lines.

We edit <tt>/etc/xen/xend-config.sxp</tt>:

<pre>
#### Xen config from maiwald.tk - Xen 4.x Network in bridge mode

(logfile /var/log/xen/xend.log)
(loglevel DEBUG)

(xend-relocation-server no)
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')

# The limit (in kilobytes) on the size of the console buffer
(console-limit 1024)

(dom0-min-mem 384)
(enable-dom0-ballooning no)

(total_available_memory 0)
(dom0-cpus 0)

(vncpasswd 'geheim')

# let xen create the net
# (network-script    network-bridge)
# (vif-script        vif-bridge)

# we create the net - new default in Xen 4
#
#(network-script 'network-bridge netdev=eth0 bridge=xenbr0 vifnum=0')
#(vif-script vif-bridge bridge=xenbr0)
</pre>
  
= Building the Funtoo Xen DomU Container =

We are going to build the DomU now, preparing it first from outside the domU.

=== Create an LVM volume, partition or image file ===

''This is a stub, please help complete this guide here!''

<console>
###i## vgcreate vgxen /dev/sda3
###i## lvcreate -L10G -n funtoo_root vgxen
###i## lvcreate -L1G -n funtoo_swap vgxen
###i## vgchange -a y
###i## mkfs.ext4 -L funtoo_root /dev/vgxen/funtoo_root
###i## mkswap -L funtoo_swap /dev/vgxen/funtoo_swap
###i## rc-update add lvm boot
</console>

== Basic DomU System setup ==

=== Mount the domU LVM volume, physical partition or image file ===
<console>
###i## mkdir /mnt/domu1
###i## mount /dev/vgxen/funtoo_root /mnt/domu1
###i## cd /mnt/domu1
</console>

=== Get a stage3 ===
Download a stage3 from a Funtoo mirror near you; I suggest you look at the Funtoo homepage:

<console>
###i## links http://www.funtoo.org/wiki/Download </console>
Then choose a mirror near you (I use Heanet in the EU) and look for the right stage3. I use Xeon CPUs, so I take the core2 build:

<console>
###i## wget -cv http://ftp.heanet.ie/mirrors/funtoo/funtoo-stable/x86-64bit/core2_64/stage3-latest.tar.xz </console>
Unfortunately I can't find md5sums or similar, which is really unpleasant.
  
=== Get the latest portage tree from the snapshots directory ===
 
 
<console>
 
###i## wget -cv http://ftp.heanet.ie/mirrors/funtoo/funtoo-stable/snapshots/portage-current.tar.xz </console>
 
=== Extract the stage3 ===
 
<console>
 
###i## tar xpf stage3-latest.tar.xz
 
</console>
 
 
=== Extract Portage ===
 
 
<console>
 
###i## cd usr
 
###i## tar xf ../portage-current.tar.xz
 
</console>
 
 
== Preparing the chroot environment ==
 
=== Editing the make.conf ===
 
Copy <tt>/etc/portage/make.conf</tt> from the dom0 and adjust it:
 
 
<console>
 
###i## cp /etc/portage/make.conf /mnt/domu1/etc/portage/
 
</console>
 
 
Make sure to adjust MAKEOPTS to your assigned CPUs (rule of thumb: CPU cores + 1 - yes, even in Xen):
 
<console>
 
###i## nano -w /mnt/domu1/etc/portage/make.conf
 
</console>
 
There, set the MAKEOPTS variable:

<pre>
MAKEOPTS="-j2"
</pre>
  
=== Copy <tt>/etc/resolv.conf</tt> ===
<console>
###i## cp -L /etc/resolv.conf /mnt/domu1/etc/
</console>

=== Mount proc and dev ===
<console>
###i## mount -t proc none /mnt/domu1/proc
###i## mount --rbind /dev /mnt/domu1/dev
</console>

== Building Funtoo Xen Guest(s) DomU ==

== Final DomU System setup ==
=== chroot ===
<console>
###i## chroot /mnt/domu1 /bin/bash
###i## env-update
###i## source /etc/profile
###i## export PS1="(domU-chroot) $PS1"
</console>

=== Sync portage ===
<console>
###i## emerge --sync
</console>

=== Set locales ===
<console>
###i## nano -w /etc/locale.gen
###i## locale-gen
</console>

=== Set your timezone ===
(choose your timezone in <tt>/usr/share/zoneinfo</tt>)
<console>
###i## ln -v -sf /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime
</console>

=== Edit <tt>/etc/fstab</tt> (see also the Gentoo Handbook as a reference) ===
We assume that we name our root partition <tt>xvda1</tt> and the swap partition <tt>xvda2</tt> in our domU Xen config file (we will do that later):
<console>
###i## nano -w /etc/fstab
</console>

<pre>
/dev/xvda1      /              ext4    noatime 0 1
/dev/xvda2      none          swap    sw      0 0
shm            /dev/shm      tmpfs  nodev,nosuid,noexec    0 0
</pre>
  
=== The most important stuff ===
Copy this into your terminal:

<pre>
echo '
                        Larry loves Funtoo
                      _________________________
                      < Have you mooed today? >
                      -------------------------
                        \  ^__^
                        \  (oo)\_______
                            (__)\      )\/\
                                ||----w |
                                ||    ||
.::::::::::::::: WELCOME TO ^^^^^^^^^^^^^^^^^^^:::::::::::::..
...............................................................
:########:'##::::'##:'##::: ##:'########::'#######:::'#######::.
:##.....:: ##:::: ##: ###:: ##:... ##..::'##.... ##:'##.... ##::
:##::::::: ##:::: ##: ####: ##:::: ##:::: ##:::: ##: ##:::: ##::
:######::: ##:::: ##: ## ## ##:::: ##:::: ##:::: ##: ##:::: ##::
:##...:::: ##:::: ##: ##. ####:::: ##:::: ##:::: ##: ##:::: ##::
:##::::::: ##:::: ##: ##:. ###:::: ##:::: ##:::: ##: ##:::: ##::
:##:::::::. #######:: ##::. ##:::: ##::::. #######::. #######::′
.::::::::::.......:::..::::..:::::..::::::.......::::.......::´
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
'> /etc/motd
</pre>
We are using echo instead of <tt>emerge --moo</tt>, as Larry still moos in Gentoo style.

So that's it - almost.

==== Adding networking to the domU ====

<console>
(domU-chroot) ###i## cd /etc/init.d/
(domU-chroot) ###i## ln -sf netif.tmpl netif.eth0
(domU-chroot) ###i## rc-update add netif.eth0
* service netif.eth0 added to runlevel sysinit
</console>
  
==== Now we are ready for the final setups ====
<console>
(domU-chroot) ###i## emerge eix
(domU-chroot) ###i## eix-update
Reading Portage settings ..
Building database (/var/cache/eix) ..
[0] "gentoo" /usr/portage/ (cache: metadata-md5-or-flat)
    Reading category 154|154 (100%) Finished
Applying masks ..
Calculating hash tables ..
Writing database file /var/cache/eix ..
Database contains 15729 packages in 154 categories.

(domU-chroot) # exit
exit
</console>

From here you have to decide how you want to run your domU: with unprivileged users and sudo, with a root account enabled, as a webserver, or as a firewall.
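For the unprivileged-user route, a minimal sketch could look like this (the username is just an example):

<console>
(domU-chroot) ###i## useradd -m -G users,wheel larry
(domU-chroot) ###i## passwd larry
(domU-chroot) ###i## emerge sudo
</console>
You would then allow the <tt>wheel</tt> group in <tt>/etc/sudoers</tt> via <tt>visudo</tt>.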

I always install the OpenSSH server and just place my SSH keys in there. From there the steps differ.

<console>
(dom0-xen) ###i## cp /root/.ssh/authorized_keys /mnt/domu1/root/.ssh/
</console>
Also, don't forget to enable <tt>PubkeyAuthentication</tt> in the sshd_config of your domU and set <tt>PermitRootLogin</tt> to yes!
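The relevant lines in the domU's <tt>/etc/ssh/sshd_config</tt> look like this:

<pre>
PermitRootLogin yes
PubkeyAuthentication yes
</pre>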

'''Double checking''': Does your domU use kernel modules or not? If you haven't built a monolithic kernel, you should copy the modules from the dom0 to the domU now:
<console>
(dom0-xen) ###i## mkdir /mnt/domu1/lib/modules
(dom0-xen) ###i## rsync -aP /lib/modules/2.6.38-xen-maiwald.tk-dom0 /mnt/domu1/lib/modules/
</console>

Don't forget to clean up the mounts!

<console>
(dom0-xen) ###i## cd
(dom0-xen) ###i## umount -l /mnt/domu1/proc
(dom0-xen) ###i## umount -l /mnt/domu1/dev
(dom0-xen) ###i## umount -l /mnt/domu1
</console>

=== Booting the Xen DomU Guest ===

Ok, let's try the first boot of the newly created Xen DomU in Funtoo!
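This guide does not show the domU config file itself; as a rough sketch, a minimal paravirt <tt>/xen/configs/funtoo.cfg</tt> built from the volumes, bridge and paths used above could look like this (the kernel file name and the memory/vcpus values are just examples - adjust them to your system):

<pre>
# /xen/configs/funtoo.cfg -- example paravirt domU config (adjust to your setup)
name    = "funtoo"
kernel  = "/xen/kernel/kernel-2.6.38-xen-domU"   # example name for your domU kernel image
memory  = 1024
vcpus   = 1
vif     = [ 'bridge=xenbr0' ]
disk    = [ 'phy:/dev/vgxen/funtoo_root,xvda1,w',
            'phy:/dev/vgxen/funtoo_swap,xvda2,w' ]
root    = "/dev/xvda1 ro"
extra   = "xencons=tty console=xvc0"
</pre>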

<console>
(dom0-xen) ###i## cd /xen
(dom0-xen) ###i## xm create -c configs/funtoo.cfg
</console>
Huuuuiiiii.....
<pre>
+
Using config file "./configs/funtoo.cfg".
+
Started domain funtoo (id=4)
+
[    0.000000] Linux version 2.6.38-xen-maiwald.tk-domU (root@xen) (gcc version 4.4.5 (Gentoo 4.4.5 p1.0, pie-0.4.5) ) #4 SMP Wed Feb 8 17:30:33 CET 2012
+
[    0.000000] Command line: root=/dev/xvda1 ro ip=217.x.x.211:127.0.255.255:217.x.x.1:255.255.255.0:domU:eth0:off xencons=tty console=xvc0 raid=noautodetect
+
[    0.000000] Xen-provided physical RAM map:
+
[    0.000000]  Xen: 0000000000000000 - 0000000040800000 (usable)
+
[    0.000000] NX (Execute Disable) protection: active
+
[    0.000000] last_pfn = 0x40800 max_arch_pfn = 0x80000000
+
[    0.000000] init_memory_mapping: 0000000000000000-0000000040800000
+
[    0.000000] Zone PFN ranges:
+
[    0.000000]  DMA      0x00000000 -> 0x00001000
+
[    0.000000]  DMA32    0x00001000 -> 0x00100000
+
[    0.000000]  Normal  empty
+
[    0.000000] Movable zone start PFN for each node
+
[    0.000000] early_node_map[2] active PFN ranges
+
[    0.000000]    0: 0x00000000 -> 0x00040000
+
[    0.000000]    0: 0x00040800 -> 0x00040800
+
[    0.000000] setup_percpu: NR_CPUS:16 nr_cpumask_bits:16 nr_cpu_ids:1 nr_node_ids:1
+
[    0.000000] PERCPU: Embedded 18 pages/cpu @ffff88003efc0000 s42304 r8192 d23232 u73728
+
[    0.000000] Swapping MFNs for PFN 6d6 and 3efc7 (MFN 15deb0 and 1223bf)
+
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 256109
+
[    0.000000] Kernel command line: root=/dev/xvda1 ro ip=217.171.190.211:127.0.255.255:217.171.190.1:255.255.255.0:alyx1:eth0:off xencons=tty console=xvc0 raid=noautodetect
+
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
+
[    0.000000] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes)
+
[    0.000000] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes)
+
[    0.000000] Software IO TLB disabled
+
[    0.000000] Memory: 1022732k/1056768k available (3657k kernel code, 8192k absent, 25844k reserved, 1261k data, 264k init)
+
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
+
[    0.000000] Hierarchical RCU implementation.
+
[    0.000000] NR_IRQS:96
+
[    0.000000] Xen reported: 2992.570 MHz processor.
+
[    0.000000] Console: colour dummy device 80x25
+
[    0.000000] console [tty-1] enabled
+
[    0.150003] Calibrating delay using timer specific routine.. 6018.63 BogoMIPS (lpj=30093193)
+
[    0.150008] pid_max: default: 32768 minimum: 301
+
[    0.150034] Mount-cache hash table entries: 256
+
[    0.150173] SMP alternatives: switching to UP code
+
[    0.170232] Freeing SMP alternatives: 20k freed
+
[    0.170342] Brought up 1 CPUs
+
[    0.170377] devtmpfs: initialized
+
[    0.170601] xor: automatically using best checksumming function: generic_sse
+
[    0.220004]    generic_sse:  7325.200 MB/sec
+
[    0.220008] xor: using function: generic_sse (7325.200 MB/sec)
+
[    0.220091] NET: Registered protocol family 16
+
[    0.220186] Brought up 1 CPUs
+
[    0.220217] bio: create slab <bio-0> at 0
+
[    0.390014] raid6: int64x1  2353 MB/s
+
[    0.560003] raid6: int64x2  2964 MB/s
+
[    0.730026] raid6: int64x4  2357 MB/s
+
[    0.900012] raid6: int64x8  2116 MB/s
+
[    1.070007] raid6: sse2x1    5349 MB/s
+
[    1.240009] raid6: sse2x2    5404 MB/s
+
[    1.410005] raid6: sse2x4    8597 MB/s
+
[    1.410008] raid6: using algorithm sse2x4 (8597 MB/s)
+
[    1.410022] suspend: event channel 6
+
[    1.410022] xen_mem: Initialising balloon driver.
+
[    1.410096] Switching to clocksource xen
+
[    1.410125] FS-Cache: Loaded
+
[    1.410152] CacheFiles: Loaded
+
[    1.410268] NET: Registered protocol family 2
+
[    1.410288] IP route cache hash table entries: 32768 (order: 6, 262144 bytes)
+
[    1.410391] TCP established hash table entries: 131072 (order: 9, 2097152 bytes)
+
[    1.410951] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
+
[    1.411180] TCP: Hash tables configured (established 131072 bind 65536)
+
[    1.411183] TCP reno registered
+
[    1.411186] UDP hash table entries: 512 (order: 2, 16384 bytes)
+
[    1.411192] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes)
+
[    1.411229] NET: Registered protocol family 1
+
[    1.411290] platform rtc_cmos: registered platform RTC device (no PNP device found)
+
[    1.411401] Intel AES-NI instructions are not detected.
+
[    1.411437] audit: initializing netlink socket (disabled)
+
[    1.411444] type=2000 audit(1330014455.606:1): initialized
+
[    1.412612] fuse init (API version 7.16)
+
[    1.412674] msgmni has been set to 2048
+
[    1.412990] NET: Registered protocol family 38
+
[    1.413018] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 253)
+
[    1.413024] io scheduler noop registered (default)
+
[    1.413026] io scheduler deadline registered
+
[    1.413049] io scheduler cfq registered
+
[    1.413079] Non-volatile memory driver v1.3
+
[    1.413088] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
+
[    1.413090] Hangcheck: Using getrawmonotonic().
+
[    1.419520] Switched to NOHz mode on CPU #0
+
[    1.423394] brd: module loaded
+
[    1.423665] loop: module loaded
+
[    1.423771] nbd: registered device at major 43
+
[    1.426180] Xen virtual console successfully installed as tty1
+
[    1.426216] Event-channel device installed.
+
[    1.441658] netfront: Initialising virtual ethernet driver.
+
[    1.444972] xen-vbd: registered block device major 202
+
[    1.444988] blkfront: xvda1: barriers enabled
+
[    1.450287] Setting capacity to 20971520
+
[    1.450294] xvda1: detected capacity change from 0 to 10737418240
+
[    1.450677] blkfront: xvda2: barriers enabled
+
[    1.451661] Setting capacity to 2097152
+
[    1.451665] xvda2: detected capacity change from 0 to 1073741824
+
[    1.452020] bonding: Ethernet Channel Bonding Driver: v3.7.0 (June 2, 2010)
+
[    1.452023] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details.
+
[    1.453016] i8042: No controller found
+
[    1.453066] mousedev: PS/2 mouse device common for all mice
+
[    1.453113] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
+
[    1.453145] rtc_cmos: probe of rtc_cmos failed with error -38
+
[    1.453155] md: linear personality registered for level -1
+
[    1.453158] md: raid0 personality registered for level 0
+
[    1.453161] md: raid1 personality registered for level 1
+
[    1.453163] md: raid6 personality registered for level 6
+
[    1.453166] md: raid5 personality registered for level 5
+
[    1.453168] md: raid4 personality registered for level 4
+
[    1.453224] device-mapper: uevent: version 1.0.3
+
[    1.453273] device-mapper: ioctl: 4.19.1-ioctl (2011-01-07) initialised: dm-devel@redhat.com
+
[    1.453340] device-mapper: multipath: version 1.2.0 loaded
+
[    1.453343] device-mapper: multipath round-robin: version 1.0.0 loaded
+
[    1.453345] device-mapper: multipath queue-length: version 0.1.0 loaded
+
[    1.453347] device-mapper: multipath service-time: version 0.2.0 loaded
+
[    1.453396] Netfilter messages via NETLINK v0.30.
+
[    1.453410] nf_conntrack version 0.5.0 (8192 buckets, 32768 max)
+
[    1.453478] ctnetlink v0.93: registering with nfnetlink.
+
[    1.453486] IPv4 over IPv4 tunneling driver
+
[    1.453548] TCP westwood registered
+
[    1.453550] TCP highspeed registered
+
[    1.453552] TCP htcp registered
+
[    1.453553] TCP vegas registered
+
[    1.453555] Initializing XFRM netlink socket
+
[    1.453630] NET: Registered protocol family 10
+
[    1.453803] IPv6 over IPv4 tunneling driver
+
[    1.453863] NET: Registered protocol family 17
+
[    1.453868] NET: Registered protocol family 15
+
[    1.453870] Registering the dns_resolver key type
+
[    1.550094] /usr/src/linux-2.6.38-xen/drivers/rtc/hctosys.c: unable to open rtc device (rtc0)
+
[    3.070104] IP-Config: Complete:
+
[    3.070109]      device=eth0, addr=217.171.190.211, mask=255.255.255.0, gw=217.171.190.1,
+
[    3.070116]      host=alyx1, domain=, nis-domain=(none),
+
[    3.070119]      bootserver=127.0.255.255, rootserver=127.0.255.255, rootpath=
+
[    3.070212] md: Skipping autodetection of RAID arrays. (raid=autodetect will force)
+
[    3.107309] EXT4-fs (xvda1): mounted filesystem with ordered data mode. Opts: (null)
+
[    3.107321] VFS: Mounted root (ext2 filesystem) readonly on device 202:1.
+
[    3.140059] devtmpfs: mounted
+
[    3.140239] Freeing unused kernel memory: 264k freed
+
INIT: version 2.88 booting
+
 
+
  OpenRC 0.8.3 is starting up Funtoo Linux (x86_64)
+
 
+
* Mounting /proc ...
+
[ ok ]
+
* WARNING: rc_sys not defined in rc.conf. Falling back to automatic detection
+
* Caching service dependencies ...
+
[ ok ]
+
* Mounting /sys ...
+
[ ok ]
+
* udev: /dev already mounted, skipping...
+
* Mounting /dev/pts ...
+
[ ok ]
+
* Mounting /dev/shm ...
+
[ ok ]
+
* Bringing up network interface lo ...
+
RTNETLINK answers: File exists
+
[ ok ]
+
* Bringing up network interface lo ...
+
RTNETLINK answers: File exists
+
RTNETLINK answers: File exists
+
[ ok ]
+
* Starting udevd daemon ...
+
* Populating /dev with existing devices through uevents ...
+
[ ok ]
+
* Autoloaded 0 module(s)
+
* Checking local filesystems  ...
+
funtoo_root: Superblock last write time is in the future.
+
        (by less than a day, probably due to the hardware clock being incorrectly set).  FIXED.
+
funtoo_root: clean, 173796/655360 files, 436917/2621440 blocks
+
[ ok ]
+
* Remounting root filesystem read/write ...
+
[ ok ]
+
* Updating /etc/mtab ...
+
[ ok ]
+
* Mounting local filesystems ...
+
[ ok ]
+
* Configuring kernel parameters ...
+
[ ok ]
+
* Creating user login records ...
+
[ ok ]
+
* Cleaning /var/run ...
+
[ ok ]
+
* Wiping /tmp directory ...
+
[ ok ]
+
* Setting hostname to localhost ...
+
[ ok ]
+
* Activating swap devices ...
+
[ ok ]
+
* udev: storing persistent rules ...
+
[ ok ]
+
* Initializing random number generator ...
+
[ ok ]
+
INIT: Entering runlevel: 3
+
* Mounting network filesystems ...
+
[ ok ]
+
* Generating dsa host key ...
+
Generating public/private dsa key pair.
+
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
+
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
+
The key fingerprint is:
+
25:e0:a8:05:xxxxxxxxxxxx:1c:1f:ba root@localhost
+
The key's randomart image is:
+
+--[ DSA 1024]----+
+
|  ooo.B.o        |
+
| o o *.B o .    |
+
|  . + + = =      |
+
|  o  + *      |
+
|  .  E S        |
+
|                |
+
|                |
+
|                |
+
|                |
+
+-----------------+
+
[ ok ]
+
* Generating rsa host key ...
+
Generating public/private rsa key pair.
+
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
+
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
+
The key fingerprint is:
+
22:e3:46:28:67:xxxxxxxxxxxxxxxxxxxxx:e5:c3 root@localhost
+
The key's randomart image is:
+
+--[ RSA 2048]----+
+
|.    o. ..      |
+
|oo  o ..o        |
+
|=oo  o  E      |
+
|.*oo.    .      |
+
|o *.+ . S        |
+
| + o o .        |
+
|    o            |
+
|  .            |
+
|                |
+
+-----------------+
+
[ ok ]
+
* Starting sshd ...
+
[ ok ]
+
* Starting local
+
[ ok ]
+
 
+
 
+
                        Larry loves Funtoo
+
                      _________________________
+
                      < Have you mooed today? >
+
                      -------------------------
+
                          ^__^
+
                          (oo)_______
+
                            (__)      )/
+
                                ||----w |
+
                                ||    ||
+
.::::::::::::::::::::: WELCOME TO ::::::::::::::::::::::::::..
+
...............................................................
+
:########:'##::::'##:'##::: ##:'########::'#######:::'#######::.
+
:##.....:: ##:::: ##: ###:: ##:... ##..::'##.... ##:'##.... ##::
+
:##::::::: ##:::: ##: ####: ##:::: ##:::: ##:::: ##: ##:::: ##::
+
:######::: ##:::: ##: ## ## ##:::: ##:::: ##:::: ##: ##:::: ##::
+
:##...:::: ##:::: ##: ##. ####:::: ##:::: ##:::: ##: ##:::: ##::
+
:##::::::: ##:::: ##: ##:. ###:::: ##:::: ##:::: ##: ##:::: ##::
+
:##:::::::. #######:: ##::. ##:::: ##::::. #######::. #######::′
+
.::::::::::.......:::..::::..:::::..::::::.......::::.......::´
+
This is localhost.unknown_domain (Linux x86_64 2.6.38-xen-maiwald.tk-domU) 17:27:40
+
 
+
localhost login:
+
</pre>

=== Finalizing the setup ===
Now we test if we can reach the DomU from our desktop:
<console>
(2034)-~% ssh -lroot 217.x.x.211 
The authenticity of host '217.x.x.211 (217.x.x.211)' can't be established.
RSA key fingerprint is 22:e3:xxxxxxxx:b0:3c:xxxxx:d6:e5:c3.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '217.x.x.211' (RSA) to the list of known hosts.
Enter passphrase for key '/home/mm/.ssh/id_rsa':
localhost ~ # uname -a
Linux localhost 2.6.38-xen-maiwald.tk-domU #4 SMP Wed Feb 8 17:30:33 CET 2012 x86_64 Intel(R) Xeon(R) CPU E3110 @ 3.00GHz GenuineIntel GNU/Linux
localhost ~ #
</console>

Now switch back to the Funtoo [[Installation (Tutorial)|Installation Tutorial]] and continue setting up your new domU guest like a normal Funtoo Linux system!

'''Please consider supporting this Wiki by editing this page and keeping it current!'''

Funtoo is a perfect Xen host and I recommend it to everybody as an alternative to .deb/.rpm systems.

Have fun!

[[Category:Virtualization]]
[[Category:Featured]]

OpenStack Architecture

This page exists to document OpenStack configuration.

Note that the current approach is to use devstack, which is not a good way to learn OpenStack. So much of this document will be about doing a devstack-like configuration for Funtoo.

This document will split OpenStack configuration into each architectural component, describing configuration steps for each component separately.

SQL Database

A number of OpenStack services use a SQL back-end for storing various bits of data.

While DevStack uses MySQL for its SQL deployment, multiple database back-ends are actually supported thanks to SQLAlchemy (http://sqlalchemy.org) being used behind the scenes, which is a re-targetable Python database API. Thus, it should be possible to use Postgres, etc., by simply using different connection strings. A list of SQLAlchemy connection types can be found on the SQLAlchemy documentation page at http://docs.sqlalchemy.org/en/latest/core/engines.html.
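For illustration, switching back-ends is mostly a matter of the driver prefix in the connection string; the credentials below are the placeholders used elsewhere on this page:

mysql://nova:foobar@localhost/nova
postgresql://nova:foobar@localhost/nova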

Using a single root database user account for all services is not a good policy for production deployment. Ideally, each service should have its own restricted user account with only the ability to access its own database.
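As a sketch of that policy, each service would get its own database and its own user, following the same pattern used for nova in the next section (names and passwords are illustrative):

mysql> create database glance;
mysql> grant all privileges on glance.* to glance@localhost identified by 'yourpassword';
mysql> create database keystone;
mysql> grant all privileges on keystone.* to keystone@localhost identified by 'yourpassword';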

Let's look at how each service is configured with regard to SQL:

nova

Here's how to set up a MySQL database back-end for nova and tell nova to initialize its database tables:

mysql> create database nova character set latin1;
Query OK, 1 row affected (0.02 sec)

mysql> grant all privileges on nova.* to nova@localhost identified by 'foobar';
Query OK, 0 rows affected (0.00 sec)

Now set the following connection string in /etc/nova/nova.conf:

--sql_connection=mysql://nova:foobar@localhost/nova

Note the use of the latin1 character set when we created the tables in MySQL. This is so the following command will not cause an error due to the default UTF-8 character set creating indexes that are too big for MySQL to handle:

# nova-manage db sync
2012-03-02 21:31:14 DEBUG nova.utils [-] backend <module 'nova.db.sqlalchemy.migration' from '/usr/lib64/python2.7/site-packages/nova/db/sqlalchemy/migration.pyc'> from (pid=17779) __get_backend /usr/lib64/python2.7/site-packages/nova/utils.py:602

After running the command above, you should now have all the relevant database tables created:

xdev var # mysql -u root -p nova
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16
Server version: 5.1.61-log Gentoo Linux mysql-5.1.61

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show tables;
+-------------------------------------+
| Tables_in_nova                      |
+-------------------------------------+
| agent_builds                        |
| aggregate_hosts                     |
| aggregate_metadata                  |
| aggregates                          |
| auth_tokens                         |
| block_device_mapping                |
| bw_usage_cache                      |
| certificates                        |
| compute_nodes                       |
| console_pools                       |
...

You have now validated that nova is connecting to your MySQL database correctly.

glance

From glance.openstack.org:

The Glance project provides services for discovering, registering, and retrieving virtual machine images. Glance has a RESTful API that allows querying of VM image metadata as well as retrieval of the actual image.

Glance typically uses a MySQL database called glance, although the name is configurable in the connection string.

SQL connection settings might be stored in a glance configuration file located at /opt/stack/glance/etc/glance-registry.conf. In the devstack installation process, /opt/stack/glance contains a git checkout of the glance software.

The SQL connection configuration string might look something like this:

sql_connection = mysql://glance:yourpassword@192.168.206.130/glance

More info on glance configuration is available at http://docs.openstack.org/diablo/openstack-compute/install/content/glance-registry-conf-file.html.

keystone

Keystone, the OpenStack identity service, also uses SQL. The file etc/keystone.conf in the keystone install/git repo directory is used to store the SQL configuration:

sql_connection = %SQL_CONN%

As with everything else, the SQL connection string uses SQLAlchemy syntax.
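The %SQL_CONN% placeholder is substituted by DevStack at install time; a filled-in value would look something like this (database name and credentials are illustrative):

sql_connection = mysql://keystone:yourpassword@localhost/keystone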

Quantum and Open VSwitch

Quantum is an incubated OpenStack project to provide "network connectivity as a service" between interface devices (e.g., vNICs) managed by other Openstack services (e.g., nova).

Open VSwitch is described as:

Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple physical servers similar to VMware's vNetwork distributed vswitch or Cisco's Nexus 1000V.

There is an Open VSwitch Plug-in for OpenStack Quantum which can be set up by DevStack. This plug-in uses SQL storage. The SQLAlchemy connection string is stored in (relative to git/install root) etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini. Similarly to everything but nova, the SQL connection string is stored in sql_connection = format.
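A sketch of the relevant stanza in ovs_quantum_plugin.ini might look like this (the section name and database name are assumptions and may differ between releases; credentials are placeholders):

[DATABASE]
sql_connection = mysql://ovs_quantum:yourpassword@localhost/ovs_quantum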

If the plugin is enabled, the following settings are added to nova.conf:

 --libvirt_vif_type=ethernet
 --libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtOpenVswitchDriver
 --linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
 --quantum_use_dhcp

Melange

From the Melange site:

Melange is intended to provide network information services for use across OpenStack services. The initial focus will be on IP address management (IPAM) and address discovery (DHCP/dnsmasq) functionality. Melange is intended to be a standalone service with it's own API. However, the initial use case will be to decouple existing IP management and VM address discovery from Nova and support the existing Nova networking capabilities.

Melange also uses a sql_connection = string stored in its etc/melange/melange.conf (relative to install/git root).

RabbitMQ

RabbitMQ is a reliable messaging framework used by OpenStack. Currently, it looks like only nova uses it. Nova is configured to connect to rabbitmq by setting the following lines in /etc/nova/nova.conf:

--rabbit_host=$RABBIT_HOST
--rabbit_password=$RABBIT_PASSWORD

Rabbit's password is configured using the following command, as root:

# rabbitmqctl change_password guest $RABBIT_PASSWORD

I am not yet completely sure how RabbitMQ fits into the OpenStack architecture. It may be that the supporting services expect it to be running locally, and that Nova compute nodes need to hook into a Nova instance, which would typically be running remotely. (Thus the ability for DevStack to target a remote RabbitMQ host.)

Virtualization Technology

DevStack defaults to configuring OpenStack to use libvirt with KVM, and will fall back to basic QEMU support if the kvm kernel module is not available. It also has support for using libvirt with LXC, in addition to using Xen Server directly (bypassing libvirt.)
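The hypervisor choice ends up in nova.conf as flags, in the same flag-file style used above; a sketch for the KVM case (these are the flag names from the same era of nova as the rest of this page, and may differ in later releases; use qemu or lxc as the libvirt_type accordingly):

--connection_type=libvirt
--libvirt_type=kvm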