KVM

Introduction

KVM is a hardware-accelerated full-machine hypervisor and virtualization solution included as part of kernel 2.6.20 and later. It allows you to create and start hardware-accelerated virtual machines under Linux using the QEMU tools.

Kernel Setup

To enable KVM, the following kernel config parameters should be enabled (this is based on a 3.x kernel):

Under Processor type and features, enable Paravirtualized Guest Support. Under the Paravirtualized Guest Support menu, enable any options related to KVM, such as KVM paravirtualized clock and in particular KVM Guest Support.

Under the Virtualization category from the main kernel config menu, enable Kernel-based Virtual Machine (KVM) support, and enable at least one type of KVM, either for Intel or AMD processors. It is also recommended to enable Host kernel acceleration for virtio net.

You can use modules or build these parts directly into the kernel. Build your new kernel and modules, and reboot.
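For reference, the options above correspond roughly to the following configuration symbols in your kernel .config (exact names vary somewhat between kernel versions; enable either KVM_INTEL or KVM_AMD to match your processor):

CONFIG_KVM_GUEST=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
CONFIG_VHOST_NET=m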

User-space tools

KVM is essentially a kernel-accelerated version of QEMU. To enable KVM support in the user-space tools, add the following lines to /etc/make.conf:

QEMU_SOFTMMU_TARGETS="i386 x86_64"
QEMU_USER_TARGETS="i386 x86_64"

Once the make.conf variables above are set, emerge qemu:

root #  emerge qemu
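After the emerge completes, the x86_64 system emulator should be available. A quick way to confirm the install (the version reported will depend on which qemu portage installed) is:

root # qemu-system-x86_64 -version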

Initial Setup

Prior to using KVM, load (modprobe) the appropriate acceleration module for your processor. On Intel processors:

root # modprobe kvm_intel
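
On AMD processors, load kvm_amd instead. Either module creates the /dev/kvm device node that QEMU uses for hardware acceleration:

root # modprobe kvm_amd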

Starting your first KVM virtual machine

To start your first KVM virtual machine, first download SystemRescueCd and save it as systemrescuecd.iso. Then use the following commands: the first creates a 10GB qcow2 disk image to use as the first disk, and the second starts the virtual machine, booting from the CD image:

root # qemu-img create -f qcow2 vdisk.qcow2 10G
root # qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -vnc 127.0.0.1:1 -cpu host -net nic -net user
VNC server running on `127.0.0.1:5901'

Now you should be able to use a VNC client to connect to 127.0.0.1:5901 (VNC session 1) and access your virtual machine.
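
Any VNC client will work; for example, assuming a client such as net-misc/tigervnc is installed, the display can be opened with:

root # vncviewer 127.0.0.1:1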

Networking Options

With the command above, networking is enabled, but the virtual machine sits on its own private user-mode LAN, and ping will not work. If you have a local bridge that you use for networking, the following steps will allow you to use your existing bridge to provide higher-performance and full-featured network access to your virtual machine.

First, create /etc/qemu-ifup and add the following to it. Replace brlan with the name of your bridge:

#!/bin/bash
# QEMU passes the name of the tap interface as $1.
# Bring it up with no IP address and in promiscuous mode,
# then attach it to the existing bridge.
ifconfig $1 0.0.0.0 promisc up
brctl addif brlan $1
sleep 2

Make it executable:

root # chmod +x /etc/qemu-ifup
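
Before starting the virtual machine, it may be worth confirming that the bridge actually exists and has your physical interface attached (brlan is only an example name; substitute your own):

root # brctl show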

Start the virtual machine as follows:

root # qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -cpu host -vnc 127.0.0.1:1 -net nic -net tap,id=foo

Tweaking KVM

VNC Output

If you want VNC to listen on a different IP address or port, use the format -vnc IP:vncnum, which causes VNC to listen on the specified IP address and on TCP port 5900+vncnum.
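
For example, the following (otherwise identical to the earlier invocation) makes the console reachable on TCP port 5902 on all interfaces; note that this exposes an unauthenticated VNC console, so only do this on a trusted network:

root # qemu-system-x86_64 vdisk.qcow2 -m 1024 -cpu host -vnc 0.0.0.0:2 -net nic -net user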

CPU Settings

By default, the KVM guest will have one CPU with one core. To change this, use -cpu host (to export all of the host's CPU features) and -smp cores=X,threads=Y, where X is the number of cores, and Y is the number of threads on each core. You can emulate more CPUs and cores than you actually have.
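
For example, a guest with two cores of two threads each (four virtual CPUs total) could be started like this, reusing the disk image created earlier:

root # qemu-system-x86_64 vdisk.qcow2 -m 1024 -cpu host -smp cores=2,threads=2 -vnc 127.0.0.1:1 -net nic -net user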