KVM is a hardware-accelerated full-machine hypervisor and virtualization solution included as part of kernel 2.6.20 and later. It allows you to create and start hardware-accelerated virtual machines under Linux using the QEMU tools.
To enable KVM, the following kernel config parameters should be enabled (this is based on a 3.x kernel):
Under Processor type and features, enable Linux guest support, then enable the following options under Processor type and features-->Linux guest support:
--- Linux guest support
[*]   Enable paravirtualization code
[ ]     paravirt-ops debugging (NEW)
[*]   Paravirtualization layer for spinlocks
[ ]   Xen guest support (NEW)
[*]   KVM Guest support (including kvmclock) (NEW)
[ ]     Enable debug information for KVM Guests in debugfs (NEW)
[ ]   Paravirtual steal time accounting (NEW)
Under the Virtualization category from the main kernel config menu, enable Kernel-based Virtual Machine (KVM) support, and enable at least one type of KVM, either for Intel or AMD processors. It is also recommended to enable the Host kernel accelerator for virtio net.
--- Virtualization
<M>   Kernel-based Virtual Machine (KVM) support
<M>     KVM for Intel processors support
<M>     KVM for AMD processors support
[*]     KVM legacy PCI device assignment support
<M>   Host kernel accelerator for virtio net
You can use modules or build these parts directly into the kernel. Build your new kernel and modules, and reboot.
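Before rebooting into the new kernel, it is worth confirming that the host CPU supports hardware virtualization at all. A quick sanity check (the vmx flag indicates Intel VT-x, svm indicates AMD-V):

```shell
# Count virtualization flags in /proc/cpuinfo: vmx = Intel VT-x, svm = AMD-V.
# A result of 0 means this CPU cannot provide KVM hardware acceleration.
grep -E -c 'vmx|svm' /proc/cpuinfo
```

If the count is 0, also check your BIOS/UEFI settings, as virtualization extensions are sometimes disabled there by default.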
If you are using QEMU on your desktop, add the following USE flag to /etc/portage/make.conf:
This will enable good mouse support for QEMU on your desktop.
Now, emerge qemu:
# emerge qemu
Prior to using KVM, modprobe the appropriate accelerated driver for your processor. For Intel:

# modprobe kvm_intel

For AMD:

# modprobe kvm_amd
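To confirm that the module loaded and that KVM is ready for use, you can check the loaded modules and the /dev/kvm device node:

```shell
# List the loaded KVM modules (expect kvm plus kvm_intel or kvm_amd):
lsmod | grep '^kvm'

# The device node QEMU uses for hardware acceleration should now exist:
ls -l /dev/kvm
```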
Starting your first KVM virtual machine
To start your first KVM virtual machine, first download SystemRescueCD and save it as systemrescuecd.iso. Then use the following commands: the first creates a 10GB qcow2 disk image to use for the first disk, and the second starts the virtual machine, booting from the CD:
# qemu-img create -f qcow2 vdisk.qcow2 10G
# qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -vnc 127.0.0.1:1 -cpu host -net nic -net user
VNC server running on `127.0.0.1:5901'
Now you should be able to use a VNC client to connect to 127.0.0.1:5901 (VNC session 1) and access your virtual machine.
With the command above, networking is enabled, but the virtual machine sits on its own private LAN behind QEMU's user-mode NAT, so ping will not work. If you have a local bridge that you use for networking, the following steps will allow you to use your existing bridge to provide higher-performance, full-featured network access to your virtual machine.
First, create /etc/qemu-ifup and add the following to it. Replace brlan with the name of your bridge:
#!/bin/bash
ifconfig $1 0.0.0.0 promisc up
brctl addif brlan $1
sleep 2
Make it executable:
# chmod +x /etc/qemu-ifup
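QEMU also runs a teardown script, /etc/qemu-ifdown, by default when the tap interface is shut down. It is optional, but leaving the tap interface attached to the bridge after the VM exits is untidy. A minimal sketch, assuming the same brlan bridge name as in /etc/qemu-ifup:

```shell
#!/bin/bash
# Detach the tap interface (passed as $1) from the bridge and bring it down.
# "brlan" must match the bridge name used in /etc/qemu-ifup; adjust to yours.
brctl delif brlan $1
ifconfig $1 down
```

As with /etc/qemu-ifup, make it executable with chmod +x /etc/qemu-ifdown.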
Start the virtual machine as follows:
# qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -cpu host -vnc 127.0.0.1:1 -net nic -net tap,id=foo
If you want VNC to listen on a different IP address or port, use the format -vnc IP:vncnum, which causes VNC to listen on the specified IP at TCP port 5900+vncnum.
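For example, this variant of the earlier command (display number chosen arbitrarily) makes VNC listen on all interfaces at display 2, which is TCP port 5902:

```shell
# VNC display 2 -> TCP port 5900 + 2 = 5902, bound to all interfaces:
# qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -cpu host -vnc 0.0.0.0:2 -net nic -net user
```

Note that binding VNC to 0.0.0.0 exposes the console to the network, so only do this on a trusted LAN or tunnel the connection instead.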
By default, the KVM guest will have one CPU with one core. To change this, use -cpu host (to export all of the host's CPU features) and -smp cores=X,threads=Y, where X is the number of cores and Y is the number of threads per core. You can emulate more CPUs and cores than the host actually has.
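As a sketch, the following invocation (core and thread counts chosen arbitrarily) gives the guest 2 cores with 2 threads each, for 4 virtual CPUs total:

```shell
# 2 cores x 2 threads = 4 virtual CPUs visible inside the guest:
# qemu-system-x86_64 vdisk.qcow2 -m 1024 -cpu host -smp cores=2,threads=2 -vnc 127.0.0.1:1 -net nic -net user
```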