KVM is a hardware-accelerated, full-machine hypervisor and virtualization solution included in the Linux kernel since version 2.6.20. It allows you to create and start hardware-accelerated virtual machines under Linux using the QEMU tools.
To enable KVM, the following kernel config parameters should be enabled (this is based on a 3.x kernel):
Under Processor type and features, enable Paravirtualized Guest Support. Under the Paravirtualized Guest Support menu, enable any options related to KVM, such as KVM paravirtualized clock and in particular KVM Guest Support.
Under the Virtualization category from the main kernel config menu, enable Kernel-based Virtual Machine (KVM) support, and enable at least one type of KVM, either for Intel or AMD processors. It is also recommended to enable Host kernel acceleration for virtio net.
You can use modules or build these parts directly into the kernel. Build your new kernel and modules, and reboot.
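The menu options above correspond roughly to the following .config settings (a sketch based on a 3.x kernel; exact option names and availability vary between kernel versions, so treat this as a guide rather than a definitive list):

```
CONFIG_PARAVIRT_GUEST=y
CONFIG_KVM_GUEST=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
CONFIG_VHOST_NET=m
```

Only one of CONFIG_KVM_INTEL and CONFIG_KVM_AMD is strictly required, matching your processor; CONFIG_VHOST_NET provides the host kernel acceleration for virtio net mentioned above.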
KVM is essentially a kernel-accelerated version of QEMU. To enable KVM support in the user-space tools, add the following lines to /etc/make.conf:
QEMU_SOFTMMU_TARGETS="i386 x86_64"
QEMU_USER_TARGETS="i386 x86_64"
Once the make.conf variables above are set, emerge qemu-kvm:
# emerge qemu-kvm
Prior to using KVM, modprobe the appropriate accelerated driver for Intel or AMD:
# modprobe kvm_intel
On AMD processors, load kvm_amd instead:
# modprobe kvm_amd
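If you are unsure which module applies, the CPU flags in /proc/cpuinfo tell you: vmx indicates Intel VT-x and svm indicates AMD-V. A small sketch of that logic (the kvm_module helper is hypothetical, written here only for illustration):

```shell
# Pick the matching KVM module name from a CPU flags string,
# as found in the "flags" line of /proc/cpuinfo.
kvm_module() {
    case " $1 " in
        *" vmx "*) echo kvm_intel ;;   # Intel VT-x
        *" svm "*) echo kvm_amd  ;;    # AMD-V
        *)         echo none     ;;    # no hardware virtualization support
    esac
}

kvm_module "fpu vme vmx sse2"   # prints kvm_intel
```

On a real system you would feed it the flags line, e.g. `kvm_module "$(grep -m1 '^flags' /proc/cpuinfo)"`.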
Starting your first KVM virtual machine
To start your first KVM virtual machine, first download SystemRescueCd and save it as systemrescuecd.iso. Then use the following commands: the first creates a 10GB qcow2 disk image to use for the first disk, and the second starts your virtual machine, booting from the CD:
# qemu-img create -f qcow2 vdisk.qcow2 10G
# qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -vnc 127.0.0.1:1 -cpu host -net nic -net user
VNC server running on `127.0.0.1:5901'
Now you should be able to use a VNC client to connect to 127.0.0.1:5901 (VNC session 1) and access your virtual machine.
Above, networking will be enabled but confined to its own private LAN, so ping will not work. If you have a local bridge that you use for networking, the following steps will allow you to use your existing bridge to provide higher-performance, full-featured network access to your virtual machine. Please see Funtoo_Linux_Networking#Bridge_Configuration for how to configure a bridge.
First, create your bridge and add a tap interface as a slave. Make sure your user or group matches what you put in the template; a group is recommended over a user for KVM.
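The manual equivalent of that setup looks something like the following sketch, using bridge-utils and iproute2 (the interface names br0/eth1/tap0 and the "kvm" group are assumptions; adjust them to match your template, and note that these commands require root):

```shell
# Create the bridge and enslave the physical interface
brctl addbr br0
brctl addif br0 eth1

# Create a persistent tap device owned by the "kvm" group and enslave it too
ip tuntap add dev tap0 mode tap group kvm
brctl addif br0 tap0

# Bring everything up
ip link set tap0 up
ip link set br0 up
```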
You should have a bridge configuration similar to this:
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.00248c52ddc9       yes             eth1
                                                        tap0
To have KVM use the tap0 interface, start the virtual machine as follows:
# qemu-kvm vdisk.qcow2 -m 1024 -cdrom systemrescuecd-x86-2.8.0.iso -cpu host -vnc 127.0.0.1:1 \
  -net nic -net tap,ifname=tap0,script=no,downscript=no
To enable the higher-performance virtio network driver, use the following method:
# qemu-kvm vdisk.qcow2 -m 1024 -cdrom systemrescuecd-x86-2.8.0.iso -cpu host -vnc 127.0.0.1:1 \
  -net nic,model=virtio,macaddr=10:22:33:44:51:66 -net tap,ifname=tap0,script=no,downscript=no
- The macaddr portion sets the MAC address that the VM's NIC will have. It must be unique across all MAC addresses on your network; otherwise you will end up with dropped packets and dmesg spam. This is set in the tap template.
- Virtio also requires virtio drivers to be available in both the guest and the host.
If you want VNC to listen on a different IP address or port, use the format -vnc IP:vncnum, which causes VNC to listen on the specified IP and on TCP port 5900+vncnum.
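The display-number-to-port mapping is simple arithmetic; as a quick illustration (the vnc_port helper is hypothetical, not part of any QEMU tooling):

```shell
# TCP port for a given VNC display number: 5900 + display
vnc_port() { echo $((5900 + $1)); }

vnc_port 1   # prints 5901, the port used by -vnc 127.0.0.1:1 above
```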
By default, the KVM guest will have one CPU with one core. To change this, use -cpu host (to export all of the host's CPU features) and -smp cores=X,threads=Y, where X is the number of cores, and Y is the number of threads on each core. You can emulate more CPUs and cores than you actually have.
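The total number of vCPUs the guest sees is the product of sockets, cores, and threads (sockets defaults to 1 if unspecified). A small sketch of that arithmetic (the vcpus helper is hypothetical, for illustration only):

```shell
# Total guest vCPUs = sockets * cores * threads.
# Usage: vcpus CORES THREADS [SOCKETS] (sockets defaults to 1)
vcpus() { echo $(( ${3:-1} * $1 * $2 )); }

vcpus 4 2   # -smp cores=4,threads=2 gives the guest 8 vCPUs
```

So a flag like -smp cores=2,threads=2 presents four vCPUs to the guest, regardless of the host's actual topology.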