== Introduction ==
KVM is a hardware-accelerated full-machine hypervisor and virtualization solution included as part of kernel 2.6.20 and later. It allows you to create and start hardware-accelerated virtual machines under Linux using the QEMU tools.
== Kernel Setup ==
To enable KVM, the following kernel config parameters should be enabled (this is based on a 3.x kernel):
Under <tt>Processor type and features</tt>, enable <tt>Linux guest support</tt>, and enable the following options:
{{kernelop|title=Processor type and features,Linux guest support|desc=
--- Linux guest support
[*]  Enable paravirtualization code
[ ]    paravirt-ops debugging (NEW)
[*]    Paravirtualization layer for spinlocks
[ ]    Xen guest support (NEW)
[*]  KVM Guest support (including kvmclock) (NEW)
[ ]    Enable debug information for KVM Guests in debugfs (NEW)
[ ]  Paravirtual steal time accounting (NEW)
}}
Under the <tt>Virtualization</tt> category from the main kernel config menu, enable <tt>Kernel-based Virtual Machine (KVM) support</tt>, and enable at least one type of KVM, either for Intel or AMD processors. It is also recommended to enable <tt>Host kernel accelerator for virtio net</tt>.
{{kernelop|title=Virtualization|desc=
--- Virtualization
<M>  Kernel-based Virtual Machine (KVM) support
<M>    KVM for Intel processors support
<M>    KVM for AMD processors support
[*]    KVM legacy PCI device assignment support
<M>  Host kernel accelerator for virtio net
}}
You can use modules or build these parts directly into the kernel. Build your new kernel and modules, and reboot.
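After rebooting, you can confirm that the options actually made it into the running kernel. Below is a minimal sketch, assuming your kernel exposes its configuration via <tt>/proc/config.gz</tt> (CONFIG_IKCONFIG_PROC); the sample text in the <tt>config</tt> variable stands in for real output:

```shell
# Sketch: check that the KVM-related options are present in the kernel config.
# The sample text below stands in for real output; on a kernel built with
# CONFIG_IKCONFIG_PROC, obtain it with:  zcat /proc/config.gz
config="CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
CONFIG_VHOST_NET=m"

enabled=""
for opt in KVM KVM_INTEL KVM_AMD VHOST_NET; do
  # "=[ym]" matches both built-in (y) and module (m) builds
  if echo "$config" | grep -q "^CONFIG_${opt}=[ym]"; then
    enabled="$enabled $opt"
  fi
done
echo "enabled:$enabled"
```

Any option missing from the output needs to be enabled in the kernel config before KVM will work.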
== User-space tools ==
Emerge qemu:
# ##i##emerge qemu
==Initial Setup==
Prior to using KVM, modprobe the accelerated driver appropriate for your CPU: <tt>kvm_intel</tt> for Intel processors, or <tt>kvm_amd</tt> for AMD processors:
# ##i##modprobe kvm_intel
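If you are unsure which driver applies, the CPU flags in <tt>/proc/cpuinfo</tt> tell you: <tt>vmx</tt> indicates Intel VT-x, <tt>svm</tt> indicates AMD-V. A small sketch using sample flags (on a real system, read them with <tt>grep -m1 '^flags' /proc/cpuinfo</tt>):

```shell
# Sketch: pick the right KVM module from the CPU feature flags.
# Sample flags shown here; on a real system read them from /proc/cpuinfo.
flags="fpu vme de pse tsc msr vmx sse2"

case " $flags " in
  *" vmx "*) kvm_module=kvm_intel ;;  # Intel VT-x
  *" svm "*) kvm_module=kvm_amd  ;;   # AMD-V
  *)         kvm_module=""       ;;   # no hardware virtualization support
esac
echo "$kvm_module"
```

The resulting name is what you pass to <tt>modprobe</tt>; an empty result means the CPU lacks hardware virtualization support.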
== Starting your first KVM virtual machine ==
To start your first KVM virtual machine, first download SysRescueCD and save it as <tt>systemrescuecd.iso</tt>. Then use the following commands. The first creates a 10GB qcow2 disk image to use for the first disk, and the second starts your virtual machine, booting from the CD:
# ##i##qemu-img create -f qcow2 vdisk.qcow2 10G
# ##i##qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -vnc :1 -cpu host -net nic -net user
Now you should be able to use a VNC client to connect to your host on display :1 (VNC session 1, TCP port 5901) and access your virtual machine.
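Rather than retyping the full command each time, the options can be collected into a small wrapper script. The variable names below are illustrative only, a sketch of how you might parameterize the launch:

```shell
# Sketch: assemble the qemu launch command from a few easy-to-edit variables.
disk=vdisk.qcow2        # qcow2 disk image created with qemu-img
mem=1024                # guest RAM in MB
iso=systemrescuecd.iso  # boot CD image
vnc_display=:1          # VNC display number (listens on TCP port 5900+1)

cmd="qemu-system-x86_64 $disk -m $mem -cdrom $iso -vnc $vnc_display -cpu host -net nic -net user"
# Print the command; replace `echo` with `exec` to actually launch the VM.
echo "$cmd"
```

This makes it easy to change the memory size or boot media without retyping the whole invocation.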
== Networking Options ==
Above, networking will be enabled, but the virtual machine will be on its own private LAN, and ping will not work. If you have a local bridge that you use for networking, the following steps will allow you to use your existing bridge to provide higher-performance, full-featured network access to your virtual machine.
First, create <tt>/etc/qemu-ifup</tt> and add the following to it. Replace <tt>brlan</tt> with the name of your bridge:
<syntaxhighlight lang="bash">
#!/bin/sh
ifconfig $1 promisc up
brctl addif brlan $1
sleep 2
</syntaxhighlight>
Make it executable:
# ##i##chmod +x /etc/qemu-ifup
Start the virtual machine as follows:
# ##i##qemu-system-x86_64 vdisk.qcow2 -m 1024 -cdrom systemrescuecd.iso -cpu host -vnc :1 -net nic -net tap,id=foo
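When the virtual machine exits, QEMU will also run <tt>/etc/qemu-ifdown</tt> (if it exists) to tear the tap interface back down. A matching counterpart sketch for the bridge setup above; as with <tt>qemu-ifup</tt>, replace <tt>brlan</tt> with the name of your bridge:

```shell
#!/bin/sh
# /etc/qemu-ifdown (sketch): undo what qemu-ifup did.
# $1 is the tap interface name passed in by qemu.
brctl delif brlan $1
ifconfig $1 down
```

Remember to <tt>chmod +x /etc/qemu-ifdown</tt> as well, or QEMU will not be able to run it.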
== Tweaking KVM ==
=== VNC Output ===
If you want VNC to listen on a different IP address or port, use the form <tt>-vnc IP:vncnum</tt>, which causes VNC to listen on the specified IP at TCP port 5900+vncnum.
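The port arithmetic is simple, for example:

```shell
# VNC display-to-port arithmetic: display N listens on TCP port 5900+N.
vncnum=2
port=$((5900 + vncnum))
echo "$port"   # -vnc :2 (or e.g. -vnc 192.168.1.10:2) listens on port 5902
```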
=== CPU Settings ===
By default, the KVM guest will have one CPU with one core. To change this, use <tt>-cpu host</tt> (to export all of the host's CPU features) and <tt>-smp cores=X,threads=Y</tt>, where X is the number of cores, and Y is the number of threads on each core. You can emulate more CPUs and cores than you actually have.
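As a quick sanity check of the arithmetic (assuming a single socket), the guest sees cores times threads virtual CPUs:

```shell
# Total vCPUs visible to the guest = cores * threads (one socket assumed).
cores=2
threads=2
vcpus=$((cores * threads))
echo "-smp cores=$cores,threads=$threads gives $vcpus vCPUs"
```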

Latest revision as of 21:03, January 1, 2015