OpenVZ
== Introduction ==
 
OpenVZ (see [http://wiki.openvz.org wiki.openvz.org]) is an OS-level server virtualization solution built on Linux. OpenVZ allows the creation of isolated, secure virtual Linux containers (called "VE"s) on a single physical server. Each container has its own local users, power state, network interfaces, resource limits, and a walled-off portion of the host filesystem. OpenVZ is often described as "chroot on steroids."
 
Funtoo supports OpenVZ in the following ways:
 
* Building OpenVZ templates using [[Metro]], our distribution build tool.
* Improving <tt>vzctl</tt> by developing an enhanced/patched version hosted on [http://www.github.com/funtoo/vzctl GitHub].
* Integrating [[Funtoo Linux Networking]] support into vzctl (these patches have been accepted by the OpenVZ project.)
* Improving the vzctl startup scripts to do things like properly initialize veth and vzeventd.
* Integrating additional patches into the openvz-rhel6-stable and openvz-rhel5-stable ebuilds to ensure production-quality OpenVZ functionality.
* Maintaining compatibility with RHEL5-based OpenVZ production kernels, as well as providing instructions on getting Funtoo Linux working with these kernels in our [[RHEL5 Kernel HOWTO]]. (Note: the RHEL6-based openvz-rhel6-kernel is now recommended for deploying OpenVZ.)
 
In addition, Daniel is currently employed at [http://www.zenoss.com Zenoss] and is the author and maintainer of the [http://community.zenoss.org/blogs/zenossblog/2012/01/24/openvz-and-zenoss Zenoss OpenVZ ZenPack] ([https://github.com/zenoss/ZenPacks.zenoss.OpenVZ GitHub link]).
 
== Recommended Versions ==
 
To install OpenVZ on Funtoo Linux so that you can create Linux-based containers, an <tt>x86-64bit</tt> version of Funtoo Linux is strongly recommended. The <tt>openvz-rhel6-stable</tt> ebuild is the recommended kernel to use. If you emerge this kernel with the <tt>binary</tt> USE flag enabled, it will build a binary kernel and initrd using the default Red Hat configuration, which should boot on nearly all hardware. After emerging, you will need to edit <tt>/etc/boot.conf</tt>, run [[boot-update]], and reboot into the new OpenVZ kernel.
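For example, the full sequence might look like this (a sketch; your <tt>/etc/boot.conf</tt> changes and package.use layout will depend on your setup):

<console>
# ##i##echo "sys-kernel/openvz-rhel6-stable binary" >> /etc/portage/package.use
# ##i##emerge openvz-rhel6-stable
# ##i##nano /etc/boot.conf
# ##i##boot-update
# ##i##reboot
</console>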
 
{{fancywarning|1=
Please use ext4 exclusively when deploying an OpenVZ host. The Parallels development team tests extensively with ext4, and modern versions of <tt>openvz-rhel6-stable</tt> are '''not''' compatible with XFS; you may experience kernel bugs.
}}
 
Alternatively, you can emerge <tt>openvz-rhel5-stable</tt> with the <tt>binary</tt> USE flag enabled to use the older RHEL5-based OpenVZ kernel. This requires additional steps, which are covered in the [[RHEL5 Kernel HOWTO]].
 
You will also need to emerge <tt>vzctl</tt>, the OpenVZ userspace tools.

== Configuration ==
 
After booting into an OpenVZ-enabled kernel, OpenVZ can be enabled as follows:
 
<console>
# ##i##emerge vzctl
# ##i##rc-update add vz default
# ##i##rc
</console>
 
== Funtoo Linux OpenVZ Templates ==
 
The Funtoo Linux stage directory also contains Funtoo Linux OpenVZ templates, in the openvz/ subdirectory. These can be used as follows:
 
<console>
# ##i##cd /vz/template/cache
# ##i##wget http://ftp.osuosl.org/pub/funtoo/funtoo-current/openvz/x86-64bit/funtoo-openvz-core2_64-funtoo-current-2011-12-31.tar.xz
# ##i##vzctl create 100 --ostemplate funtoo-openvz-core2_64-funtoo-current-2011-12-31
Creating container private area (funtoo-openvz-core2_64-funtoo-current-2011-12-31)
Performing postcreate actions
Container private area was created
</console>
 
If you are not using Funtoo Linux, you may need to convert the template from .xz to .gz format for this to work.
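For example, using the standard <tt>xz</tt> and <tt>gzip</tt> tools to recompress the template in place:

<console>
# ##i##cd /vz/template/cache
# ##i##xz -d funtoo-openvz-core2_64-funtoo-current-2011-12-31.tar.xz
# ##i##gzip funtoo-openvz-core2_64-funtoo-current-2011-12-31.tar
</console>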
 
== Resource Limits ==
 
If you do not need to have any resource limits for the VE, then on a Funtoo Linux host they can be disabled as follows:
 
<console>
ninja1 ~ # ##i##vzctl set 100 --applyconfig unlimited --save
</console>
 
== Starting the Container ==
 
Here's how to start the container:
 
<console>
ninja1 ~ # ##i##vzctl start 100
Starting container ...
Container is mounted
Setting CPU units: 1000
Container start in progress...
ninja1 ~ #
</console>
== Networking ==
 
=== veth networking ===
 
OpenVZ has two types of networking. The first is called "veth", which provides the VE with a virtual ethernet interface. This allows the VE to do things like broadcasting and multicasting, which means that DHCP can be used. The best way to set up veth networking is to use a bridge on the physical host machine. For the purposes of this example, we'll assume your server has a wired eth0 interface that provides Internet connectivity - it does not need to have an IP address. To configure a bridge, we will create a network interface called "br0", a bridge device, and assign your static ip to br0 rather than eth0. Then, we will configure eth0 to come up, but without an IP, and add it as a "slave" of bridge br0. Once br0 is configured, we can add other network interfaces (each configured to use a unique static IP address) as slaves of bridge br0, and these devices will be able to communicate out over your Ethernet link.
 
Let's see how this works.
 
==== Network - Before ====
 
Before the bridge is configured, we probably have an <tt>/etc/conf.d/netif.eth0</tt> that looks like this:
 
<pre>
template="interface"
ipaddr="10.0.1.200/24"
gateway="10.0.1.1"
nameservers="10.0.1.1"
domain="funtoo.org"
</pre>
 
==== Network - After ====
 
To get the bridge-based network configured, first connect to a physical terminal or management console, as eth0 will be going down for a bit as we make these changes.
 
We are now going to set up a bridge with eth0's IP address, and add eth0 to the bridge with no IP. Then we can throw container interfaces into the bridge, and they can all communicate out using eth0.
 
We will first rename <tt>netif.eth0</tt> to <tt>netif.br0</tt>:
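<console>
# ##i##cd /etc/conf.d
# ##i##mv netif.eth0 netif.br0
</console>

Then, edit the file so that it looks like this (first line modified, new line added at end):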
 
<pre>
template="bridge"
ipaddr="10.0.1.200/24"
gateway="10.0.1.1"
nameservers="10.0.1.1"
domain="funtoo.org"
slaves="netif.eth0"
</pre>
 
If you want to bridge the wlan0 device, you'll need the additional wpa_supplicant flag <tt>-b br0</tt>.
In most cases, for wlan0 it is much better to use NAT routing instead:
<console>
# ##i##iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o wlan0 -j SNAT --to-source your_host_ip_address
</console>
 
Now it's time to create a new <tt>/etc/conf.d/netif.eth0</tt>, but this time we won't associate an IP address with it. The config file consists of a single line:
 
<pre>
template="interface-noip"
</pre>
 
Now, we need to create a necessary symlink in /etc/init.d and get our bridge added to the default runlevel:
 
<console>
# ##i##cd /etc/init.d
# ##i##ln -s netif.tmpl netif.br0
# ##i##rc-update add netif.br0 default
</console>
 
Now, let's enable our new network interfaces:
 
<console>
# ##i##/etc/init.d/netif.eth0 stop
# ##i##rc
</console>
 
The result of these changes is that you now have initscripts to create a "br0" interface (with static IP), with "eth0" as its slave (with no IP). Networking should still work as before, but now you are ready to provide bridged connectivity to your virtual containers since you can add their "veth" interfaces to "br0" and they will be bridged to your existing network.
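You can verify the setup at this point: <tt>ip addr show br0</tt> should display the static IP on the bridge, and <tt>brctl show br0</tt> should list eth0 as an attached interface:

<console>
# ##i##ip addr show br0
# ##i##brctl show br0
</console>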
 
==== Using The Bridge ====
 
To add a veth "eth0" interface to your VE, type the following:
 
<console>
# ##i##vzctl stop 100
# ##i##vzctl set 100 --netif_add eth0,,,,br0 --save
# ##i##vzctl start 100
</console>
 
Once the VE is started, the network interface inside the VE will be called "eth0", and the corresponding interface on the host system will be named "veth100.0". Because we specified "br0" after the 4 commas, vzctl will automatically add the new "veth100.0" interface to bridge br0 for us. We can see this by typing "brctl show" after starting the VE:
 
<console>
# ##i##brctl show
bridge name    bridge id              STP enabled    interfaces
br0            8000.0026b92c72f5      no              eth0
                                                        veth100.0
</console>
 
==== VE Configuration ====
 
You will also need to manually configure the VE to acquire/use a valid IP address - DHCP or static assignment will both work; typically, this is done by starting the VE with "vzctl start 100" and then typing "vzctl enter 100", which will give you a root shell inside the VE. Then, once you have configured the network, you can ensure that the VE is accessible remotely via SSH. Note that once inside the VE (with "vzctl enter 100"), you configure the VE's network interface as you would on a regular Linux distribution - the VE will be bridged into your LAN, so it can talk to your DHCP server, and can use an IP address that it acquires via DHCP or it can use a static address.
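For example, on a Funtoo Linux VE you might configure DHCP like this (a sketch assuming the <tt>dhcpcd</tt> network template is available inside the VE; adapt the steps to whatever distribution the VE runs):

<console>
# ##i##vzctl enter 100
# ##i##echo 'template="dhcpcd"' > /etc/conf.d/netif.eth0
# ##i##cd /etc/init.d
# ##i##ln -s netif.tmpl netif.eth0
# ##i##rc-update add netif.eth0 default
# ##i##rc
</console>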
 
=== venet networking ===
 
"venet" is OpenVZ's other form of host networking. It can be easier to configure than veth, but does not allow the use of broadcast or multicast, so DHCP is not possible on the VE side. For this reason, an IP address must be statically assigned to the VE, as follows:
 
<console>
# ##i##vzctl set 100 --ipadd 10.0.1.201 --save
# ##i##vzctl set 100 --nameserver 8.8.4.4 --save #google public DNS server
# ##i##vzctl set 100 --hostname foobar --save
</console>
 
With venet configuration, some additional steps are required in the case of a PPPoE Internet connection. We will use iptables to get networking working in all VEs. First, enable IP forwarding on the host:
 
<console>
# ##i##echo 1 > /proc/sys/net/ipv4/ip_forward
</console>
 
or, alternatively, set it in /etc/sysctl.conf to enable IP forwarding at boot:
 
<console>
# ##i##echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# ##i##sysctl -p
</console>
 
Add an iptables rule (replace <tt>ppp0</tt> with your desired outgoing interface), save it, and start the firewall:

<console>
# ##i##iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
# ##i##/etc/init.d/iptables save
# ##i##rc-update add iptables default
# ##i##rc
</console>
All VEs now have a network connection through the HN (hardware node).
 
When using venet, OpenVZ will handle the process of ensuring the VE has its network properly configured at boot. As of vzctl-3.0.24.2-r4 in Funtoo Linux, Funtoo Linux VEs should be properly auto-configured when using venet.
 
With venet, there is no need to add any interfaces to a bridge - OpenVZ treats venet interfaces as virtual point-to-point interfaces so that traffic is automatically routed properly from the VE to the host system, out the default route of the host system if necessary.
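You can confirm this from inside the container; the routing table will show a point-to-point route out of the <tt>venet0</tt> device (exact output varies):

<console>
# ##i##vzctl enter 100
# ##i##ip route show dev venet0
</console>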
 
[[Category:Virtualization]]
