Python

Python is a scripting language as well as a high-level programming language, and is used extensively by Funtoo and Gentoo Linux. [[Portage]], the ports system used by Funtoo and Gentoo Linux, is written in Python (along with bash-based ebuilds).
== Introduction ==
Funtoo Linux contains many enhancements related to Python, and most of these enhancements are the result of integrating the Progress Overlay into Funtoo Linux. This new functionality allows Portage to do a much, much better job of keeping Python-based packages up-to-date on your system.
== Prior Issues in Gentoo/Funtoo ==
With the older <tt>python.eclass</tt> in Gentoo (and formerly in Funtoo Linux), Portage stored no usable accounting information that would let it "know" which versions of Python various ebuilds had been built to use. This created a problem when new versions of Python were installed, as Portage had no way to automatically update Python-based packages to work with the newer versions.
This created the need for a separate tool, called <tt>python-updater</tt>, which was used to rebuild all Python-related ebuilds so that they properly utilized the currently-installed versions of Python.
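
The historical workflow after a Python upgrade looked roughly like this (a sketch of the old procedure; the package atom is illustrative):

```shell
# Upgrade the Python interpreter itself
emerge -u dev-lang/python

# Then scan for and rebuild all packages still built against the old
# Python version -- this extra manual step is what python-updater existed for
python-updater
```
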
== Progress in Funtoo ==
The new Progress Overlay that has been integrated into Funtoo Linux goes a very long way towards solving the prior problems that existed when dealing with Python-based packages and ebuilds. The Progress Overlay adds two important things. The first is a new eclass that allows new Python-related ebuilds to be written in such a way that Portage can understand which versions of Python they were built against.
The second is that Progress Overlay offers Funtoo Linux users hundreds of updated ebuilds: over four hundred ebuilds that interact with Python in some way have been enhanced to take advantage of the new Python eclass. These enhancements allow Portage to do what <tt>python-updater</tt> used to do for you -- keep your system up-to-date and all dependencies satisfied, even when you upgrade or change the Python versions installed on your system.
== How To Use It ==
You don't need to do anything special to take advantage of the new Python functionality from Progress Overlay. It is already integrated into your Portage tree and is working for you behind the scenes to keep Python-related packages working properly.
The new Python functionality uses a special configuration variable called <tt>PYTHON_ABIS</tt>. By default, <tt>PYTHON_ABIS</tt> is set to "<tt>2.7 3.2</tt>" in Funtoo Linux profiles. This setting tells Portage that by default, any Python-related ebuilds should be built so that they can be used with both python-2.7 and python-3.2, if the ebuilds are compatible and will run using these versions of Python. If an ebuild doesn't support python-3.2, for example, Portage will still ensure that a python-2.7-compatible version of the package is installed. If an ebuild supports ''both'' versions, then special steps will be taken to install two separate sets of python modules and binaries, in order to ensure full compatibility with either Python interpreter.
If you'd like to change <tt>PYTHON_ABIS</tt>, simply overwrite the setting in <tt>/etc/make.conf</tt> as follows:
<pre>
PYTHON_ABIS="2.6 2.7 3.2"
</pre>

The new setting above tells Portage to ensure that Python-related ebuilds are also built against python-2.6, in addition to python-2.7 and python-3.2. This is useful if you would like Portage to target other Python ABIs (such as 2.6, or jython) that you might be interested in.
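
As a concrete sketch (the <tt>/etc/make.conf</tt> path is per a standard Funtoo install; the final update command is one common way to let Portage act on the change):

```shell
# Add python-2.6 to the default targets in /etc/make.conf
echo 'PYTHON_ABIS="2.6 2.7 3.2"' >> /etc/make.conf

# Check which Python interpreters are installed and active
eselect python list

# Let Portage rebuild anything affected by the new ABI set
emerge -auDN world
```
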
== Resources ==
For more information, see:
* [[Progress Overlay Python]] - this page contains detailed developer documentation about the new Progress Overlay functionality, and how to write new-style Python-related ebuilds that take advantage of the new Progress functionality.

Latest revision as of 00:41, April 26, 2012

Open vSwitch

Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple physical servers similar to VMware's vNetwork distributed vswitch or Cisco's Nexus 1000V.

Features

The current stable release of Open vSwitch (version 1.4.0) supports the following features:

  • Visibility into inter-VM communication via NetFlow, sFlow(R), SPAN, RSPAN, and GRE-tunneled mirrors
  • LACP (IEEE 802.1AX-2008)
  • Standard 802.1Q VLAN model with trunking
  • A subset of 802.1ag CCM link monitoring
  • STP (IEEE 802.1D-1998)
  • Fine-grained min/max rate QoS
  • Support for HFSC qdisc
  • Per-VM interface traffic policing
  • NIC bonding with source-MAC load balancing, active backup, and L4 hashing
  • OpenFlow protocol support (including many extensions for virtualization)
  • IPv6 support
  • Multiple tunneling protocols (Ethernet over GRE, CAPWAP, IPsec, GRE over IPsec)
  • Remote configuration protocol with local python bindings
  • Compatibility layer for the Linux bridging code
  • Kernel and user-space forwarding engine options
  • Multi-table forwarding pipeline with flow-caching engine
  • Forwarding layer abstraction to ease porting to new software and hardware platforms

Configuring Open vSwitch

For kernel versions older than 3.3, Open vSwitch must be built with its own kernel modules (the modules USE flag). Since kernel 3.3.0, the datapath is included in the mainline kernel as a module named "Open vSwitch", which can be found under Networking Support -> Networking Options -> Open vSwitch. Then just emerge openvswitch:

# emerge -avt openvswitch
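
After installation, the switch needs its datapath module loaded and its two daemons running. A sketch for an OpenRC system (the init script names ovsdb-server and ovs-vswitchd are assumed from the openvswitch ebuild; adjust if your install differs):

```shell
# Load the in-tree datapath module (kernels >= 3.3; older kernels use the
# module built by the openvswitch package instead)
modprobe openvswitch

# Start the configuration database and the switch daemon
rc-service ovsdb-server start
rc-service ovs-vswitchd start

# Have both start at boot
rc-update add ovsdb-server default
rc-update add ovs-vswitchd default
```
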

Using Open vSwitch

These configurations are taken from the Open vSwitch website at http://openvswitch.org and adjusted for Funtoo.

VLANs

Setup

  • Two Physical Networks
    • Data Network: Ethernet network for VM data traffic, which will carry VLAN-tagged traffic between VMs. Your physical switch(es) must be capable of forwarding VLAN-tagged traffic, and the physical switch ports should be VLAN trunks. (Usually this is the default behavior; configuring your physical switching hardware is beyond the scope of this document.)
    • Management Network: This network is not strictly required, but it is a simple way to give the physical host an IP address for remote access, since an IP address cannot be assigned directly to eth0.
  • Two Physical Hosts

Host1, Host2. Both hosts are running Open vSwitch. Each host has two NICs:

    • eth0 is connected to the Data Network. No IP address can be assigned on eth0.
    • eth1 is connected to the Management Network (if necessary). eth1 has an IP address that is used to reach the physical host for management.
  • Four VMs

VM1, VM2 run on Host1. VM3, VM4 run on Host2. Each VM has a single interface that appears as a Linux device (e.g., "tap0") on the physical host. (Note: for Xen/XenServer, VM interfaces appear as Linux devices with names like "vif1.0".)

[image: 2host-4vm.png]

Goal

Isolate VMs using VLANs on the Data Network.
  • VLAN1: VM1, VM3
  • VLAN2: VM2, VM4

Configuration

Perform the following configuration on Host1:

  1. Create an OVS bridge
    ovs-vsctl add-br br0
  2. Add eth0 to the bridge (by default, all OVS ports are VLAN trunks, so eth0 will pass all VLANs)
    ovs-vsctl add-port br0 eth0
  3. Add VM1 as an "access port" on VLAN1
    ovs-vsctl add-port br0 tap0 tag=1
  4. Add VM2 on VLAN2 (note the distinct port name tap1; every port on a bridge must have a unique name)
    ovs-vsctl add-port br0 tap1 tag=2
On Host2, repeat the same configuration to set up a bridge with eth0 as a trunk:
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
  1. Add VM3 to VLAN1
    ovs-vsctl add-port br0 tap0 tag=1
  2. Add VM4 to VLAN2
    ovs-vsctl add-port br0 tap1 tag=2
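
To verify the result, ovs-vsctl can display the bridge topology and individual port tags (a sketch; port names match the examples above):

```shell
# Show all bridges and their ports; access ports display "tag: 1" etc.
ovs-vsctl show

# Or query one port's VLAN tag directly from the database
ovs-vsctl get port tap0 tag
```
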

sFlow

This will set up monitoring of VM traffic using sFlow.

Setup

  • Two Physical Networks
    • Data Network: Ethernet network for VM data traffic.
    • Management Network: This network must exist, as it is used to send sFlow data from the agent to the remote collector.
  • Two Physical Hosts
    • Host1 runs Open vSwitch and has two NICs:
      • eth0 is connected to the Data Network. No IP address can be assigned on eth0.
      • eth1 is connected to the Management Network. eth1 has an IP address for management traffic (including sFlow).
    • Monitoring Host can be any computer that runs an sFlow collector. Here we use sFlowTrend (http://www.inmon.com/products/sFlowTrend.php), a free, simple cross-platform Java sFlow collector. Other sFlow collectors should work equally well.
      • eth0 is connected to the Management Network; eth0 has an IP address that can reach Host1.
  • Two VMs

VM1, VM2 run on Host1. Each VM has a single interface that appears as a Linux device (e.g., "tap0") on the physical host. (Note: same for Xen/XenServer as in the VLANs section.)

[image: sflow-setup.png]

Goal

Monitor traffic sent to/from VM1 and VM2 on the Data network using an sFlow collector.

Configuration

Define the following configuration values in your shell environment. The default port for sFlowTrend is 6343. You will want to set your own IP address for the collector in the place of 10.0.0.1. Setting the AGENT_IP value to eth1 indicates that the sFlow agent should send traffic from eth1's IP address. The other values indicate settings regarding the frequency and type of packet sampling that sFlow should perform.

# export COLLECTOR_IP=10.0.0.1
# export COLLECTOR_PORT=6343
# export AGENT_IP=eth1
# export HEADER_BYTES=128
# export SAMPLING_N=64
# export POLLING_SECS=10

Run the following command to create an sFlow configuration and attach it to bridge br0:

ovs-vsctl -- --id=@sflow create sflow agent=${AGENT_IP} target=\"${COLLECTOR_IP}:${COLLECTOR_PORT}\" header=${HEADER_BYTES} sampling=${SAMPLING_N} polling=${POLLING_SECS} -- set bridge br0 sflow=@sflow

That is all. To configure sFlow on additional bridges, just replace "br0" in the above command with a different bridge name. To remove the sFlow configuration from a bridge (in this case, 'br0'), run the following, where $SFLOWUUID is the UUID of the sFlow record attached to the bridge:

ovs-vsctl remove bridge br0 sflow $SFLOWUUID
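
The UUID can be read straight from the bridge's sflow column before removing it (a sketch using the standard ovs-vsctl get/remove subcommands):

```shell
# The bridge's sflow column holds the UUID of its attached sFlow record
SFLOWUUID=$(ovs-vsctl get bridge br0 sflow)

# Detach (and thereby delete) that sFlow configuration from br0
ovs-vsctl remove bridge br0 sflow "$SFLOWUUID"
```
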

To see all current sets of sFlow configuration parameters, run:

ovs-vsctl list sflow

QoS Rate-limiting