''How to use your Funtoo machine to serve a Microsoft Windows installation over the network''
In this guide we will assume that you have followed the [[PXE network boot server]] wiki article and have a working network/PXE boot setup. For now this guide covers Windows XP; it will be expanded later to also cover Windows 7.
==Prerequisites==
#A working Funtoo installation
#A working PXE Setup (DHCP, TFTP, PXELinux)
#app-arch/cabextract
#A legitimate copy of Microsoft Windows
#A driver for your NIC - ''it is suggested to use a complete driver pack covering all major supported NIC hardware for the version of Windows to be installed''
#RIS Linux toolkit >=0.4
#A working Samba server setup


== Creating the Windows XP Image ==


*In the previous guide, [[PXE network boot server|PXE Network Boot Server]], we used /tftproot as the working directory, so we will use it here as well for convenience. If you chose a different working directory, substitute it wherever /tftproot appears in this guide.


First you will need to create an ISO from your Windows XP installation disc. If you already have the ISO image you may skip this step.


<console>
###i## dd if=/dev/sr0 of=/tftproot/winxp.iso
</console>
If your CD-ROM device isn't ''<code>/dev/sr0</code>'', use the appropriate device in this command.
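If you are unsure which device node your optical drive uses, one way to check (our suggestion, not part of the original procedure; it assumes the kernel's cdrom driver is loaded) is:
<console>
###i## cat /proc/sys/dev/cdrom/info
</console>
The ''drive name'' field lists the device name(s), e.g. ''sr0''.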


== Mount the ISO and Prepare Installation Sources ==
Mount the image to ''<code>/tftproot/cdrom</code>'':
<console>
###i## mkdir /tftproot/cdrom; mount -o loop /tftproot/winxp.iso /tftproot/cdrom
</console>
Create the new directory for the network installation files and copy the needed files to it:
<console>
###i## mkdir /tftproot/winxp; cp -R /tftproot/cdrom/i386 /tftproot/winxp/i386
</console>
Depending on your CD/DVD copy of Windows, the directory on the disc may be named I386 rather than i386. If that is the case, change the source path in the command above, but keep the destination directory named i386 - this will be very important later on when creating the remap file!
Check the contents of your newly created i386 directory to see if the filenames are in all CAPS or if they are already in lowercase.
<console>
###i## ls /tftproot/winxp/i386
</console>
If you have all-uppercase filenames, run the following script to convert them to lowercase:
<console>
###i## cd /tftproot/winxp/i386;ls | awk '$0!=tolower($0){printf "mv \"%s\" \"%s\"\n",$0,tolower($0)}' | sh
</console>
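To confirm the conversion, you can list any names that still contain uppercase letters (a sanity check of our own; no output means everything is lowercase):
<console>
###i## ls /tftproot/winxp/i386 | grep '[A-Z]'
</console>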


==Extracting and Modifying the Required Boot Files ==
Install {{Package|app-arch/cabextract}}
<console>
###i## emerge app-arch/cabextract
</console>
Extract the prepackaged drivers:
<console>
###i## cd /tftproot/winxp/i386;cabextract driver.cab
</console>
Install support for a large list of network cards:
<console>
###i## cd /tftproot/;wget http://downloads.sourceforge.net/project/bootfloppy/pxefiles.tar.gz
###i## tar zxvf pxefiles.tar.gz; cp pxefiles/drivers/* winxp/i386/
</console>
Copy the BINL server (binlsrv.py) and INF parser (infparser.py) tools to /tftproot:
<console>
###i## cp pxefiles/script/* /tftproot/
</console>
Extract the netboot startrom:
<console>
###i## cd /tftproot; cabextract winxp/i386/startrom.n1_
</console>
Fix the startrom for netbooting XP:
<console>
###i## sed -i -e 's/NTLDR/XPLDR/gi' startrom.n12
###i## mv startrom.n12 winxp.0
</console>
Fix XPLDR:
<console>
###i## cabextract winxp/i386/setupldr.ex_
###i## sed -i -e 's/winnt\.sif/winxp\.sif/gi' setupldr.exe
###i## sed -i -e 's/ntdetect\.com/ntdetect\.wxp/gi' setupldr.exe
###i## mv setupldr.exe xpldr
###i## cp winxp/i386/ntdetect.com ntdetect.wxp
</console>


== Creating a remapping file ==
Create the file <code>/tftproot/tftpd.remap</code> and add the following to it:
{{File
|/tftproot/tftpd.remap|<pre>
ri ^[a-z]: # Remove "drive letters"
rg \\ / # Convert backslashes to slashes
rg \# @ # Convert hash marks to @ signs
rg /../ /..no../ # Convert /../ to /..no../
rg A a
rg B b
rg C c
rg D d
rg E e
rg F f
rg G g
rg H h
rg I i
rg J j
rg K k
rg L l
rg M m
rg N n
rg O o
rg P p
rg Q q
rg R r
rg S s
rg T t
rg U u
rg V v
rg W w
rg X x
rg Y y
rg Z z
r ^/(.*) \1
r ^xpldr xpldr
r ^ntdetect.wxp ntdetect.wxp
r ^winxp.sif winxp.sif
</pre>}}
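The remap file only takes effect if in.tftpd is started with the ''-m'' option pointing at it. A minimal sketch, assuming tftp-hpa with a conf.d layout like the one from the PXE boot server guide (the exact variable names may differ on your system):
{{File
|/etc/conf.d/in.tftpd|<pre>
INTFTPD_PATH="/tftproot"
INTFTPD_OPTS="-R 4096:32767 -s ${INTFTPD_PATH} -m /tftproot/tftpd.remap"
</pre>}}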


==Install/Configure Samba ==
If you don't already have {{Package|net-fs/samba}} installed, then:
<console>
###i## emerge -av net-fs/samba
</console>
Create a Samba share for your tftp server in <code>/etc/samba/smb.conf</code>


{{Note}} Be sure you have the other required Samba settings configured in the file as well.
{{File
|/etc/samba/smb.conf|<pre>
[Global]
interfaces = lo eth0 wlan0
bind interfaces only = yes
workgroup = WORKGROUP
security = user


[tftproot]
path = /tftproot
browsable = true
read only = yes
writable = no
guest ok = yes
</pre>}}
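Before starting the service, you can sanity-check the configuration with Samba's built-in parser:
<console>
###i## testparm
</console>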
Start Samba:
<console>
###i## /etc/init.d/samba start
</console> 
or if samba has already been started:
<console>
###i## /etc/init.d/samba restart
</console>
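If you want Samba to come up automatically at boot (optional, but convenient for a permanent install server):
<console>
###i## rc-update add samba default
</console>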


== Creating a Setup Instruction File ==
==== Setup ====
Create the file <code>/tftproot/winxp.sif</code> and add the following, replacing <tt>SAMBA_SERVER_IP</tt> with the local IP address of your Samba server:
{{File
|/tftproot/winxp.sif|<pre>
[data]
floppyless = "1"
msdosinitiated = "1"
; Needed for second stage
OriSrc = "\\SAMBA_SERVER_IP\tftproot\winxp\i386"
OriTyp = "4"
LocalSourceOnCD = 1
DisableAdminAccountOnDomainJoin = 1


[SetupData]
OsLoadOptions = "/fastdetect"
; Needed for first stage
SetupSourceDevice = "\Device\LanmanRedirector\SAMBA_SERVER_IP\tftproot\winxp"


[UserData]
ComputerName = *
</pre>}}


== Editing the pxelinux.cfg/default boot menu ==
Edit your boot menu so that it contains the following entry:
<console>
LABEL WinXP
MENU LABEL Install MS Windows XP
KERNEL winxp.0
</console>
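For reference, a minimal complete <code>pxelinux.cfg/default</code> could look like the following. The DEFAULT/PROMPT/TIMEOUT lines and the menu.c32 module are illustrative assumptions - keep whatever structure your existing menu from the PXE boot server guide already has:
{{File
|/tftproot/pxelinux.cfg/default|<pre>
DEFAULT menu.c32
PROMPT 0
TIMEOUT 300

LABEL WinXP
MENU LABEL Install MS Windows XP
KERNEL winxp.0
</pre>}}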


== Re-Start all required daemons ==
If a daemon isn't already running, use ''start'' instead of ''restart'' in the following commands:
<console>
###i## /etc/init.d/dnsmasq restart
###i## /etc/init.d/in.tftpd restart
</console>


== Modify Binlsrv, update driver cache, and start driver hosting service ==
Change the BASEPATH= variable at or around line 62 of ''<code>binlsrv.py</code>'' so that it reads:
{{File
|binlsrv.py|<pre>
BASEPATH='/tftproot/winxp/i386/'
</pre>}}
Generate driver cache:
<console>
###i## cd /tftproot;./infparser.py winxp/i386/
</console>
Start the BINL service:
<console>
###i## ./binlsrv.py
</console>
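binlsrv.py runs in the foreground. If you want it to keep serving after you log out, one option (our suggestion, not part of the RIS Linux toolkit) is to background it and log its output:
<console>
###i## nohup ./binlsrv.py > /tftproot/binlsrv.log 2>&1 &
</console>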


== Booting the client ==  
If all is well, you should be able to boot the client by choosing ''boot from network'' in its boot options. You should reach your PXELinux bootloader and see the ''Install MS Windows XP'' option; after pressing Enter, your XP installation should kick off over the network. Congratulations!


[[Category:HOWTO]]


== Open vSwitch ==

Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache 2.0 license. It is designed to enable massive network automation through programmatic extension, while still supporting standard management interfaces and protocols (e.g. NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag). In addition, it is designed to support distribution across multiple physical servers similar to VMware's vNetwork distributed vswitch or Cisco's Nexus 1000V.

=== Features ===

The current stable release of Open vSwitch (version 1.4.0) supports the following features:

* Visibility into inter-VM communication via NetFlow, sFlow(R), SPAN, RSPAN, and GRE-tunneled mirrors
* LACP (IEEE 802.1AX-2008)
* Standard 802.1Q VLAN model with trunking
* A subset of 802.1ag CCM link monitoring
* STP (IEEE 802.1D-1998)
* Fine-grained min/max rate QoS
* Support for HFSC qdisc
* Per-VM interface traffic policing
* NIC bonding with source-MAC load balancing, active backup, and L4 hashing
* OpenFlow protocol support (including many extensions for virtualization)
* IPv6 support
* Multiple tunneling protocols (Ethernet over GRE, CAPWAP, IPsec, GRE over IPsec)
* Remote configuration protocol with local python bindings
* Compatibility layer for the Linux bridging code
* Kernel and user-space forwarding engine options
* Multi-table forwarding pipeline with flow-caching engine
* Forwarding layer abstraction to ease porting to new software and hardware platforms

== Configuring Open vSwitch ==

For kernel versions older than 3.3, Open vSwitch needs to be compiled with its kernel modules (the ''modules'' USE flag). Since kernel 3.3.0 it is included in the mainline kernel as a module named "Open vSwitch", which can be found under '''Networking Support -> Networking Options -> Open vSwitch'''. Then just emerge openvswitch:

<pre>
# emerge -avt openvswitch
</pre>
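Before any ovs-vsctl commands will work, the kernel module must be loaded and the OVS daemons started. A minimal sketch, assuming the in-tree module name and the init scripts shipped by the ebuild (both of which can vary with the openvswitch version):
<pre>
# modprobe openvswitch
# /etc/init.d/ovsdb-server start
# /etc/init.d/ovs-vswitchd start
</pre>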

== Using Open vSwitch ==

These configurations are taken from the Open vSwitch website at http://openvswitch.org and adjusted to Funtoo's needs.

=== VLANs ===

==== Setup ====

* Two Physical Networks
** Data Network: Ethernet network for VM data traffic, which will carry VLAN-tagged traffic between VMs. Your physical switch(es) must be capable of forwarding VLAN-tagged traffic and the physical switch ports should be VLAN trunks (usually this is the default behavior; configuring your physical switching hardware is beyond the scope of this document).
** Management Network: This network is not strictly required, but it is a simple way to give the physical host an IP address for remote access, since an IP address cannot be assigned directly to eth0.
* Two Physical Hosts: Host1 and Host2, both running Open vSwitch. Each host has two NICs:
** eth0 is connected to the Data Network. No IP address can be assigned on eth0.
** eth1 is connected to the Management Network (if necessary). eth1 has an IP address that is used to reach the physical host for management.
* Four VMs: VM1 and VM2 run on Host1; VM3 and VM4 run on Host2. Each VM has a single interface that appears as a Linux device (e.g., "tap0") on the physical host. (Note: for Xen/XenServer, VM interfaces appear as Linux devices with names like "vif1.0".)
[[image:2host-4vm.png]]

==== Goal ====

Isolate the VMs using VLANs on the Data Network: VLAN1 carries VM1 and VM3; VLAN2 carries VM2 and VM4.

==== Configuration ====

Perform the following configuration on Host1:

# Create an OVS bridge: <pre>ovs-vsctl add-br br0</pre>
# Add eth0 to the bridge (by default, all OVS ports are VLAN trunks, so eth0 will pass all VLANs): <pre>ovs-vsctl add-port br0 eth0</pre>
# Add VM1 as an "access port" on VLAN1: <pre>ovs-vsctl add-port br0 tap0 tag=1</pre>
# Add VM2 (on its own tap device) on VLAN2: <pre>ovs-vsctl add-port br0 tap1 tag=2</pre>

On Host2, repeat the same configuration to set up a bridge with eth0 as a trunk:

<pre>
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0
</pre>
# Add VM3 to VLAN1: <pre>ovs-vsctl add-port br0 tap0 tag=1</pre>
# Add VM4 to VLAN2: <pre>ovs-vsctl add-port br0 tap1 tag=2</pre>
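At any point you can verify the resulting bridges, ports, and VLAN tags with:
<pre>
# ovs-vsctl show
</pre>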

=== sFlow ===

This section sets up VM traffic monitoring using sFlow.

==== Setup ====

* Two Physical Networks
** Data Network: Ethernet network for VM data traffic.
** Management Network: This network must exist, as it is used to send sFlow data from the agent to the remote collector.
* Two Physical Hosts
** Host1 runs Open vSwitch and has two NICs:
*** eth0 is connected to the Data Network. No IP address can be assigned on eth0.
*** eth1 is connected to the Management Network. eth1 has an IP address for management traffic (including sFlow).
** Monitoring Host can be any computer that runs the sFlow collector. Here we use [http://www.inmon.com/products/sFlowTrend.php sFlowTrend], a free, simple cross-platform Java sFlow collector; other sFlow collectors should work equally well.
*** eth0 is connected to the Management Network. eth0 has an IP address that can reach Host1.
* Two VMs: VM1 and VM2 run on Host1. Each VM has a single interface that appears as a Linux device (e.g., "tap0") on the physical host. (Note: same for Xen/XenServer as in the VLANs section.)
[[image:sflow-setup.png]]

==== Goal ====

Monitor traffic sent to/from VM1 and VM2 on the Data network using an sFlow collector.

==== Configuration ====

Define the following configuration values in your shell environment. The default port for sFlowTrend is 6343. Set your own collector IP address in place of 10.0.0.1. Setting AGENT_IP to eth1 tells the sFlow agent to send traffic from eth1's IP address. The remaining values control the frequency and type of packet sampling that sFlow performs.

<pre>
# export COLLECTOR_IP=10.0.0.1
# export COLLECTOR_PORT=6343
# export AGENT_IP=eth1
# export HEADER_BYTES=128
# export SAMPLING_N=64
# export POLLING_SECS=10
</pre>

Run the following command to create an sFlow configuration and attach it to bridge br0:

<pre>
ovs-vsctl -- --id=@sflow create sflow agent=${AGENT_IP} target=\"${COLLECTOR_IP}:${COLLECTOR_PORT}\" header=${HEADER_BYTES} sampling=${SAMPLING_N} polling=${POLLING_SECS} -- set bridge br0 sflow=@sflow
</pre>

That is all. To configure sFlow on additional bridges, just replace "br0" in the above command with a different bridge name. To remove the sFlow configuration from a bridge (in this case, br0), run the following, where $SFLOWUUID holds the UUID of the sFlow record attached to the bridge:

<pre>
ovs-vsctl remove bridge br0 sflow $SFLOWUUID
</pre>
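The text above does not say where $SFLOWUUID comes from; one way to obtain it (our suggestion) is to read the sFlow record's UUID back from the bridge:
<pre>
# SFLOWUUID=$(ovs-vsctl get bridge br0 sflow)
</pre>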

To see all current sets of sFlow configuration parameters, run:

<pre>
ovs-vsctl list sflow
</pre>

=== QoS Rate-limiting ===