Traffic Control
== Introduction ==
Linux's traffic control functionality offers many capabilities for influencing the rate of flow, as well as the latency, of network traffic: primarily outgoing traffic, but in some cases incoming traffic as well. It is designed as a "construction kit" rather than a turn-key system, in which complex network traffic policing and shaping decisions can be made using a variety of algorithms. The Linux traffic control code is also often used in academia for research purposes, where it can be a useful mechanism to simulate and explore the impact of a variety of different network behaviors. See [http://www.linuxfoundation.org/collaborate/workgroups/networking/netem netem] for an example of a simulation framework that can be used for this purpose.

Of course, Linux traffic control can also be extremely useful in an IT context, and this document is intended to focus on the practical, useful applications of Linux traffic control, where these capabilities can be applied to solve problems that are often experienced on modern networks.

== Incoming and Outgoing Traffic ==

One common use of Linux traffic control is to configure a Linux system as a router or bridge, sitting between two networks, or between the "inside" of the network and the real router, so that it can shape traffic going to local machines as well as out to the Internet. This provides a way to prioritize, shape and police both incoming (from the Internet) and outgoing (from local machines) network traffic. It is easiest to create traffic control rules for traffic flowing ''out'' of an interface, since we can control when the system ''sends'' data; controlling when we ''receive'' data requires an additional ''intermediate queue'' to be created to buffer incoming data. When a Linux system is configured as a firewall or router with a physical interface for each part of the network, every flow we care about leaves ''out'' of one of those interfaces, so we can avoid using intermediate queues.
 
A simple way to set up a layer 2 bridge using Linux involves creating a bridge device with <tt>brctl</tt>, adding two Ethernet ports to this bridge (again using <tt>brctl</tt>), and then applying prioritization, shaping and policing rules to both interfaces. The rules will apply to ''outgoing'' traffic on each interface. One physical interface will be connected to an upstream router on the same network, while the other will be connected to a layer 2 access switch to which local machines are connected. This allows powerful egress shaping policies to be created on both interfaces, to control the flows in and out of the network.
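As an illustrative sketch of the setup described above (the interface names <tt>eth0</tt>, <tt>eth1</tt> and bridge name <tt>br0</tt> are assumptions; adjust them to your hardware):

<source lang="bash">
# create the bridge device
brctl addbr br0
# add the port facing the upstream router
brctl addif br0 eth0
# add the port facing the local layer 2 access switch
brctl addif br0 eth1
# bring the bridge up
ip link set dev br0 up
# egress shaping rules (tc) are then attached to eth0 and eth1 individually
</source>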
 
== Recommended Resources ==
 
Resources you should take a look at, in order:
 
* [http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm HTB documentation] by Martin Devera. Best way to create different priority classes and bandwidth allocations.
* [http://www.opalsoft.net/qos/DS.htm Differentiated Services On Linux HOWTO] by Leonardo Balliache. Good general docs.
* [http://blog.edseek.com/~jasonb/articles/traffic_shaping/index.html A Practical Guide to Linux Traffic Control] by Jason Boxman. Good general docs.
* [http://www.linuxfoundation.org/collaborate/workgroups/networking/ifb IFB - replacement for Linux IMQ], with examples. This is the official best way to do ''inbound'' traffic control, when you don't have dedicated in/out interfaces.
* [http://seclists.org/fulldisclosure/2006/Feb/702 Use of iptables hashlimit] - Great functionality in iptables. There's a hashlimit example below as well.
 
Related Interesting Links:
 
* [http://wiki.secondlife.com/wiki/BLT Second Life Bandwidth Testing Protocol] - example of Netem
* [http://www.29west.com/docs/THPM/udp-buffer-sizing.html UDP Buffer Sizing], part of [http://www.29west.com/docs/THPM/index.html Topics in High Performance Messaging]
 
== Recommended Approaches ==
 
Daniel Robbins has had very good results with the [http://luxik.cdi.cz/~devik/qos/htb/ HTB queuing discipline]: it has very good features and [http://luxik.cdi.cz/~devik/qos/htb/manual/userg.htm very good documentation], which is just as important, and it is designed to deliver useful results in a production environment. And it works. If you use traffic control under Funtoo Linux, please use the HTB queuing discipline as the root queuing discipline; you will get good results in very little time. Avoid using any other queuing discipline as the ''root'' queuing discipline on any interface under Funtoo Linux. If you are creating a tree of classes and qdiscs, HTB should be at the top, and you should avoid hanging classes under any other qdisc unless you have plenty of time to experiment and verify that your QoS rules are working as expected. Please see [[#State_of_the_Code|State of the Code]] for more information on what Daniel Robbins considers to be the current state of the traffic control implementation in Linux.
 
== State of the Code ==
 
If you are using enterprise kernels, especially any RHEL5-based kernels, you must be aware that the traffic control code in these kernels is about 5 years old and contains many significant bugs. In general, it is possible to avoid these bugs by using HTB as your root queueing discipline and testing things carefully to ensure that you are getting the proper behavior. The <tt>prio</tt> queueing discipline is known to not work reliably in RHEL5 kernels. See [[Broken Traffic Control]] for more information on known bugs with older kernels.
 
If you are using a more modern kernel, Linux traffic control should be fairly robust. The examples below should work with RHEL5 as well as newer kernels.
 
== Inspect Your Rules ==
 
If you are implementing Linux traffic control, you should be running these commands frequently to monitor the behavior of your queuing discipline. Replace <tt>$wanif</tt> with the actual network interface name.
 
<source lang="bash">
tc -s qdisc ls dev $wanif
tc -s class ls dev $wanif
</source>
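To watch these counters update in real time, the same commands can be wrapped in the standard <tt>watch</tt> utility (here assuming <tt>eth0</tt> as the interface):

<source lang="bash">
# redisplay qdisc and class statistics every second
watch -n 1 "tc -s qdisc ls dev eth0; echo; tc -s class ls dev eth0"
</source>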
 
== Matching ==
 
Here are some examples you can use as the basis for your own filters/classifiers:
 
# <tt>protocol arp u32 match u32 0 0</tt> - match ARP packets
# <tt>protocol ip u32 match ip protocol 0x11 0xff</tt> - match UDP packets
# <tt>protocol ip u32 match ip protocol 17 0xff</tt> - (also) match UDP packets
# <tt>protocol ip u32 match ip protocol 0x6 0xff</tt> - match TCP packets
# <tt>protocol ip u32 match ip protocol 1 0xff</tt> - match ICMP (ping) packets
# <tt>protocol ip u32 match ip dst 4.3.2.1/32</tt> - match all IP traffic headed for IP 4.3.2.1
# <tt>protocol ip u32 match ip src 4.3.2.1/32 match ip sport 80 0xffff</tt> - match all IP traffic from 4.3.2.1 port 80
# <tt>protocol ip u32 match ip sport 53 0xffff</tt> - match originating DNS (both TCP and UDP)
# <tt>protocol ip u32 match ip dport 53 0xffff</tt> - match response DNS (both TCP and UDP)
# <tt>protocol ip u32 match ip protocol 6 0xff match u8 0x10 0xff at nexthdr+13</tt> - match packets with ACK bit set
# <tt>protocol ip u32 match ip protocol 6 0xff match u8 0x10 0xff at nexthdr+13 match u16 0x0000 0xffc0 at 2</tt> - packets less than 64 bytes in size with ACK bit set
# <tt>protocol ip u32 match ip tos 0x10 0xff</tt> - match IP packets with "type of service" set to "Minimize delay"/"Interactive"
# <tt>protocol ip u32 match ip tos 0x08 0xff</tt> - match IP packets with "type of service" set to "Maximize throughput"/"Bulk" (see "QDISC PARAMETERS" in <tt>tc-prio</tt> man page)
# <tt>protocol ip u32 match tcp dport 53 0xffff match ip protocol 0x6 0xff</tt> - match TCP packets heading for dest. port 53 (may not work)
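To show how these matchers plug into a complete command, here is a sketch attaching two of them as filters, assuming an HTB qdisc with handle <tt>1:</tt> and a destination class <tt>1:10</tt> already exist on <tt>$wanif</tt> (as in the sample script in the next section):

<source lang="bash">
# send UDP packets (IP protocol 17) to class 1:10
tc filter add dev $wanif protocol ip parent 1:0 prio 1 u32 match ip protocol 17 0xff flowid 1:10
# send all ARP packets to class 1:10 as well
tc filter add dev $wanif protocol arp parent 1:0 prio 2 u32 match u32 0 0 flowid 1:10
</source>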
 
== Sample Traffic Control Code ==
 
<source lang="bash">
modemif=eth4
 
iptables -t mangle -A POSTROUTING -o $modemif -p tcp -m tos --tos Minimize-Delay -j CLASSIFY --set-class 1:10
iptables -t mangle -A POSTROUTING -o $modemif -p tcp --dport 53 -j CLASSIFY --set-class 1:10
iptables -t mangle -A POSTROUTING -o $modemif -p tcp --dport 80 -j CLASSIFY --set-class 1:10
iptables -t mangle -A POSTROUTING -o $modemif -p tcp --dport 443 -j CLASSIFY --set-class 1:10
 
tc qdisc add dev $modemif root handle 1: htb default 12
tc class add dev $modemif parent 1: classid 1:1 htb rate 1500kbit ceil 1500kbit burst 10k
tc class add dev $modemif parent 1:1 classid 1:10 htb rate 700kbit ceil 1500kbit prio 1 burst 10k
tc class add dev $modemif parent 1:1 classid 1:12 htb rate 800kbit ceil 800kbit prio 2
tc filter add dev $modemif protocol ip parent 1:0 prio 1 u32 match ip protocol 0x11 0xff flowid 1:10
tc qdisc add dev $modemif parent 1:10 handle 20: sfq perturb 10
tc qdisc add dev $modemif parent 1:12 handle 30: sfq perturb 10
</source>
 
The code above is a working traffic control script, compatible even with RHEL5 kernels, for a 1500kbps outbound link (T1, cable or similar). In this example, <tt>eth4</tt> is part of a bridge, but the code should work regardless of whether <tt>eth4</tt> is in a bridge or not; just make sure that <tt>modemif</tt> is set to the interface on which traffic is flowing ''out'' and to which you wish to apply traffic control.
 
=== <tt>tc</tt> code walkthrough ===
 
This script uses the <tt>tc</tt> command to create two priority classes, 1:10 and 1:12. By default, all traffic goes into the low-priority class, 1:12. 1:10 has priority over 1:12 (<tt>prio 1</tt> vs. <tt>prio 2</tt>), so if there is any traffic in 1:10 ready to be sent, it will be sent ahead of 1:12. 1:10 has a rate of 700kbit but can use up to the full outbound bandwidth of 1500kbit by borrowing from 1:12.
 
UDP traffic (traffic that matches <tt>ip protocol 0x11 0xff</tt>) will be put in the high priority class 1:10. This can be good for things like FPS games, to ensure that latency is low and not drowned out by lower-priority traffic.
 
If we stopped here, however, we would get somewhat worse results than if we had not used <tt>tc</tt> at all. We have basically created two outgoing sub-channels of different priorities. The higher-priority class ''can'' drown out the lower-priority class, but this is intentional, so it isn't the issue; in this case we ''want'' that behavior. The problem is that both the high-priority and low-priority classes can be dominated by high-bandwidth flows, causing other traffic flows of the same priority to be drowned out. To fix this, two <tt>sfq</tt> queuing disciplines are added, one to each of the high- and low-priority classes; they ensure that individual traffic flows are identified and each given a fair shot at sending data out of their respective classes. This prevents starvation within the classes themselves.
 
=== <tt>iptables</tt> code walkthrough ===
 
First note that we are adding netfilter rules to the <tt>POSTROUTING</tt> chain, in the <tt>mangle</tt> table. This table allows us to modify the packets ''right before'' they are queued to be sent out of an interface, which is exactly what we want. At this point, these packets could have been locally-generated or forwarded -- as long as they are on their way to going out of <tt>modemif</tt> (eth4 in this case), the <tt>mangle</tt> <tt>POSTROUTING</tt> chain will see them and we can classify them and perform other useful tweaks.
 
The iptables code puts all traffic with the "minimize-delay" flag (interactive ssh traffic, for example) in the high priority traffic class. In addition, all HTTP, HTTPS and DNS TCP traffic will be classified as high-priority. Remember that all UDP traffic is being classified as high priority via the <tt>tc</tt> rule described above, so this will take care of DNS UDP traffic automatically.
 
=== Further optimizations ===
 
==== SSH ====
 
<source lang="bash">
iptables -t mangle -N tosfix
iptables -t mangle -A tosfix -p tcp -m length --length 0:512 -j RETURN
#allow screen redraws under interactive SSH sessions to be fast:
iptables -t mangle -A tosfix -m hashlimit --hashlimit 20/sec --hashlimit-burst 20 \
--hashlimit-mode srcip,srcport,dstip,dstport --hashlimit-name minlat -j RETURN
iptables -t mangle -A tosfix -j TOS --set-tos Maximize-Throughput
iptables -t mangle -A tosfix -j RETURN
 
iptables -t mangle -A POSTROUTING -p tcp -m tos --tos Minimize-Delay -j tosfix
</source>
 
To use this code, place it ''near the top of the file'', just below the <tt>modemif="eth4"</tt> line, but ''before'' the main <tt>iptables</tt> and <tt>tc</tt> rules. These rules will apply to ''all'' packets about to be queued to any interface, but this is not necessarily a bad thing, since the TOS values being set are not specific to our traffic control functionality. To make these rules specific to <tt>modemif</tt>, add "-o $modemif" after "-A POSTROUTING" on the last line, above. As-is, the rules above will set the TOS field on all packets flowing out of all interfaces, but the traffic control rules will only take effect for <tt>modemif</tt>, because they are only configured for that interface.
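For example, restricting the chain to <tt>$modemif</tt> as suggested, the last line of the block above would become:

<source lang="bash">
# only packets leaving $modemif are sent through the tosfix chain
iptables -t mangle -A POSTROUTING -o $modemif -p tcp -m tos --tos Minimize-Delay -j tosfix
</source>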
 
SSH is a tricky protocol. By default, all outgoing SSH traffic is classified as "minimize-delay" traffic, which will cause it all to flow into our high-priority class, even if it is a bulk <tt>scp</tt> transfer running in the background. This code will grab all "minimize-delay" traffic, such as SSH and telnet, and route it through some special rules. Any individual keystrokes (small packets) will be left as "minimize-delay" packets. For anything else, we run the <tt>hashlimit</tt> iptables module, which identifies individual outbound flows and allows small bursts of traffic (even big packets) to remain "minimize-delay" packets. These settings have been specifically tuned so that most GNU <tt>screen</tt> window changes (^A^N) when logging into your server(s) remotely will be fast. Any traffic over these burst limits will be reclassified as "maximize-throughput" and thus will drop to our lower-priority class 1:12. Combined with the traffic control rules, this will allow you to have very responsive SSH sessions into your servers, even if they are doing some kind of bulk outbound copy, like rsync over SSH.
 
Code in our main <tt>iptables</tt> rules will ensure that any "minimize-delay" traffic is tagged to be in the high-priority 1:10 class.
 
What this does is keep interactive SSH and telnet keystrokes in the high-priority class, allow GNU screen full redraws and reasonable full-screen editor scrolling to remain in the high-priority class, while forcing bulk transfers into the lower-priority class.
 
==== ACKs ====
 
<source lang="bash">
iptables -t mangle -N ack
iptables -t mangle -A ack -m tos ! --tos Normal-Service -j RETURN
iptables -t mangle -A ack -p tcp -m length --length 0:128 -j TOS --set-tos Minimize-Delay
iptables -t mangle -A ack -p tcp -m length --length 128: -j TOS --set-tos Maximize-Throughput
iptables -t mangle -A ack -j RETURN
 
iptables -t mangle -A POSTROUTING -p tcp -m tcp --tcp-flags SYN,RST,ACK ACK -j ack
</source>
 
To use this code, place it ''near the top of the file'', just below the <tt>modemif="eth4"</tt> line, but ''before'' the main <tt>iptables</tt> and <tt>tc</tt> rules.
 
ACK optimization is another useful thing to do. If we prioritize small ACKs heading out to the modem, it will allow TCP traffic to flow more smoothly without unnecessary delay.  The lines above accomplish this.
 
This code basically sets the "minimize-delay" flag on small ACKs. Code in our main <tt>iptables</tt> rules will then tag these packets so they enter high-priority traffic class 1:10.
 
== Other Links of Interest ==
* http://manpages.ubuntu.com/manpages/maverick/en/man8/ufw.8.html
* https://help.ubuntu.com/community/UFW
 
[[Category:Investigations]]
[[Category:Articles]]
[[Category:Featured]]
[[Category:Networking]]
