==Entropy Package Management in Gentoo==
WARNING: Work in progress. Do not edit this article unless you are the original author.

Entropy Package Manager is written by Fabio Erculiani of Sabayon GNU/Linux as an extension to Portage, allowing binary packages to be installed just as in other binary-based distributions. The package manager synchronises itself with Portage automatically once you install Entropy binary packages; Portage, on the other hand, must be synchronised into Entropy manually in order for Entropy to know which packages you have emerged.

Fully written in Python, it is a stable application with many binary-oriented features and options, including a complete set of repository-creation and Entropy server features fully based on Portage ebuild packaging. Consequently, developers '''must''' emerge packages in order to create Entropy packages (there is no other way); the procedure is detailed in this tutorial.

==Creating your own Entropy repository==
 
  
First of all, you must have the package '''entropy-server''' installed. It provides '''/etc/entropy/server.conf''', which contains the following most important configuration lines:
  
  community-mode = enable < ''if you wish to manage more than one repository on one system''
  community-mode = disable < ''if you want a self-sustainable dependency repository''
  ...(descriptions)  
  default-repository = yourreponame
  ...(descriptions) 
  #example: #=> repository = myserverrepo|My Server Repository|ftp://user:pass@111.111.111.111/ ssh://username@host:~user/path:port ''just an example of a repository definition''
  
  repository = yourreponame|My Server Repository|ftp://user:pass@111.111.111.111/ ssh://username@host:~user/path:port

You do not necessarily need to bother with the rest of them. As in the example, you need either an '''SSH''' or an '''FTP''' server with upload permissions. The structure of the repository should look like this:
  http://bpr.bluepink.ro/~rogentos/entropy/
  
'''P.S.: I consider this step the most important one, since everybody installs the package first and only then reads the article/tutorial on how to use it :)'''

==Installation and package management instructions==
Start by emerging the following packages:
  emerge sys-apps/entropy equo entropy-server -vp

It should produce something like this: http://pastebin.com/cy7X38ia (public and permanent pastebin). Note: these packages were built after a five-minute Funtoo tar.gz unpack and chroot, followed, of course, by an emerge --sync and an eselect profile set.

You should now have a working '''equo''', so run '''equo --help'''. It should display all the available commands:
  blacknoxis / # equo --help
  usage: equo [-h] [--color]
  (...)
  
At this point you should have a working repository and an '''SSH/FTP''' server that '''/etc/entropy/server.conf''' points to. Start learning the commands as well.
  
If you look into '''/etc/entropy/repositories.conf.d/''' you will find the repository definitions in the ''entropy_*'' files. You can use them as a template to add your own repository for client use.

==Working with EIT==
First things first, you must regenerate the Entropy database by running:
  equo rescue generate
 
  
EIT is the tool that actually packages already-emerged packages and introduces them into your remote repository. First, initialise the repository (after configuring your /etc/entropy/server.conf) with:
  eit init reponame

Here is a demonstration of adding a package to a repository using these three commands:
  emerge packagename
  eit add packagename
  eit push

A full demonstration is at: http://pastebin.com/k3PNpPdD (public and permanent pastebin).

==Useful and important Tips and Tricks==
As mentioned above, Entropy does not recognize Portage installs unless you take one small extra step. Here is the trick to make Entropy keep your emerged package with your options. First, emerge a package, any package, then run:
  equo rescue spmsync --ask

(Accept, but be careful: it sometimes picks up something that you do not want.)

This command makes Entropy aware that you installed/compiled something with Portage, but Entropy will still try to upgrade it and replace it with the generic Entropy repository version. Therefore, you must change the Entropy client configuration in '''/etc/entropy/client.conf''':
  ignore-spm-downgrades = enable < ''keep this enabled''

After this, '''equo update''' should do the trick and '''equo upgrade''' will not try to overwrite your Portage-emerged packages.

Reference documentation is at http://wiki.sabayon.org/ for Equo/Entropy, and a short, older write-up on EIT is at http://lxnay.wordpress.com/2011/10/18/eit-the-stupid-package-tracker-reinvented/

If you want a nice, eye-catching GUI installer for your Entropy repository, just install Rigo with either equo or Portage. If you want to test the concept further, use emerge to install Rigo and then, of course, run '''equo rescue spmsync --ask'''. In fact, this is '''mandatory''': after each emerge you should run '''equo rescue spmsync --ask''' so that Entropy is at least aware of your Portage changes.

= IPv4 calculations =

WARNING: Work in progress. Do not edit this article unless you are the original author.


= Refresh on TCP/IP model =

When the ARPANET (a packet-oriented network) was born in those good old seventies, engineers had to solve the problem of making computers able to exchange packets of information over a network, and in 1974 they invented something you are using right now to view this page: TCP/IP! TCP/IP is a collection of network protocols organized as a stack. Just as your boss does not do everything in the company but delegates to lower levels, which in turn delegate to even lower levels, no protocol in the TCP/IP suite takes on every responsibility: the protocols work together in a hierarchical and cooperative manner. A layer of the TCP/IP stack knows what its immediate subordinate can do for it, trusts that the job will be done the right way, and does not worry about how it is done. Likewise, the only concern of a given layer is to fulfil its own duties and deliver the service requested by the layer above; it does not have to worry about the ultimate goal of what the upper layers do.

<illustration goes here TCP/IP model>

The above illustration looks horribly familiar: yes, it looks like the good old OSI model. Indeed, it is a tailored view of the original OSI model and it works the exact same way: data sent by an application A1 (residing on computer C1) to another application A2 (residing on computer C2) goes down C1's TCP/IP stack (top to bottom) and reaches C1's lower layers, which take responsibility for moving the bits from C1 to C2 over a physical link (electrical or light pulses, radio waves... sorry, no quantum mechanics yet). C2's lower layers receive the bits sent by C1 and pass what has been received up C2's TCP/IP stack (bottom to top), which delivers the data to A2. If C1 and C2 are not on the same network, the process is a bit more complex because it involves relays (routers), but the overall idea remains the same. There are no shortcuts in the process: both TCP/IP stacks are crossed in their entirety, top to bottom for the sender and bottom to top for the receiver. The transport process itself is absolutely transparent from an application's point of view: A1 knows it can rely on the TCP/IP stack to transmit data to A2; ''how'' the data is transmitted is not its problem, A1 just assumes the data can be transmitted by some means. The TCP/IP stack is likewise only loosely coupled to any particular network technology, because its frontier is precisely the physical transportation of bits over a medium: just as A1 does not care how the TCP/IP stack moves the data from one computer to another, the TCP/IP stack does not care about the details of how the bits are physically moved, and it can therefore work with any network technology, be it Ethernet, Token Ring or FDDI, for example.
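
To make the layering idea concrete, here is a toy sketch (ours, in Python; an illustration of the encapsulation principle, not a real protocol implementation): each layer simply prepends its own header to whatever it receives from the layer above.

 # Toy encapsulation (sender side): each layer adds its own header in
 # front of the payload handed down by the layer above.
 def application(data: bytes) -> bytes:
     return b"APP|" + data       # an application-level message
 def transport(segment: bytes) -> bytes:
     return b"TCP|" + segment    # a real TCP header would carry ports, sequence numbers...
 def internet(packet: bytes) -> bytes:
     return b"IP|" + packet      # a real IP header carries source/destination addresses
 def link(frame: bytes) -> bytes:
     return b"ETH|" + frame      # a real frame adds MAC addresses and a checksum
 # A1's data crosses the whole stack before touching the wire:
 wire = link(internet(transport(application(b"hello A2"))))
 print(wire)                     # b'ETH|IP|TCP|APP|hello A2'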

= The Internet layer =

This article focuses on the calculation of the addresses used at the ''Internet layer'', so let's leave aside the gory details of how the TCP/IP stack works (you can find an extremely detailed discussion in [[How the TCP/IP stack works]]... to be written...). From here on, we assume you have a good general understanding of its functionality and of how a network transmission works. As you know, the ''Internet'' layer is responsible for handling the logical addressing of a TCP segment (or UDP datagram) that either has to be transmitted over the network to a remote computer or has been received from the network from a remote computer. That layer is governed by a strict set of rules called the ''Internet Protocol'', or ''IP'', originally specified by [RFC 791] in September 1981. What is pretty amazing about IP is that, although its original RFC has been amended by several others since 1981, its specification remains absolutely valid! If you have a look at [RFC 791], you won't see "obsoleted". Sure, IPv4 reached its limits in this first half of the 21st century, but it will remain in the IT landscape for years, not to say decades (you know, the COBOL language...). To finish with the historical details, you might find it interesting to know that TCP/IP was not the original protocol suite used on the ARPANET: in 1983 it superseded another protocol suite, the [http://en.wikipedia.org/wiki/Network_Control_Program Network Control Program]. NCP looks, from our point of view, quite prehistoric, but it is of great importance because it established a lot of concepts still in use today: PDUs, splitting an address into several components, connection management and so on all come from NCP. A historical reward for those who are still reading this long paragraph: first, even an individual computer user was addressable in NCP messages; second, even in 1970 engineers were already concerned about network congestion issues ([http://www.cs.utexas.edu/users/chris/think/ARPANET/Timeline this page]).

Let's go back to those good old seventies: the engineers who designed the Internet Protocol retained a 32-bit addressing scheme for IP; after all, the ARPANET would never need to address billions of hosts! If you look at ARPANET diagrams of the era, the network counted less than 100 hosts: who would ''ever'' need millions of addresses, after all? So, in theory, those 32 bits give us room for around 4 billion computers on the network: arbitrarily decide that the very first connected computer is given the number "0", the second one "1", the third one "2" and so on, until the address pool is exhausted at number 4294967295, giving no more than 4294967296 (2^32) computers on that network, since no number can be duplicated.
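
To make those numbers tangible, here is a minimal sketch (ours, in Python) showing that a dotted-quad IPv4 address is nothing more than a notation for a single 32-bit integer:

 # A 32-bit IPv4 address is an integer between 0 and 2**32 - 1; the
 # dotted-quad notation merely prints it one byte (8 bits) at a time.
 def quad_to_int(quad: str) -> int:
     a, b, c, d = (int(x) for x in quad.split("."))
     return (a << 24) | (b << 16) | (c << 8) | d
 def int_to_quad(n: int) -> str:
     return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))
 print(quad_to_int("192.168.1.10"))  # 3232235786
 print(int_to_quad(4294967295))      # 255.255.255.255, the very last address
 print(2 ** 32)                      # 4294967296 possible numbers in total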

= Classful and classless networks =

Those addresses follow the logic below:

{| class="wikitable"
|-
| colspan="2" | '''32 bits (fixed length)'''
|-
| '''Network''' part (variable length of N bits) || '''Host''' part (length: 32 - N bits)
|}
* The network address: this part is uniquely assigned amongst all of the organizations in the world (i.e. no two organizations in the world can hold the same network part)
* The host address: unique within a given network part

So in theory we can have something like this (remember, the Internet is by nature not a single unique network but a collection of networks):

* Network 1 / Host 1
* Network 1 / Host 2
* Network 2 / Host 1
* and so on (see the sketch below)
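
In code, extracting the two parts is simple bit arithmetic; here is a small sketch (ours, in Python, with an arbitrary choice of an 8-bit network part):

 # Split a 32-bit address into network and host parts, assuming the
 # first N bits belong to the network (N = 8 is an arbitrary choice here).
 N = 8
 address = 0x0A010203                           # 10.1.2.3 as a 32-bit integer
 host_bits = 32 - N
 network_part = address >> host_bits            # upper N bits      -> 10
 host_part = address & ((1 << host_bits) - 1)   # lower 32 - N bits -> 66051
 print(network_part, host_part)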


Just as your birthday cake is divided into larger or smaller parts depending on your guests' appetite, the IPv4 address space has been divided into larger or smaller parts, simply because organizations need more or fewer computers on their networks. How is this possible? By dedicating a variable number of bits to the network part! Do you see the consequence? An IPv4 address being '''always''' 32 bits wide, the more bits you dedicate to the network part, the fewer you have left for the host part, and vice versa; it is always a trade-off. Basically, having more bits in:

* the network part: means more possible networks, at the cost of fewer hosts per network
* the host part: means fewer networks, but more hosts per network

It might sound a bit abstract, so let's take an example: imagine we dedicate only 8 bits to the network part and the remaining 24 to the host part. What happens? With only 8 bits for the network part, at most 256 (2^8) distinct networks can exist, but each of them can number around 16.7 million (2^24) hosts.
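
The arithmetic behind this trade-off is short enough to verify directly (a sketch of ours, in Python):

 # For an N-bit network part, 32-bit addressing yields 2**N networks,
 # each with 2**(32 - N) host numbers: one grows as the other shrinks.
 for n_bits in (8, 16, 24):
     networks = 2 ** n_bits
     hosts = 2 ** (32 - n_bits)
     print(f"{n_bits:2} network bits: {networks:>10} networks x {hosts:>10} hosts")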


Is the network part assigned by each organization to itself? Of course not! Assignments are coordinated at the worldwide level by what we call Regional Internet Registries, or RIRs, which can in turn delegate assignments to third parties located within their geographic jurisdiction. The latter are called Local Internet Registries, or LIRs (the system is detailed in RFC 7020). All of the RIRs are themselves placed under the responsibility of the now well-known Internet Assigned Numbers Authority, or [http://www.iana.org IANA]. As of 2014, five RIRs exist:

* ARIN (American Registry for Internet Numbers): covers North America
* LACNIC (Latin America and Caribbean Network Information Centre): covers South America and the Caribbean
* RIPE NCC (Réseaux IP Européens Network Coordination Centre): covers Europe, Russia and the Middle East
* AFRINIC (African Network Information Centre): covers the whole of Africa
* APNIC (Asia-Pacific Network Information Centre): covers Oceania and the Far East