IPv4 calculations

Warning

Work in progress. Do not edit this article unless you are the original author.

Refresh on TCP/IP model

When the ARPANet (a packet-oriented network) was born in those good old seventies, engineers had to solve the problem of letting computers exchange packets of information over the network, and in 1974 they invented something you are now using to view this page: TCP/IP! TCP/IP is a collection of network protocols organized as a stack. Just as your boss does not do everything in the company but delegates to lower levels, which in turn delegate to even lower levels, no protocol in the TCP/IP suite takes on every responsibility: the protocols work together in a hierarchical and cooperative manner. A layer of the TCP/IP stack knows what its immediate subordinate can do for it, trusts that the job will be done the right way, and does not worry about how it gets done. Likewise, the only concern of a given layer is to fulfill its own duties and deliver the service requested by the layer above; it does not have to worry about the ultimate goal of what the upper layers do.

<illustration goes here TCP/IP model>

The above illustration looks horribly familiar: yes, it looks like the good old OSI model. Indeed it is a tailored view of the original OSI model and it works the exact same way: the data sent by an application A1 (residing on computer C1) to another application A2 (residing on computer C2) goes through C1's TCP/IP stack (from top to bottom) and reaches C1's lower layers, which take the responsibility of moving the bits from C1 to C2 over a physical link (electrical or light pulses, radio waves... sorry, no quantum mechanism yet). C2's lower layers receive the bits sent by C1 and pass what has been received up C2's TCP/IP stack (bottom to top), which hands the data to A2. If C1 and C2 are not on the same network the process is a bit more complex because it involves relays (routers), but the overall idea remains the same. Also, there are no shortcuts in the process: both TCP/IP stacks are crossed in their entirety, from top to bottom for the sender and from bottom to top for the receiver. The transportation process itself is absolutely transparent from an application's point of view: A1 knows it can rely on the TCP/IP stack to transmit some data to A2; how the data is transmitted is not its problem, A1 just assumes the data can be transmitted by some means. The TCP/IP stack is also loosely coupled to any particular network technology, because its frontier is precisely the physical transportation of bits over a medium, in the same way that A1 does not care how the TCP/IP stack will move the data from one computer to another. The TCP/IP stack itself does not care about the details of how the bits are physically moved, and thus it can work with any network technology, whether that technology is Ethernet, Token Ring or FDDI for example.

The Internet layer and the IP protocol

Since this article focuses on the calculation of addresses used at the Internet layer, let's leave aside the gory details of the TCP/IP stack (you can find an extremely detailed discussion in How the TCP/IP stack works... still to be written...). From here on, we assume you have a good general understanding of its functionality and of how a network transmission works. As you know, the Internet layer is responsible for handling the logical addressing of a TCP segment (or UDP datagram) that either has to be transmitted over the network to a remote computer or has been received from the network from a remote computer. That layer is governed by a strict set of rules called the Internet Protocol, or IP, originally specified in [RFC 791] in September 1981. What is pretty amazing about IP is that, although its original RFC has been amended by several others since 1981, its specification remains absolutely valid! If you have a look at [RFC 791] you won't see "obsoleted". Sure, IPv4 reached its limits in this first half of the XXIst century, but it will remain in the IT landscape for probably several years, not to say decades (you know, the COBOL language....). To finish with the historical details, you might find it interesting to know that TCP/IP was not the original protocol suite used on the ARPANet: in 1983 it superseded another protocol suite, the Network Control Program or NCP. NCP looks prehistoric from our point of view, yet it is of great importance as it established a lot of concepts still in use today: protocol data units, splitting an address into various components, the connection management concept (TCP) and so on all come from NCP. Historical reward for those who are still reading this long paragraph: first, even a user was addressable in NCP messages; second, even in 1970 the engineers were concerned about network congestion issues (this page) :-)

Enough of the historical background; packet network history is super interesting but would make this article just too long. So let's go back to the Internet Protocol! In those good old seventies, the engineers who designed the Internet Protocol retained a 32-bit addressing scheme. Why 32 bits and not 64 or 128? In that decade computers were rare, with very limited resources, and everyone at that time found that using 32 bits to express a computer's address on the network would be far more than enough: 2^32 gives 4,294,967,296, or roughly 4.3 billion (even brilliant visionaries like J.C.R. Licklider would probably never have imagined the growth and popularity of computer networks at that time).
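If you want to double-check that figure yourself, here is a minimal Python sketch (purely illustrative, not part of the original article):

  # The IPv4 address space: every address is 32 bits wide.
  ADDRESS_BITS = 32
  total_addresses = 2 ** ADDRESS_BITS
  print(f"{total_addresses:,}")  # prints 4,294,967,296 -- roughly 4.3 billion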

A story of cake

From the beginning, engineers were aware that, although their address pool was a priori gigantic, it had to be used wisely, and address pool depletion issues were always part of the landscape: between 1977 and 1984, the number of interconnected hosts grew from roughly a hundred to more than a thousand, and it was clear that this growth would not stop there. Indeed, even at that time the question was more when the pool would be depleted rather than if it would be depleted. Seen from those good old eighties, IPv4 pool depletion was not an immediate problem, although the issue started to become a bit more critical a decade later (in 1993 the IETF forecasted an IPv4 depletion between 2010 and 2017).

Just like you divide your birthday cake into slices of various sizes, depending on your guests' appetite, to minimize waste, the engineers chose to divide the initial IPv4 pool into several slices of various sizes; each of those slices was then exclusively assigned by the Government of the United States to a given organization that wanted to connect to the ARPAnet (ICANN/IANA and the Regional Internet Registries came much later, at the end of the nineties). How the assigned block was used by an organization was not a concern for the US Government: it assigned address blocks depending on how many hosts would be on an organization's network, not on how many hosts actually were on it.

Classful and classless networks

Okay, the ARPAnet became defunct after the split of MILNET in 1984, giving way to NSFNet, later replaced with the Internet as we still know it today, but the principle of exclusively assigning IPv4 blocks remained the same.

Those addresses follow the logic below:

  32 bits (fixed length)
  +-----------------------------------------+----------------------------------+
  | Network part (variable length: N bits)  | Host part (length: 32 - N bits)  |
  +-----------------------------------------+----------------------------------+

  • The network address: this part is uniquely assigned amongst all of the organizations in the world (i.e. no one else in the world can hold the same network part)
  • The host address: unique within a given network part

So in theory we can have something like this (remember that the very nature of an internet is not to be a single network, it has to be a collection of networks):

  • Network 1 Host 1
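
To make the "Network N, Host M" idea concrete, here is a minimal Python sketch (the helper name make_address and the 8-bit network split are illustrative assumptions, not something defined by this article) that assembles a 32-bit address from a network number and a host number:

  def make_address(network: int, host: int, network_bits: int) -> str:
      """Assemble a 32-bit IPv4 address from a network number and a host number.

      The network number occupies the top network_bits bits, the host number
      occupies the remaining 32 - network_bits bits.
      """
      host_bits = 32 - network_bits
      assert network < 2 ** network_bits and host < 2 ** host_bits
      value = (network << host_bits) | host
      # Render the 32-bit value in the usual dotted-quad notation.
      return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

  # "Network 1 Host 1", "Network 1 Host 2", "Network 2 Host 1"... with 8 network bits:
  for net, host in [(1, 1), (1, 2), (2, 1)]:
      print(f"Network {net} Host {host} -> {make_address(net, host, 8)}")
      # prints 1.0.0.1, then 1.0.0.2, then 2.0.0.1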


Just like your birthday cake is divided into bigger or smaller slices depending on your guests' appetite, the IPv4 address space has been divided into bigger or smaller parts because organizations need more or fewer computers on their networks. How is this made possible? Simply by dedicating a variable number of bits to the network part! Do you see the consequence? An IPv4 address being always 32 bits wide, the more bits you dedicate to the network part the fewer you have left for the host part, and vice versa; this is always a tradeoff. Basically, having more bits in:

  • the network part: means more possible networks at the cost of having fewer hosts per network
  • the host part: means fewer networks but more hosts per network

It might sound a bit abstract, but let's take an example: imagine we dedicate only 8 bits to the network part and the remaining 24 to the host part. What happens? First, we can only have 2^8 = 256 distinct networks; on the other hand, each of those networks can hold up to 2^24 = 16,777,216 hosts.
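
To put numbers on that tradeoff, here is a small Python sketch (illustrative only; the function name split_capacity is an assumption) that computes how many networks, and how many hosts per network, a given split allows:

  def split_capacity(network_bits: int) -> tuple[int, int]:
      """Return (number of possible networks, hosts per network) for a given split."""
      host_bits = 32 - network_bits
      return 2 ** network_bits, 2 ** host_bits

  for n in (8, 16, 24):
      networks, hosts = split_capacity(n)
      print(f"{n} network bits: {networks:,} networks of {hosts:,} addresses each")
  # 8 network bits:  256 networks of 16,777,216 addresses each
  # 16 network bits: 65,536 networks of 65,536 addresses each
  # 24 network bits: 16,777,216 networks of 256 addresses each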


Is the network part assigned by each organization to itself? Of course not! Assignments are coordinated at the worldwide level by what we call Regional Internet Registries, or RIRs, which, in turn, can delegate assignments to third parties located within their geographic jurisdiction. The latter are called Local Internet Registries, or LIRs (the system is detailed in RFC 7020). All of those RIRs are themselves placed under the responsibility of the now well-known Internet Assigned Numbers Authority, or IANA. As of 2014, five RIRs exist:

  • ARIN (American Registry for Internet Numbers): covers North America
  • LACNIC (Latin America and Caribbean Network Information Centre): covers South America and the Caribbean
  • RIPE NCC (Réseaux IP Européens Network Coordination Centre): covers Europe, Russia and the Middle East
  • AFRINIC (African Network Information Centre): covers the whole of Africa
  • APNIC (Asia Pacific Network Information Centre): covers Oceania and the Far East