[[Category:HOWTO]]

This howto describes a method for automatically backing up your Funtoo install to the internet, in this case Dropbox, though any online storage will do. Gentoo describes a method of creating a stage 4 archive. The problem with a stage 4 is that it is large and it archives a lot of unnecessary files, such as applications that can be reinstalled with an emerge world. Instead, this method aims for more of a "stage 3.5."
 
  
{{fancynote| This method does not attempt to back up everything. The intention is only to back up the system. Optionally you can also archive and copy your <tt>/home</tt> folder if you have enough online storage.}}
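If you do choose to back up <tt>/home</tt> as well, here is a minimal sketch of the idea (the paths are hypothetical; it reuses the dropbox folder set up later in this guide, and excludes the dropbox user's own directory so backups don't nest into each other):

<console>
###i## tar czf /tmp/home-backup.tar.gz --exclude=home/dropbox -C / home
###i## mv -v -f /tmp/home-backup.tar.gz /home/dropbox/Dropbox/Private/
</console>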
  
== Use Case ==
A backup machine currently provides network drives on a home LAN for clients on the LAN to back up to, using apps such as Time Machine (Mac) and Genie Timeline (Windows). As this machine ''is'' the backup machine, it doesn't have anywhere to back up to itself. In this situation a backup solution is provided by backing up to somewhere online - Dropbox. If a restore from the backup is ever required, the client machines' backups would be considered expendable, and the backup machine itself restored from the online copy.

== Automatic Backup Archives With Etckeeper ==
Etckeeper is a tool used to save versions of <tt>/etc</tt>, including metadata, in a version control repository such as git.

As etckeeper is not in the Funtoo portage tree, layman is used to provide an overlay.

=== Install etckeeper via layman ===

Before you install layman, it is worth mentioning that you probably want <tt>USE="git subversion"</tt> in <tt>/etc/portage/make.conf</tt>. After adjusting USE flags, install layman:

<console>
###i## emerge layman
</console>
In order to back up the layman configuration, but not the portage overlay trees, make the following modifications to the default install.

Tell Portage about layman-fetched repositories by adding the following line to <tt>/etc/portage/make.conf</tt>:

<pre>
source /etc/layman/make.conf
</pre>

Modify the following lines in <tt>/etc/layman/layman.cfg</tt>:

<pre>
storage   : /var/lib/layman
installed : /etc/layman/installed.xml
make_conf : /etc/layman/make.conf
</pre>

Add the bgo-overlay, as described on its web page, [http://bgo.zugaina.org/ bgo.zugaina.org]:

<console>
###i## layman -o http://gpo.zugaina.org/lst/gpo-repositories.xml -L
###i## layman -a bgo-overlay -o http://gpo.zugaina.org/lst/gpo-repositories.xml
</console>

More information about layman can be found here: http://www.gentoo.org/proj/en/overlays/userguide.xml

Then unmask and install etckeeper:

<console>
###i## emerge etckeeper --autounmask-write
###i## emerge etckeeper
</console>

{{fancynote| To update layman overlays do:}}
<console>
###i## layman -S
</console>
  
If you see the following error, apply this fix:

<console>
###i## emerge etckeeper
Calculating dependencies... done!
>>> Verifying ebuild manifests
!!! A file is not listed in the Manifest: '/var/lib/layman/bgo-overlay/sys-apps/etckeeper/files/etckeeper-gentoo-0.58.patch'

###i## cd /var/lib/layman/bgo-overlay/sys-apps/etckeeper
###i## ebuild etckeeper-0.58-r2.ebuild manifest
###i## emerge etckeeper
</console>

== Configure etckeeper ==

Move any config files that do not live in <tt>/etc</tt>. For example, check <tt>/root</tt> for any files to be archived, such as iptables scripts, and move them to <tt>/etc</tt>.

{{fancynote| Because Funtoo uses [[Boot-Update]], <tt>/boot/grub/grub.cfg</tt> does not need to be archived.}}

To ensure your portage world file is archived, make the following link:
<console>
###i## ln /var/lib/portage/world /etc/world
</console>

Initialise the git repository:

<console>
###i## etckeeper init
Initialized empty Git repository in /etc/.git/
###i## etckeeper commit "Initial commit."
</console>
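
Since etckeeper stores everything in an ordinary git repository under <tt>/etc/.git</tt>, you can inspect the saved history with plain git commands from within <tt>/etc</tt>; for example:

<console>
###i## cd /etc
###i## git log --oneline
</console>

Once more than one commit exists, <tt>git diff HEAD~1</tt> will show what changed since the previous saved version.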

If you don't already have cron installed, emerge it now:

<console>
###i## emerge vixie-cron
</console>

Then write the cron job to save an hourly version of <tt>/etc</tt>.

{{fancynote| git will only create a new version (commit) if there are changes from the previous one.}}

Edit the file <tt>/etc/cron.hourly/etckeeper</tt>:

<pre>
#!/bin/bash
etckeeper commit "Hourly auto-commit"
</pre>

== Encrypt and copy backups online ==

=== Copy To Dropbox ===

<console>
###i## emerge dropbox
</console>

Add a dropbox user:

<console>
###i## useradd dropbox
</console>

Write the dropbox configuration in <tt>/etc/conf.d/dropbox</tt>:

<pre>
DROPBOX_USERS="dropbox"
</pre>

And the init script in <tt>/etc/init.d/dropbox</tt>:

<pre>
#!/sbin/runscript
# Copyright 1999-2004 Gentoo Foundation
# Distributed under the terms of the GNU General Public License, v2 or later
# $Header: /var/cvsroot/gentoo-x86/sys-fs/dropbox/files/dropbox.init-1.0,v 1.4 2007/04/04 13:35:25 cardoe Exp $

NICENESS=5

depend() {
    need localmount net
    after bootmisc
}

start() {
    ebegin "Starting dropbox..."
    for dbuser in $DROPBOX_USERS; do
        start-stop-daemon -S -b -m --pidfile /var/run/dropbox-$dbuser.pid -N $NICENESS -u $dbuser -v -e HOME="/home/$dbuser" -x /opt/dropbox/dropboxd
    done
    eend $?
}

stop() {
    ebegin "Stopping dropbox..."
    for dbuser in $DROPBOX_USERS; do
        start-stop-daemon --stop --pidfile /var/run/dropbox-$dbuser.pid
    done
    eend $?
}

status() {
    for dbuser in $DROPBOX_USERS; do
        if [ -e /var/run/dropbox-$dbuser.pid ] ; then
            echo "dropboxd for USER $dbuser: running."
        else
            echo "dropboxd for USER $dbuser: not running."
        fi
    done
    eend $?
}
</pre>
Start dropbox now and at boot time:

<console>
###i## chmod 0755 /etc/init.d/dropbox
###i## /etc/init.d/dropbox start
###i## rc-update add dropbox default
</console>

After starting the dropbox daemon, it will print an http link. You will need to visit this link just once to associate your computer with your dropbox account.

Write the cron job to make the backup archive and move it online. Edit the file <tt>/etc/cron.daily/backup</tt>:

<pre>
#!/bin/bash
cd /etc
git bundle create /tmp/backup.bundle --all
cd /tmp
mv -v -f backup.bundle /home/dropbox/Dropbox/Private/
</pre>
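
A git bundle is a single file containing the whole repository, history included, and it behaves like a read-only remote. To sanity-check a freshly created bundle by hand (a one-off check, not part of the cron job; run from within <tt>/etc</tt> or any git repository):

<console>
###i## cd /etc
###i## git bundle verify /home/dropbox/Dropbox/Private/backup.bundle
</console>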

Make the script executable:

<console>
###i## chmod +x /etc/cron.daily/backup
</console>

=== Encrypt Backups ===

It is a good idea to encrypt your backup before moving it online. This can be done with gpg, using either symmetric (password only) or public/private key encryption. Additionally, you can choose to sign the backup to check its integrity before restoring.

<console>
###i## emerge gpg
</console>

==== Symmetric Encryption ====

There is no preparation required to use a symmetric key, as all that is needed is a passphrase. Just modify the cron job. Edit <tt>/etc/cron.daily/backup</tt>:

<pre>
#!/bin/bash
cd /etc
git bundle create /tmp/backup.bundle --all
cd /tmp
echo 'encryption_password' | gpg -o backup.gpg --batch --homedir /root/.gnupg -vvv --passphrase-fd 0 --yes -c backup.bundle
mv -v -f backup.gpg /home/dropbox/Dropbox/Private/
</pre>

{{fancyimportant| Remember to change "encryption_password".}}

{{fancywarning| If you forget this password the backup will be unusable. Lose the password and you lose the backup.}}

As there is now sensitive information in this file, you might want to remove read permission:

<console>
###i## chmod og-r /etc/cron.daily/backup
</console>
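
Before relying on the encrypted copy, it is worth checking once that the passphrase round-trips and the bundle inside is intact. A minimal sketch, assuming the paths and passphrase used above:

<console>
###i## echo 'encryption_password' | gpg --batch --passphrase-fd 0 --yes -o /tmp/check.bundle -d /home/dropbox/Dropbox/Private/backup.gpg
###i## cd /etc
###i## git bundle verify /tmp/check.bundle
</console>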

==== Private/Public Key Encryption ====

Make a private/public encryption/decryption key pair. The public key will be used to encrypt and the private key to decrypt:

<console>
###i## gpg --gen-key
</console>

The public key is used to create the encrypted backup and needs to live on the computer being backed up. A copy of the private key needs to be made and stored securely somewhere else. If this machine becomes unbootable, and this is the only place the private key lives, the backup dies with it.
The private key should not be kept:
# In the same place as the backup
# On the machine being backed up
{{fancynote| The private key is the only key that will decrypt the backup. Lose this key and/or its password and you lose the backup.}}

List the private keys:

<console>
###i## gpg -K
/root/.gnupg/secring.gpg
------------------------
sec   2048R/0EF13559 2012-01-21
uid                  my_key <noone@example.com>
ssb   2048R/67417FEB 2012-01-21
</console>

The private key can be exported using either the key name or the key number, in this case "my_key" or "0EF13559".
To cut and paste the key (i.e. if logging in remotely):

<console>
###i## gpg -a --export-secret-key 0EF13559
</console>

To create a key file:

<console>
###i## gpg -o private_decryption.gpgkey --export-secret-key 0EF13559
</console>

Now store this key somewhere secure. The backup is only as secure as the private key.

Modify the cron job at <tt>/etc/cron.daily/backup</tt>:

<pre>
#!/bin/bash
cd /etc
git bundle create /tmp/backup.bundle --all
cd /tmp
gpg -o backup.gpg -r 'my-key' --batch --homedir /root/.gnupg -vvv --yes -e backup.bundle
mv -v -f backup.gpg /home/dropbox/Dropbox/Private/
</pre>

Replace "my-key" with the appropriate name from the key list.
Also note the change from <tt>-c</tt> for symmetric encryption to <tt>-e</tt> for public/private key encryption. No passphrase is needed here, since encryption uses only the public key.

==== Sign Backups ====

Create a second private/public (signing) key pair. The private key is used to sign and the public key is used to check authenticity/integrity:

<console>
###i## gpg --gen-key
</console>

{{fancynote| The password for this key will be required in the script below.}}
In this case the private key is required to sign the backup and the public key is used to check the integrity of the backup.
Follow a similar process as above to copy the public key to another computer/storage medium.

List the public keys:

<console>
###i## gpg -k
</console>

{{fancynote| <tt>-K</tt> lists private keys while <tt>-k</tt> lists public keys.}}

Then export this public key via cut and paste:

<console>
###i## gpg -a --export <key name or number>
</console>

Or to create a key file:

<console>
###i## gpg -o public_signing.gpgkey --export <key name or number>
</console>

Now store this key somewhere secure.

Modify the backup cron job at <tt>/etc/cron.daily/backup</tt>:

<pre>
#!/bin/bash
cd /etc
git bundle create /tmp/backup.bundle --all
cd /tmp
echo 'signing_key_password' | gpg -s -o backup.gpg -r 'my-encryption-key' --batch --homedir /root/.gnupg -vvv --passphrase-fd 0 --yes -e backup.bundle
mv -v -f backup.gpg /home/dropbox/Dropbox/Private/
</pre>

{{fancynote| The script requires the password for your private (signing) key to sign the backup. Replace "signing_key_password" with the password for your signing private key. And as there is sensitive information in this file, don't forget to remove read permission.}}
<console>
###i## chmod og-r /etc/cron.daily/backup
</console>

== To Restore From A Backup ==

This restore assumes you are starting with a new blank disk.
Start by performing a stage 3 install, up to and including section 5, "Chroot into your new system": http://www.funtoo.org/wiki/Funtoo_Linux_Installation

Then the restore process is:
# Download backup from dropbox
# Decrypt
# Clone
# Link world file
# Emerge world
# Compile the kernel
# Restore grub bootloader
# Reboot

== Download backup from dropbox ==

Log into your dropbox account and find your backup file. Move it to a public area if it isn't already in one. Then right click on it and click "copy public link."
Now on the computer to be restored, delete the contents of the <tt>/etc</tt> folder and download the backup file.

(Need to check if this needs to be done before chrooting into the new install.)

<console>
###i## cd /etc
###i## rm -rf *
###i## cd /tmp
###i## wget http://dl.dropbox.com/link-to-backup-file/backup.gpg
</console>

{{fancynote| If you have to copy the link from another computer and therefore cannot cut and paste it, there is a "shorten link" option.}}

== Decrypt ==

If you used a public/private key to encrypt, and optionally signed the backup, import the decryption and signing keys.

Note:
# The decryption key is the private key of the encryption key pair - private_decryption.gpgkey
# The signing key is the public key of the signing key pair - public_signing.gpgkey

To import the keys by cut and paste:
<console>
###i## gpg --import <<EOF
</console>
{{fancynote| The last line after pasting the key should be "EOF".}}
Repeat for both keys.

To import the keys by file:
<console>
###i## gpg --import private_decryption.gpgkey
###i## gpg --import public_signing.gpgkey
</console>

Decrypt the backup:
<console>
###i## gpg -d backup.gpg > backup.bundle
</console>

If the backup was signed and you have correctly imported the signing public key, you should see a message similar to:
<console>
gpg: Good signature from "my_signing_key <noone@example.com>"
</console>

== Clone ==
<console>
###i## git clone /tmp/backup.bundle /etc/
</console>
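
If the clone fails, you can first check what the bundle offers; a bundle behaves like a read-only remote, so for example:

<console>
###i## git ls-remote /tmp/backup.bundle
</console>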

== Link world file ==
<console>
###i## ln /etc/world /var/lib/portage/world
</console>

== Emerge world ==
<console>
###i## emerge --sync
###i## layman -S
###i## emerge -uDaNv world
</console>

== Compile the kernel (genkernel) ==
If you have genkernel set to save config files (the default):
<console>
###i## cp /etc/kernels/kernel-config-x86_64-<latest version>-gentoo /usr/src/linux/.config
</console>

Otherwise use the currently loaded kernel's config:
<console>
###i## zcat /proc/config.gz > /usr/src/linux/.config
</console>

Then compile the kernel:
<console>
###i## genkernel --oldconfig --no-mrproper all
</console>

== Restore grub bootloader ==
<console>
###i## grub-install --no-floppy /dev/sda
###i## boot-update
</console>

Adjust the device as required if installing to another location.

== Reboot ==
<console>
###i## reboot
</console>


WARNING: Work in progress. Do not edit this article unless you are the original author.


= Refresh on TCP/IP model =

When the ARPANet (a packet-oriented network) was born in those good old seventies, engineers had to solve the problem of making computers able to exchange packets of information over the network, and in 1974 they invented something you are now using to view this page: TCP/IP! TCP/IP is a collection of various network protocols, organized as a stack. Just as your boss does not do everything in the company but delegates to lower levels, which in turn delegate to even lower levels, no protocol in the TCP/IP suite takes on all responsibilities; they work together in a hierarchical and cooperative manner. A given level of the TCP/IP stack knows what its immediate subordinate can do for it, trusts that the job will be done the right way, and does not worry about how it is done. Likewise, the only concern of a given level of the stack is to fulfill its own duties and deliver the service requested by the upper layer; it does not have to worry about the ultimate goal of what the upper levels do.

<illustration goes here TCP/IP model>

The above illustration should look horribly familiar: yes, it looks like the good old OSI model. Indeed, it is a tailored view of the original OSI model and it works the exact same way: data sent by an application A1 (residing on computer C1) to another application A2 (residing on computer C2) goes through C1's TCP/IP stack (from top to bottom) and reaches C1's lower layers, which take responsibility for moving the bits from C1 to C2 over a physical link (electrical or light pulses, radio waves... sorry, no quantum mechanism yet). C2's lower layers receive the bits sent by C1 and pass what has been received up C2's TCP/IP stack (bottom to top), which delivers the data to A2. If C1 and C2 are not on the same network the process is a bit more complex because it involves relays (routers), but the overall idea remains the same. Also, there are no shortcuts in the process: both TCP/IP stacks are crossed in their entirety, from top to bottom for the sender and from bottom to top for the receiver. The transportation process itself is absolutely transparent from an application's point of view: A1 knows it can rely on the TCP/IP stack to transmit data to A2; ''how'' the data is transmitted is not its problem, A1 just assumes the data can be transmitted by some means. The TCP/IP stack is likewise only loosely coupled to any particular network technology, because its frontier is precisely the physical transportation of bits over a medium: just as A1 does not care how the TCP/IP stack moves the data from one computer to another, the TCP/IP stack does not care about the details of how the bits are physically moved, and thus it can work with any network technology, be it Ethernet, Token Ring or FDDI, for example.

= The Internet layer =

The goal of this article is focused on the calculation of addresses used at the ''Internet layer'', so let's forget the gory details of how the TCP/IP stack works (you can find an extremely detailed discussion in [[How the TCP/IP stack works]]... to be written...). From here on, we assume you have a good general understanding of its functionality and of how a network transmission works. As you know, the ''Internet layer'' is responsible for handling the logical addressing of a TCP segment (or UDP datagram) that either has to be transmitted over the network to a remote computer, or that has been received from the network from a remote computer. That layer is governed by a strict set of rules called the ''Internet Protocol'' or ''IP'', originally specified by RFC 791 in September 1981. What is pretty amazing about IP is that, although the original RFC has been amended by several others since 1981, its specification remains absolutely valid! If you have a look at RFC 791 you won't see "obsoleted". Sure, IPv4 has reached its limits in this first half of the 21st century, but it will remain in the IT landscape for years, not to say decades (you know, the COBOL language...). To finish with the historical details, you might find it interesting to know that TCP/IP was not the original protocol suite used on the ARPANet; in 1983 it superseded another protocol suite, the [http://en.wikipedia.org/wiki/Network_Control_Program Network Control Program]. NCP looks, from our point of view, quite prehistoric, but it is of great importance as it established a lot of concepts still in use today: PDUs, splitting an address into various components, connection management and so on all come from NCP. A historical reward for those who are still reading this long paragraph: first, even an individual computer user was addressable in NCP messages; second, even in 1970 engineers were already concerned with network congestion issues ([http://www.cs.utexas.edu/users/chris/think/ARPANET/Timeline this page]).

Let's go back to those good old seventies: the engineers who designed the Internet Protocol retained a 32-bit addressing scheme for IP; after all, the ARPANet would never need to address billions of hosts! If you look at early ARPANet diagrams, the network counted fewer than 100 hosts; who would ''ever'' need millions of addresses, after all? So in theory, with those 32 bits, we can have around 4 billion computers within that network: arbitrarily decide that the very first connected computer is given the number "0", the second one "1", the third one "2" and so on until we exhaust the address pool at number 4294967295, giving no more than 4294967296 (2^32) computers on that network, because no number can be a duplicate.
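
A quick shell sanity check of that arithmetic:

<console>
###i## echo $((2**32))
4294967296
</console>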

= Classful and classless networks =

Those addresses follow the logic below:

{| class="wikitable"
|-
| colspan="2" | '''32 bits (fixed length)'''
|-
| '''Network''' part (variable length of N bits) || '''Host''' part (length: 32 - N bits)
|}

* The network address: this part is uniquely assigned amongst all of the organizations in the world (i.e. no two organizations can hold the same network part)
* The host address: unique within a given network part
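
To make the network/host split concrete, here is a small shell sketch that carves an arbitrary 32-bit address number into its two parts (both the address value and the choice of N=8 are examples, nothing more):

<console>
###i## addr=170230060
###i## n=8
###i## echo $(( addr >> (32 - n) ))
10
###i## echo $(( addr & ((1 << (32 - n)) - 1) ))
2457900
</console>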

So in theory we can have something like this (remember the goal is not one unique network, it has to be a collection of networks):

* Network 1 Host 1
* Network 1 Host 2
* ...
* Network 2 Host 1
* ...


Just as a birthday cake is divided into bigger or smaller slices depending on the guests' appetite, the IPv4 address space has been divided into bigger or smaller parts, simply because organizations need more or fewer computers on their networks. How is that made possible? Simply by dedicating a variable number of bits to the network part! Do you see the consequence? An IPv4 address being '''always''' 32 bits wide, the more bits you dedicate to the network part, the fewer you have left for the host part, and vice-versa; it is a tradeoff, always. Basically, having more bits in:

* the network part: means more possible networks, at the cost of fewer hosts per network
* the host part: means fewer networks, but more hosts per network

It might sound a bit abstract, so let's take an example: imagine we dedicate only 8 bits to the network part and the remaining 24 to the host part. What happens? First, with only 8 bits for the network part, no more than 2^8 = 256 distinct networks can exist; each of them, however, can address up to 2^24 = 16777216 hosts.
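
And the corresponding counts, straight from the shell:

<console>
###i## echo "networks: $((2**8))  hosts per network: $((2**24))"
networks: 256  hosts per network: 16777216
</console>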


Is the network part assigned by each organization to itself? Of course not! Assignments are coordinated at the worldwide level by what we call Regional Internet Registries or RIRs which, in turn, can delegate assignments to third parties located within their geographic jurisdiction; those latter are called Local Internet Registries or LIRs (the system is detailed in RFC 7020). All of those RIRs are themselves put under the responsibility of the now well-known Internet Assigned Numbers Authority or [http://www.iana.org IANA]. As of 2014, five RIRs exist:

* ARIN (American Registry for Internet Numbers): covers North America
* LACNIC (Latin America and Caribbean Network Information Centre): covers South America and the Caribbean
* RIPE NCC (Réseaux IP Européens Network Coordination Centre): covers Europe, Russia and the Middle East
* AfriNIC (African Network Information Centre): covers the whole of Africa
* APNIC (Asia Pacific Network Information Centre): covers Oceania and the Far East