Channel: Open-Source Routing and Network Simulation

Saving a Cloonix network topology


The Cloonix network simulator has been updated to version 29, which adds the ability to save network simulation topologies and node configurations to a directory.

Users may save a network topology and all node configurations to a directory of their choice. They may also load saved topologies into Cloonix so they can restore a network scenario they previously created. The save function of Cloonix v29 supports copy-on-write filesystems and also allows users to save the full filesystems of nodes, if they wish.

This post will work through a detailed tutorial showing how to save, load, and re-save topologies and node configurations using the Cloonix GUI or command-line interface.

Different methods to save a Cloonix project

In this tutorial we will show three ways Cloonix may be used to save filesystems and network topologies:

  1. Create a new base filesystem by starting a VM in Cloonix, loading software and configurations, then saving either a full VM disk image or a derived VM disk image.
    • This simple case is useful when upgrading or modifying disk images that will be used in simulation scenarios.
       
  2. Start the Cloonix graph, set up the VMs, load software, and configure them. Then save the topology and filesystems.
    • This method is simple to understand and reliable, but results in large file sizes.
       
  3. Write a script of Cloonix commands that sets up the VMs, loads software, and configures everything remotely.
    • This results in small file sizes because only the script is needed to create the topology, but it can take longer to set up because the script downloads filesystems and software.
    • The script may be written from scratch, or you may modify a script created by a previously saved topology.

New Cloonix commands

Cloonix version 29 adds the sav command to save network topologies. To see all the Cloonix commands, run the cloonix_ctrl command, as shown in the example below.

$ cloonix_ctrl nemo

|-------------------------------------------------------|
| cloonix_ctrl nemo                                     |
|-------------------------------------------------------|
|  kil  : Destroys all objects, cleans and kills switch |
|  rma  : Destroys all cloonix objects and graphs       |
|  dmp  : Dump topo                                     |
|  lst  : List commands to replay topo                  |
|  add  : Add one cloonix object to topo                |
|  del  : Del one cloonix object from topo              |
|  sav  : Save sub-menu                                 |
|  cnf  : Configure a cloonix object                    |
|  pkt  : Counters of packet throughput                 |
|  mud  : Dialog with mulan, mutap, musnf and mueth     |
|  hop  : dump 1 hop debug                              |
|  pid  : dump pids of processes                        |
|  evt  : prints events                                 |
|  sys  : prints system stats                           |
|-------------------------------------------------------|

The cloonix sav command has four subcommands. You may see the available subcommands by entering an incomplete sav command at the command line:

$ cloonix_ctrl nemo sav

|-------------------------------------------------------|
| cloonix_ctrl nemo sav                                 |
|-------------------------------------------------------|
|  derived    : Save derived qcow2                      |
|  full       : Save backing and derived in one qcow2   |
|  topo       : Save all derived and replay script      |
|  topo_full  : Save all full and replay script         |
|-------------------------------------------------------|

Also, note that a few of the other cloonix_ctrl commands have changed in version 29. For example, the -k command used in version 28 to stop and clean up a cloonix session is now kil.

Saving a single VM filesystem

The simplest case involves saving only a VM’s filesystem. In this case, we use either the sav full or the sav derived command to save a single filesystem.

The Cloonix project provides VM disk images we can download and use in our network emulation scenarios. We may want to add some software to a VM and then save a copy of it as a new base VM.

For example, we may have different base VMs for the virtual PC nodes and for the virtual Router nodes in our network. This would simplify setting up a network scenario because we can do all the router setup on one base virtual machine and then every virtual router we start using that base VM inherits the basic router setup.

In this tutorial, we will use two node types: a PC and a Router.

  • The PC nodes will use the jessie.qcow2 disk image we downloaded from the Cloonix web site.
  • The Router nodes will use a modified version of the jessie.qcow2 disk image that we will create by adding software and modifying configuration files, and then save it under a new name.

First, create a cloonix topology with one KVM based on the jessie.qcow2 disk image. Start cloonix with the commands:

$ cloonix_startnet nemo
$ cloonix_graph nemo

Then configure a KVM using the following settings:

Cloonix KVM setup

Next, add the KVM to the Cloonix graph and connect interface 2 (eth2) to the cloonix_slirp_admin_lan LAN object. We use the highest-numbered interface because only the highest-numbered interface works when connected to that object.

Connect Cloonix KVM to admin LAN

After the KVM boots up, double-click on it to open an xterm and enter the following command to set up networking on interface eth2:

# dhclient eth2

Now the virtual machine will connect to the internet, assuming the host computer has a connection to the internet.
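If you want to confirm the connection before proceeding, a couple of quick checks in the VM’s xterm might look like the following (the ping target is just an example host):

```shell
# An "inet" line on eth2 shows that a DHCP lease was obtained.
ip addr show eth2

# Three replies confirm outbound connectivity (example target host).
ping -c 3 cloonix.fr
```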

Router setup

We will modify this virtual machine so it supports OSPF routing using the Quagga routing software. Install the quagga package and then configure the Quagga VTY shell. This creates the basic setup for a router.

In the virtual machine’s open xterm window enter the commands:

# apt-get update
# apt-get install quagga

Then, configure Quagga by editing the file /etc/quagga/daemons to enable the zebra and ospfd daemons.

# nano /etc/quagga/daemons

Modify the file so it looks like:

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
babeld=no

Save the file and quit the editor.
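As a quick optional sanity check (not part of the original procedure), you can list the daemons that are now enabled:

```shell
# List the enabled daemons in the Quagga daemons file;
# only zebra and ospfd should appear.
grep '=yes' /etc/quagga/daemons
```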

Create config files for the zebra and ospfd daemons. These can be blank files.

# touch /etc/quagga/ospfd.conf
# touch /etc/quagga/zebra.conf

Set up environment variables so we avoid the vtysh END problem. Edit the /etc/bash.bashrc file:

# nano /etc/bash.bashrc

Add the following line at the end of the file:

export VTYSH_PAGER=more

Save the file and quit the editor. Then, edit the /etc/environment file:

# nano /etc/environment

Then add the following line to the end of the file:

VTYSH_PAGER=more

Save the file and quit the editor.

Save the new virtual machine filesystem

Now that our modifications are complete, we will save the virtual machine’s filesystem as a new disk image in the ~/cloonix/bulk directory, where it can be used in future projects. We will call it jessie-router.qcow2.

We will save it as a full disk image because we will use this as a base VM for routers. Right-click on the KVM in the Cloonix graph window and select save whole rootfs from the menu that appears.

Saving a VM

In the dialogue box, enter the path and filename of the new filesystem. I named it jessie-router.qcow2 and stored it in the ~/cloonix/bulk directory where it can be used as a base filesystem for experiments.

Save to bulk directory

If you prefer to use the Cloonix command-line interface, enter the following command into the host computer’s terminal window:

$ cloonix_ctrl nemo sav full Cloon1 ~/cloonix/bulk/jessie-router.qcow2

Quit the topology by killing cloonix with the kil command:

$ cloonix_ctrl nemo kil

Next step

Now we should have two disk images in the ~/cloonix/bulk directory: jessie.qcow2 and jessie-router.qcow2.

$ cd ~/cloonix/bulk
$ ls
jessie.qcow2          jessie-router.qcow2  

We will use these as building blocks for a more complex network simulation scenario.

Saving a Cloonix topology

In the example below, we will create a topology of three PCs connected to a network of three routers.

Creating a basic topology

I created the network below, consisting of six VMs connected together, and also connected to the cloonix_slirp_admin_lan.

  • The PC nodes use the jessie.qcow2 filesystem.
  • The Router nodes use the jessie-router.qcow2 filesystem we created and saved in the Saving a single VM filesystem section, above.
  • Every node should have the ballooning option checked so it uses less RAM on the host computer.

I won’t cover the steps for creating a topology on the Cloonix graph in this post. If you need a tutorial on using the Cloonix graph, please see my previous posts about Cloonix and the Cloonix documentation.

New topology

Now we have a basic network topology that could be used as a base for future experiments. All the nodes are connected to each other but no networking information has been configured yet.

We do not need to use the cloonix_slirp_admin_lan LAN right now, but we connect it to all the KVM nodes in case we need to add additional software in the future. We can hide the cloonix_slirp_admin_lan LAN and all attached interfaces to make the network topology look cleaner.

Hide unused LAN and interfaces

Save the topology

When we save a network topology, Cloonix creates a new directory and saves the filesystem of each node and a script that will rebuild the topology.

To save a cloonix topology, you may use the Save Topo command in the Cloonix graph menu.

Save the topology from the Cloonix GUI

Enter the name of the directory into which the topology script and COW filesystems will be saved.

The topology directory name

The directory must not exist. If a directory of the same name already exists, the save will fail.

Saving using the Cloonix CLI

If you wish, you may use the Cloonix command line tool, cloonix_ctrl, to save the topology. For the example above, save the topology using the command:

$ cloonix_ctrl nemo sav topo ~/simple-ospf

Here, simple-ospf is the directory into which Cloonix saves the topology. Remember: if the directory simple-ospf already exists, the save will fail.
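Since the save fails when the target directory exists, a small guard in a shell script can make this check explicit. This is just a convenience sketch, not part of Cloonix:

```shell
# Refuse to save if the target directory already exists,
# since "sav topo" fails in that case.
TOPO_DIR="$HOME/simple-ospf"
if [ -d "$TOPO_DIR" ]; then
    echo "$TOPO_DIR already exists; choose a new directory name"
else
    cloonix_ctrl nemo sav topo "$TOPO_DIR"
fi
```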

The topology directory

Look at the directory created when we saved the Cloonix topology. We see a file for each VM’s filesystem and an executable script file, nemo.sh, that contains the commands required to rebuild the topology. When starting this topology in the future, we will run the shell script nemo.sh from a terminal window.

$ cd simple-ospf
$ ls
Router-1.qcow2  Router-3.qcow2  PC-1.qcow2  PC-3.qcow2
Router-2.qcow2  nemo.sh       PC-2.qcow2

Note that the default saving method saves derived filesystems, which record only the changes relative to a base filesystem image, in this case the jessie.qcow2 file in the ~/cloonix/bulk directory.

Even so, this still creates large files: over 500 MB for each derived filesystem.
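Because the saved files are standard qcow2 images, you can inspect them with qemu-img (installed along with QEMU) to confirm which base image backs each derived file. The file name below is taken from the directory listing above:

```shell
# Show qcow2 details for one saved derived image; the "backing file"
# field should point at the base image in ~/cloonix/bulk.
qemu-img info ~/simple-ospf/Router-1.qcow2

# Compare the on-disk sizes of all the saved images.
du -h ~/simple-ospf/*.qcow2
```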

Open the topology script file, nemo.sh, in a text editor to see the commands that will run to produce the topology. Below are the first lines of the file:

#!/bin/bash
#cloonix_startnet nemo
#cloonix_graph nemo
cloonix_ctrl nemo add kvm Router-1 1000 1 classic,classic,classic /home/brian/simple-ospf/Router-1.qcow2 --persistent &
sleep 5
cloonix_ctrl nemo add lan eth Router-1 0 lan06
cloonix_ctrl nemo add lan eth Router-1 1 lan01
cloonix_ctrl nemo add lan eth Router-1 2 lan03
cloonix_ctrl nemo add kvm Router-2 1000 1 classic,classic,classic /home/brian/simple-ospf/Router-2.qcow2 --persistent &

Note that the second and third lines are commented out, so you must start both the Cloonix server and GUI before running the script. If you wish to start Cloonix from the script, remove the hashes at the front of those two lines so they look like:

#!/bin/bash
cloonix_startnet nemo
cloonix_graph nemo

Kill the current simulation

Now we are done setting up our basic simulation topology. Imagine we need to stop for a while and start again tomorrow. We may kill the running simulation with the following Cloonix command:

$ cloonix_ctrl nemo kil

This stops all the nodes, deletes all Cloonix devices, and stops both the Cloonix server and Cloonix graph processes. Now we can shut off our computer and re-load this network topology at a later date.

Loading an existing topology

Imagine that you want to load a previously-saved topology. You load a cloonix network topology by running the topology script in the directory you used to save the topology.

First, ensure the cloonix server nemo and graph are running, or edit the script so it will start the cloonix server and graph when you run it.

In this case, we previously saved the topology in the simple-ospf-v2 directory under the $HOME directory. The Cloonix server and graph have not yet been started, and the script has not been modified to start them, so we start them before running the script. Run the following commands to load the topology again:

$ cloonix_startnet nemo
$ cloonix_graph nemo
$ ~/simple-ospf-v2/nemo.sh

Network configurations

At this point, we have created a base topology and upgraded some nodes with the required software. Then we saved the topology so we can use it as a base for future experiments.

One experiment we could perform would be to set up a simple OSPF configuration and test that each node can reach every other node in the network.

Now that we have rebuilt this network topology by running its topology script, let’s add configurations to each node as described below.

PC-1

Double-click on the PC-1 node in the Cloonix graph to open an xterm. In the PC-1 xterm window, use a text editor to add the following lines to the /etc/network/interfaces file, then save the file. The node supports both the vi and nano text editors.

auto eth0
iface eth0 inet static
   address 192.168.1.1
   netmask 255.255.255.0

Then, add a static route that sends all traffic in the 192.168.0.0/16 network out eth0.

# ip route add 192.168.0.0/16 via 192.168.1.254 dev eth0

To make this static route available after a system reboot, add the following line to the /etc/rc.local file

ip route add 192.168.0.0/16 via 192.168.1.254 dev eth0

Restart the networking service

# /etc/init.d/networking restart

PC-2

On PC-2, add the following lines to the /etc/network/interfaces file, then save the file.

auto eth0
iface eth0 inet static
   address 192.168.2.1
   netmask 255.255.255.0

Then, add a static route that sends all traffic in the 192.168.0.0/16 network out eth0.

# ip route add 192.168.0.0/16 via 192.168.2.254 dev eth0

To make this static route available after a system reboot, add the following line to the /etc/rc.local file

ip route add 192.168.0.0/16 via 192.168.2.254 dev eth0

Restart the networking service

# /etc/init.d/networking restart

PC-3

On PC-3, add the following lines to the /etc/network/interfaces file, then save the file.

auto eth0
iface eth0 inet static
   address 192.168.3.1
   netmask 255.255.255.0

Then, add a static route that sends all traffic in the 192.168.0.0/16 network out eth0.

# ip route add 192.168.0.0/16 via 192.168.3.254 dev eth0

To make this static route available after a system reboot, add the following line to the /etc/rc.local file

ip route add 192.168.0.0/16 via 192.168.3.254 dev eth0

Restart the networking service

# /etc/init.d/networking restart

Router-1

Start the Quagga shell with the command vtysh on each router:

# vtysh

On router Router-1, enter the following Quagga commands:

configure terminal
router ospf
 network 192.168.1.0/24 area 0
 network 192.168.100.0/24 area 0 
 network 192.168.101.0/24 area 0 
 passive-interface eth0    
 exit
interface eth0
 ip address 192.168.1.254/24
 exit
interface eth1
 ip address 192.168.100.1/24
 exit
interface eth2
 ip address 192.168.101.2/24
 exit
ip forwarding
exit
write
exit

Router-2

On router Router-2, enter the following Quagga commands:

configure terminal
router ospf
 network 192.168.2.0/24 area 0
 network 192.168.100.0/24 area 0 
 network 192.168.102.0/24 area 0 
 passive-interface eth0    
 exit
interface eth0
 ip address 192.168.2.254/24
 exit
interface eth1
 ip address 192.168.100.2/24
 exit
interface eth2
 ip address 192.168.102.2/24
 exit
ip forwarding
exit
write
exit

Router-3

On router Router-3, enter the following Quagga commands:

configure terminal
router ospf
 network 192.168.3.0/24 area 0
 network 192.168.101.0/24 area 0 
 network 192.168.102.0/24 area 0 
 passive-interface eth0    
 exit
interface eth0
 ip address 192.168.3.254/24
 exit
interface eth1
 ip address 192.168.101.1/24
 exit
interface eth2
 ip address 192.168.102.1/24
 exit
ip forwarding
exit
write
exit
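After the routers exchange OSPF updates (allow a minute or so for convergence), you can verify the result from the host computer using cloonix_dbssh. These show commands are standard Quagga commands, suggested here as an optional check:

```shell
# Check that Router-1 sees its OSPF neighbors.
cloonix_dbssh nemo Router-1 "vtysh -c 'show ip ospf neighbor'"

# Check that routes to the other LANs were learned via OSPF.
cloonix_dbssh nemo Router-1 "vtysh -c 'show ip route ospf'"

# End-to-end test: ping PC-3 from PC-1 across the routed network.
cloonix_dbssh nemo PC-1 "ping -c 3 192.168.3.1"
```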

Re-saving a topology

After making all the above configuration changes, you may want to save the topology again to preserve the changes you made. However, you must use a new directory name because you cannot save over an existing topology.

For example, if we loaded a topology named simple-ospf and want to save it again after making changes, we must use the Save Topo menu command, or the sav topo CLI command, and save it to a new directory such as simple-ospf-v2.

$ cloonix_ctrl nemo sav topo ~/simple-ospf-v2

Cleaning up old versions

If you wish to remove old versions of topologies, delete the corresponding directory. For example:

$ rm -r simple-ospf

But I personally find it better to keep older versions of topologies because they serve as “snapshots” that I can go back to if I need to.

Next steps

Now we have a Cloonix topology directory with filesystems and a topology script. The contents of this directory will create a simulated simple OSPF network that can be used as a basis for experimentation and study.

It is possible to share this setup with others by bundling the contents of the directory into a compressed .tar archive and placing the file on a server. The file would be much too large to share by e-mail.
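A bundling step along these lines could look like the following, with the directory name taken from the example above:

```shell
# Bundle the saved topology directory into one compressed archive.
cd ~
tar -czf simple-ospf-v2.tar.gz simple-ospf-v2/

# Check the archive size before uploading it to a server.
ls -lh simple-ospf-v2.tar.gz

# A recipient unpacks it with:
#   tar -xzf simple-ospf-v2.tar.gz
```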

In the next section, we will show how to start a Cloonix topology using just a script, which would be small enough to send in an e-mail.

Building a Cloonix topology script

There is a way to create a topology file that is very compact and that can be used to build any topology. If we keep the filesystems used in the topology on an accessible server, we can share a Cloonix topology script that contains commands that will download the needed filesystems, make the necessary configurations, and create — in this example — a simulated simple OSPF network.

See the cloonix documentation for some demo scripts that can be used as a basis for creating your own scripts. (Note that the link may change as new versions of Cloonix are released. See section 2.8 of the Cloonix documentation.)

In the example below, we will write a script that creates the same simple OSPF topology with the same configurations as the Saving a Cloonix topology section above.

Cloonix commands for scripts

In addition to the cloonix_ctrl commands, Cloonix provides two other commands that are useful for interacting with virtual machines from the host computer’s command line. These may also be integrated into scripts to add configurations to nodes.

cloonix_dbscp — A version of the standard Linux SCP command, modified to work with Cloonix. Use it from the host computer’s terminal window or in a script to copy files from the host computer to a Cloonix VM, or vice-versa. Example of usage:

cloonix_dbscp nemo <path-and-file> vm_name:<path-and-file>

For example, if we create a file named tests.txt on our host computer in the ~/Documents directory, we can copy it to the root directory on a virtual node Router-3 using the command:

cloonix_dbscp nemo ~/Documents/tests.txt Router-3:/tests.txt

cloonix_dbssh — A version of the standard Linux SSH command, modified to work with Cloonix. Use it to run commands on a Cloonix virtual machine from the host computer’s terminal window or from a script. Example of usage:

cloonix_dbssh nemo vm_name "<shell command>"

For example, to create a file test-file.txt on the Router-3 node using cloonix_dbssh type the following in the host computer’s terminal window:

cloonix_dbssh nemo Router-3 "touch /test-file.txt"

And, to write some text to the end of that file, enter the commands:

$ cloonix_dbssh nemo Router-3 "echo 'This is the first line' >> /test-file.txt"
$ cloonix_dbssh nemo Router-3 "echo 'This is the second line' >> /test-file.txt"

Modify an existing script

The easiest way to create a Cloonix script is to modify an existing script or capture the output of the Cloonix topology list command, lst.

For example, after creating a topology, run the lst command and pipe the output to a file called script.sh:

$ cloonix_ctrl nemo lst > script.sh

Then edit the script.sh file, adding the necessary lines to turn it into a useful script.
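The captured lst output is a plain list of commands, so you will likely need to add a shebang line and make the file executable before running it. A sketch, assuming GNU sed:

```shell
# Insert a shebang at the top of the captured command list (GNU sed)
# and mark the file executable.
sed -i '1i #!/bin/bash' script.sh
chmod +x script.sh
head -1 script.sh
```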

Alternatively, you could save the topology using the Cloonix sav command and then copy the shell script in the topology directory to a new filename and then start editing it.

For example, copy the simple OSPF topology script we created above:

$ cp ~/simple-ospf-v2/nemo.sh ~/script.sh

The script.sh file will contain all the basic topology information, such as the virtual machine disk images and interface information, the connections between nodes and LANs, and the locations of nodes on the graph. Now, we need to add all the node configuration information to the script.

Example script

Here we will discuss some of the details of writing the Cloonix script. See Appendix B for the full script.

I created the script by starting with a copy of the nemo.sh script called script.sh.

The top of the file script.sh has two lines commented out. Remove the comment hash marks so these commands start the Cloonix server and graph GUI.

#!/bin/bash
cloonix_startnet nemo
cloonix_graph nemo

The first half of the script is copied from the original topology script, or from the output of the cloonix_ctrl nemo lst command. It starts up KVMs with the correct settings, adds interfaces and LANs, positions nodes in the GUI, and hides the cloonix_slirp_admin_lan and connected interfaces.

I made minor changes to the copied script. I increased the duration of sleep timers and added blank lines and comments to make the script more readable. I moved a few lines around so they are grouped with similar actions.

In the second half of the file, I added the commands that will update the configuration files on the nodes to create a functioning OSPF network.

How to create or modify files on Cloonix KVM nodes

We may update files on the virtual nodes using one of two methods:

  • Use the cloonix_dbssh command to execute the echo or sed commands on the virtual node as the root user. Each command adds one more line to the already-existing configuration files on that node.
  • Or, create temporary files on the host computer containing the required configurations and then copy them to the virtual node using the cloonix_dbscp command. This results in a script file that is easier for humans to read.

In the modified script, I used both methods. I used the cloonix_dbssh command to set up the configuration files on Router-1 and the PC nodes. I used the cloonix_dbscp command to set up the configuration files on Router-2 and Router-3.

cloonix_dbssh examples

The cloonix_dbssh command executing echo commands is useful for adding short configurations to empty files, or to the end of existing files. Below is an example of setting up the /etc/network/interfaces file on node PC-1 in the script:

cloonix_dbssh nemo PC-1 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-1 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-1 "echo '   address 192.168.1.1' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-1 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

Use the sed command to insert lines into existing files or to modify lines in files. In this script, we use sed to insert a static route command into the rc.local file. For example:

cloonix_dbssh nemo PC-1 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.1.254 dev eth0' /etc/rc.local"

cloonix_dbscp examples

To create configuration files on the host computer in the script, use the Linux bash shell here-document syntax to create the file, then copy it to the virtual node. Below is an example of adding configurations to an ospfd.conf file for Router-2, then copying it to Router-2:

mkdir -p /tmp/router-2
cat > /tmp/router-2/ospfd.conf << EOF
interface eth0
interface eth1
interface eth2
interface lo
router ospf
 passive-interface eth0
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
line vty
EOF

cloonix_dbscp nemo -r /tmp/router-2/* Router-2:/etc/quagga

Add these commands to the script to set up the nodes with the configurations we need. We are choosing to update configuration files so that our setup remains permanent after saving the topology.

Again, please see the whole script in Appendix B.

Running a Cloonix topology script

To set up a Cloonix topology from a script like this, make it executable and then run it. First, ensure that the filesystems you need are already in the correct directory — assuming the script does not download them for you.

$ chmod +x script.sh
$ ./script.sh

Conclusion

We showed how to save a Cloonix topology and any configuration changes made to the filesystems in the topology. We showed how to reload a saved topology.

When sharing topologies with other users, you may find that the large files generated by saving filesystems are hard to share. We showed how to create a standalone script that builds a topology by downloading the files it needs and adding configurations to nodes. This script is a small text file that can be shared via e-mail.

 

Appendix A: Upgrade to Cloonix v29

We require Cloonix v29 to save topologies.

If you have a previous version of Cloonix installed, run the following commands to download the version 29.08 source code, compile it, and install it.

$ cd ~/Downloads
$ wget http://cloonix.fr/cloonix/built_cloonix-2016-01-15/cloonix_cli-29.08.tar.gz
$ wget http://cloonix.fr/cloonix/built_cloonix-2016-01-15/cloonix_serv-29.08.tar.gz
$ tar -xvf cloonix_cli-29.08.tar.gz
$ tar -xvf cloonix_serv-29.08.tar.gz
$ rm cloonix_cli-29.08.tar.gz
$ rm cloonix_serv-29.08.tar.gz
$ cd ~/cloonix_cli-29.08
$ ./doitall
$ ./install_cli.sh
$ cd ~/cloonix_serv-29.08
$ ./doitall    
$ ./install_serv.sh
$ source ~/.bashrc

If you are installing cloonix for the first time, please see my post about Cloonix version 28, for the correct procedure to install prerequisite packages.

Install filesystems

We need a base filesystem for this topology. We chose the Debian Jessie filesystem, jessie.qcow2, provided by the Cloonix development team:

$ cd ~/cloonix/bulk
$ wget http://cloonix.fr/vm/bulk-2016-01-01/jessie.qcow2.xz
$ unxz jessie.qcow2.xz

Start Cloonix

Whenever you want to start Cloonix, use the following commands.

$ cloonix_startnet nemo
$ cloonix_graph nemo

In the example above, we are using the server named nemo. Other servers may be used and are configured in the file, ~/cloonix/cloonix_conf.

 

Appendix B: Stand-alone Cloonix script example

The following is a full listing of a script that will set up a simple network topology and configure the nodes to run the OSPF routing protocol. The script downloads the jessie.qcow2 filesystem from the Cloonix web site, then configures each node in the scenario. The script stands alone and does not need to be bundled with a filesystem when it is shared with others.

#!/bin/bash

#-------------------------------------------------------
# Download a base filesystem. In this example, we
# get the file from the Cloonix web site. This is a
# large file so it will take a long time. You should 
# consider posting a base filesystem on an internal
# server. 
#-------------------------------------------------------
echo "Downloading jessie.qcow2.xz from http://cloonix.fr"
cd ~/cloonix/bulk
wget http://cloonix.fr/vm/bulk-2016-01-01/jessie.qcow2.xz -q --show-progress
unxz jessie.qcow2.xz
echo "jessie.qcow2 filesystem ready"

#-------------------------------------------------------
# Start cloonix. Comment-out these two lines if we will
# already have Cloonix started before running this 
# script 
#-------------------------------------------------------

echo "Starting Cloonix"
cloonix_startnet nemo
cloonix_graph nemo
echo "Cloonix started"

#-------------------------------------------------------
# Start KVMs and define interfaces
# The sleep timers should be set to a value that works
# best on your computer. Since I am running this
# on a 5-year-old laptop, I set the sleep timers to a 
# higher value of 15 seconds 
#-------------------------------------------------------

echo "Building topology"

# Router-1
cloonix_ctrl nemo add kvm Router-1 1000 1 classic,classic,classic,classic /home/brian/cloonix/bulk/jessie.qcow2 --balloon &
sleep 15
cloonix_ctrl nemo add lan eth Router-1 0 lan06
cloonix_ctrl nemo add lan eth Router-1 1 lan01
cloonix_ctrl nemo add lan eth Router-1 2 lan03
cloonix_ctrl nemo add lan eth Router-1 3 cloonix_slirp_admin_lan

# Router-2
cloonix_ctrl nemo add kvm Router-2 1000 1 classic,classic,classic,classic /home/brian/cloonix/bulk/jessie.qcow2 --balloon &
sleep 15
cloonix_ctrl nemo add lan eth Router-2 0 lan04
cloonix_ctrl nemo add lan eth Router-2 1 lan01
cloonix_ctrl nemo add lan eth Router-2 2 lan02
cloonix_ctrl nemo add lan eth Router-2 3 cloonix_slirp_admin_lan

# Router-3
cloonix_ctrl nemo add kvm Router-3 1000 1 classic,classic,classic,classic /home/brian/cloonix/bulk/jessie.qcow2 --balloon &
sleep 15
cloonix_ctrl nemo add lan eth Router-3 0 lan05
cloonix_ctrl nemo add lan eth Router-3 1 lan03
cloonix_ctrl nemo add lan eth Router-3 2 lan02
cloonix_ctrl nemo add lan eth Router-3 3 cloonix_slirp_admin_lan

# PC-1
cloonix_ctrl nemo add kvm PC-1 1000 1 classic,classic /home/brian/cloonix/bulk/jessie.qcow2 --balloon &
sleep 15
cloonix_ctrl nemo add lan eth PC-1 0 lan06
cloonix_ctrl nemo add lan eth PC-1 1 cloonix_slirp_admin_lan

# PC-2
cloonix_ctrl nemo add kvm PC-2 1000 1 classic,classic /home/brian/cloonix/bulk/jessie.qcow2 --balloon &
sleep 15
cloonix_ctrl nemo add lan eth PC-2 0 lan04
cloonix_ctrl nemo add lan eth PC-2 1 cloonix_slirp_admin_lan

# PC-3
cloonix_ctrl nemo add kvm PC-3 1000 1 classic,classic /home/brian/cloonix/bulk/jessie.qcow2 --balloon &
sleep 15
cloonix_ctrl nemo add lan eth PC-3 0 lan05
cloonix_ctrl nemo add lan eth PC-3 1 cloonix_slirp_admin_lan

#-------------------------------------------------------
# Stop motion
#-------------------------------------------------------
cloonix_ctrl nemo cnf lay stop
sleep 1

#-------------------------------------------------------
# Set size of Cloonix Graph window
#-------------------------------------------------------
cloonix_ctrl nemo cnf lay width_height 574 489
sleep 1
cloonix_ctrl nemo cnf lay scale 216 212 574 489
sleep 1

#-------------------------------------------------------
# Move nodes to their final places in the graph
#-------------------------------------------------------
cloonix_ctrl nemo cnf lay abs_xy_kvm PC-3 218 402
cloonix_ctrl nemo cnf lay abs_xy_eth PC-3 0 0
cloonix_ctrl nemo cnf lay abs_xy_eth PC-3 1 239
cloonix_ctrl nemo cnf lay abs_xy_kvm PC-2 425 45
cloonix_ctrl nemo cnf lay abs_xy_eth PC-2 0 194
cloonix_ctrl nemo cnf lay abs_xy_eth PC-2 1 254
cloonix_ctrl nemo cnf lay abs_xy_kvm PC-1 1 49
cloonix_ctrl nemo cnf lay abs_xy_eth PC-1 0 94
cloonix_ctrl nemo cnf lay abs_xy_eth PC-1 1 37
cloonix_ctrl nemo cnf lay abs_xy_kvm Router-3 221 253
cloonix_ctrl nemo cnf lay abs_xy_eth Router-3 0 154
cloonix_ctrl nemo cnf lay abs_xy_eth Router-3 1 264
cloonix_ctrl nemo cnf lay abs_xy_eth Router-3 2 15
cloonix_ctrl nemo cnf lay abs_xy_eth Router-3 3 301
cloonix_ctrl nemo cnf lay abs_xy_kvm Router-2 291 118
cloonix_ctrl nemo cnf lay abs_xy_eth Router-2 0 48
cloonix_ctrl nemo cnf lay abs_xy_eth Router-2 1 227
cloonix_ctrl nemo cnf lay abs_xy_eth Router-2 2 160
cloonix_ctrl nemo cnf lay abs_xy_eth Router-2 3 298
cloonix_ctrl nemo cnf lay abs_xy_kvm Router-1 131 128
cloonix_ctrl nemo cnf lay abs_xy_eth Router-1 0 260
cloonix_ctrl nemo cnf lay abs_xy_eth Router-1 1 57
cloonix_ctrl nemo cnf lay abs_xy_eth Router-1 2 117
cloonix_ctrl nemo cnf lay abs_xy_eth Router-1 3 5
cloonix_ctrl nemo cnf lay abs_xy_lan lan05 216 325
cloonix_ctrl nemo cnf lay abs_xy_lan lan02 262 186
cloonix_ctrl nemo cnf lay abs_xy_lan lan04 359 84
cloonix_ctrl nemo cnf lay abs_xy_lan lan03 172 191
cloonix_ctrl nemo cnf lay abs_xy_lan lan01 208 117
cloonix_ctrl nemo cnf lay abs_xy_lan lan06 69 85
cloonix_ctrl nemo cnf lay abs_xy_lan cloonix_slirp_admin_lan 184 -2
sleep 5

#-------------------------------------------------------
# Hide the cloonix_slirp_admin_lan and connected 
# interfaces
#-------------------------------------------------------
cloonix_ctrl nemo cnf lay hide_lan cloonix_slirp_admin_lan 1
cloonix_ctrl nemo cnf lay hide_eth Router-1 3 1
cloonix_ctrl nemo cnf lay hide_eth Router-2 3 1
cloonix_ctrl nemo cnf lay hide_eth Router-3 3 1
cloonix_ctrl nemo cnf lay hide_eth PC-1 1 1
cloonix_ctrl nemo cnf lay hide_eth PC-2 1 1
cloonix_ctrl nemo cnf lay hide_eth PC-3 1 1

#-------------------------------------------------------
# wait 30 seconds for all VMs to finish starting up
#-------------------------------------------------------
echo "Waiting 30 seconds for all nodes to start"
sleep 30
echo "Topology is ready"

#-------------------------------------------------------
# Install quagga on the three routers
# Each router is already connected to the slirp-lan
# on its highest-numbered interface, eth3 
#-------------------------------------------------------

echo "Installing quagga software"

cloonix_dbssh nemo Router-1 "dhclient eth3"
cloonix_dbssh nemo Router-1 "apt-get update"
cloonix_dbssh nemo Router-1 "apt-get --allow-unauthenticated --assume-yes install quagga"

cloonix_dbssh nemo Router-2 "dhclient eth3"
cloonix_dbssh nemo Router-2 "apt-get update"
cloonix_dbssh nemo Router-2 "apt-get --allow-unauthenticated --assume-yes install quagga"

cloonix_dbssh nemo Router-3 "dhclient eth3"
cloonix_dbssh nemo Router-3 "apt-get update"
cloonix_dbssh nemo Router-3 "apt-get --allow-unauthenticated --assume-yes install quagga"

sleep 30
echo "Completed software install"

#-------------------------------------------------------
# Write quagga config files on Router-1.
#
# One method is to use cloonix_dbssh to execute echo or
# sed commands one at a time to build configuration files
# line-by-line.
#-------------------------------------------------------

echo "starting Router-1 configuration"

# Router-1 ospfd.conf file
cloonix_dbssh nemo Router-1 "echo 'interface eth0' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo 'interface eth1' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo 'interface eth2' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo 'interface eth3' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo 'interface lo' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo 'router ospf' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo ' passive-interface eth0' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo ' network 192.168.1.0/24 area 0.0.0.0' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo ' network 192.168.100.0/24 area 0.0.0.0' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo ' network 192.168.101.0/24 area 0.0.0.0' >>/etc/quagga/ospfd.conf"
cloonix_dbssh nemo Router-1 "echo 'line vty' >>/etc/quagga/ospfd.conf"

# Router-1 zebra.conf file
cloonix_dbssh nemo Router-1 "echo 'interface eth0' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ip address 192.168.1.254/24' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo 'interface eth1' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ip address 192.168.100.1/24' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo 'interface eth2' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ip address 192.168.101.2/24' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo 'interface eth3' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo 'interface lo' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo 'ip forwarding' >>/etc/quagga/zebra.conf"
cloonix_dbssh nemo Router-1 "echo 'line vty' >>/etc/quagga/zebra.conf"

# modify /etc/quagga/daemons file
cloonix_dbssh nemo Router-1 "sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons"
cloonix_dbssh nemo Router-1 "sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons"

# modify /etc/environment file
cloonix_dbssh nemo Router-1 "echo 'VTYSH_PAGER=more' >>/etc/environment"
 
# modify /etc/bash.bashrc file
cloonix_dbssh nemo Router-1 "echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc"


echo "completed Router-1 configuration"

#-----------------------------------------------------
# Write quagga config files on Router-2
#
# Another method is to create temporary files 
# containing the required configuration and then copy 
# them to Router-2 using the cloonix_dpscp command.
# 
# This results in a script file that is easier for 
# humans to read.
#-----------------------------------------------------

echo "starting Router-2 configuration"

mkdir /tmp/router-2

# Router-2 ospfd.conf file
cat > /tmp/router-2/ospfd.conf << EOF
interface eth0
!
interface eth1
!
interface eth2
!
interface lo
!
router ospf
 passive-interface eth0
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
!
line vty
!
EOF

# Router-2 zebra.conf file
cat > /tmp/router-2/zebra.conf << EOF
interface eth0
 ip address 192.168.2.254/24
 ipv6 nd suppress-ra
!
interface eth1
 ip address 192.168.100.2/24
 ipv6 nd suppress-ra
!
interface eth2
 ip address 192.168.102.2/24
 ipv6 nd suppress-ra
!
interface lo
!
ip forwarding
!
line vty
!
EOF

# move files to Router-2
cloonix_dbscp nemo -r /tmp/router-2/* Router-2:/etc/quagga

# modify /etc/quagga/daemons file
cloonix_dbssh nemo Router-2 "sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons"
cloonix_dbssh nemo Router-2 "sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons"

# modify /etc/environment file
cloonix_dbssh nemo Router-2 "echo 'VTYSH_PAGER=more' >>/etc/environment"
 
# modify /etc/bash.bashrc file
cloonix_dbssh nemo Router-2 "echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc"

echo "completed Router-2 configuration"

#-------------------------------------------------------
# Write quagga config files on Router-3
#
# Create a temporary file and then copy it
# to Router-3.
#-------------------------------------------------------

echo "starting Router-3 configuration"

mkdir /tmp/router-3

# Router-3 ospfd.conf file
cat > /tmp/router-3/ospfd.conf << EOF
interface eth0
!
interface eth1
!
interface eth2
!
interface lo
!
router ospf
 passive-interface eth0
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.101.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
!
line vty
!
EOF

# Router-3 zebra.conf file
cat > /tmp/router-3/zebra.conf << EOF
interface eth0
 ip address 192.168.3.254/24
 ipv6 nd suppress-ra
!
interface eth1
 ip address 192.168.101.1/24
 ipv6 nd suppress-ra
!
interface eth2
 ip address 192.168.102.1/24
 ipv6 nd suppress-ra
!
interface lo
!
ip forwarding
!
line vty
!
EOF

# move files to Router-3
cloonix_dbscp nemo -r /tmp/router-3/* Router-3:/etc/quagga

# modify /etc/quagga/daemons file
cloonix_dbssh nemo Router-3 "sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons"
cloonix_dbssh nemo Router-3 "sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons"

# modify /etc/environment file
cloonix_dbssh nemo Router-3 "echo 'VTYSH_PAGER=more' >>/etc/environment"
 
# modify /etc/bash.bashrc file
cloonix_dbssh nemo Router-3 "echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc"

echo "completed Router-3 configuration"

#-------------------------------------------------------
# Set up interfaces and default route on PC-1
#-------------------------------------------------------

echo "starting PC-1 configuration"

# interfaces file
cloonix_dbssh nemo PC-1 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-1 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-1 "echo '   address 192.168.1.1' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-1 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

# rc.local file
cloonix_dbssh nemo PC-1 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.1.254 dev eth0' /etc/rc.local"

echo "completed PC-1 configuration"

#-------------------------------------------------------
# Set up interfaces and default route on PC-2
#-------------------------------------------------------

echo "starting PC-2 configuration"

# interfaces file
cloonix_dbssh nemo PC-2 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-2 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-2 "echo '   address 192.168.2.1' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-2 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

# rc.local file
cloonix_dbssh nemo PC-2 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.2.254 dev eth0' /etc/rc.local"

echo "completed PC-2 configuration"

#-------------------------------------------------------
# Set up interfaces and default route on PC-3
#-------------------------------------------------------

echo "starting PC-3 configuration"

# interfaces file
cloonix_dbssh nemo PC-3 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-3 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-3 "echo '   address 192.168.3.1' >>/etc/network/interfaces"
cloonix_dbssh nemo PC-3 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

# rc.local file
cloonix_dbssh nemo PC-3 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.3.254 dev eth0' /etc/rc.local"

echo "completed PC-3 configuration"

#-------------------------------------------------------
# Reboot all nodes to enable all changes
#-------------------------------------------------------

echo "rebooting nodes"
cloonix_dbssh nemo PC-1 "reboot"
echo "PC-1"
sleep 5
cloonix_dbssh nemo PC-2 "reboot"
echo "PC-2"
sleep 5
cloonix_dbssh nemo PC-3 "reboot"
echo "PC-3"
sleep 5
cloonix_dbssh nemo Router-1 "reboot"
echo "Router-1"
sleep 5
cloonix_dbssh nemo Router-2 "reboot"
echo "Router-2"
sleep 5
cloonix_dbssh nemo Router-3 "reboot"
echo "Router-3"
sleep 5
echo "Wait until nodes complete rebooting, then start your testing."

#-------------------------------------------------------
# Setup is now complete
#-------------------------------------------------------

Using the OpenDaylight SDN Controller with the Mininet Network Emulator


OpenDaylight (ODL) is a popular open-source SDN controller framework. To learn more about OpenDaylight, it is helpful to use it to manage an emulated network of virtual switches and virtual hosts. Most people use the Mininet network emulator to create a virtual SDN network for OpenDaylight to control.


In this post, I will show how to set up OpenDaylight to control an emulated Mininet network using OpenFlow 1.3. Because I am using virtual machines, the procedure I use will work the same in all commonly used host systems: Linux, Windows, and Mac OS X.

Using Virtual Machines

In this lab example, I will use two virtual machines. One will run the Mininet emulated network and the other will run the OpenDaylight controller. I will connect both VMs to a host-only network so they can communicate with each other and with programs running on the host computer, such as ssh and the X11 client.

I will use VirtualBox to run the Mininet VM that I downloaded from the Mininet project web site, which is the easiest way to experiment with Mininet. The Mininet project team provides an Ubuntu 14.04 LTS VM image with Mininet 2.2.1, Wireshark, and the OpenFlow dissector tools already installed and ready to use.

I will install and run the OpenDaylight SDN controller on a new VM I create in VirtualBox.

Setting up the OpenDaylight Virtual Machine

To build the OpenDaylight virtual machine, I downloaded the Ubuntu Server ISO image from the ubuntu.com web site. Then I installed it in a new VM in VirtualBox. If you need directions on how to install an ISO disk image in a VirtualBox virtual machine, please see my post about installing Debian in a VirtualBox VM.

Give the virtual machine a descriptive name. I named the virtual machine OpenDaylight. Configure it so it uses two CPUs and 2 GB of RAM, the minimum configuration to support OpenDaylight. Then add a host-only network adapter to the VM.

When the VM is powered off, click on the Settings button:

The OpenDaylight virtual machine

In the VM’s VirtualBox network settings, enable two network interfaces. Connect the first network adapter to the NAT interface (which is the default setting) and the second network adapter to the host-only network, vboxnet0.

Connecting network adapter 2 to the host-only network

Configure OpenDaylight VM interfaces

By default, the VM’s first network adapter is attached to the VirtualBox NAT interface and is already configured when the VM boots up. We need to configure the second network adapter, which is attached to the VirtualBox host-only interface vboxnet0.

List all the devices using the ip command:

brian@odl:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:ec:a9:f1 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feec:a9f1/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:27:b0:f6:70 brd ff:ff:ff:ff:ff:ff
brian@odl:~$

Note: starting in 15.10, Ubuntu uses predictable network interface names like enp0s3 and enp0s8, instead of the classic interface names like eth0 and eth1.

We see that interface enp0s8 has no IP address. This is the second network adapter, connected to vboxnet0. VirtualBox can assign an IP address on this interface using DHCP if the DHCP client requests it. So, run the following command to set up interface enp0s8:

brian@odl:~$ sudo dhclient enp0s8  

Now check the IP address assigned to enp0s8:

brian@odl:~$ ip addr show enp0s8
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:b0:f6:70 brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.101/24 brd 192.168.56.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feb0:f670/64 scope link
       valid_lft forever preferred_lft forever
brian@odl:~$

Now we see that the VirtualBox DHCP server on the host-only network has assigned the IP address 192.168.56.101 to this interface. This is the IP address we should use when connecting to any application running on the VM.

On your system, the assigned IP address may be different. You may have set up the VirtualBox preferences to use a different network prefix for the host-only network, or may have configured the DHCP server to provide a different address range. Also, if any other VMs were started and connected to the host-only network before this VM, then the IP address assigned will be different. If the IP address is different, that’s OK. Just use the address assigned.

Now, configure the interface enp0s8 so it will remain configured after a restart. Edit the /etc/network/interfaces file:

brian@odl:~$ sudo nano /etc/network/interfaces

Add the following lines to the end of the file /etc/network/interfaces:

# the host-only network interface
auto enp0s8
iface enp0s8 inet dhcp

Connect to the OpenDaylight VM using SSH

I like to use a terminal application when working on Virtual Machines. The VirtualBox console window has too many annoying limitations. For example, I cannot cut-and-paste text from my host system onto the VirtualBox console attached to the virtual machine, or vice-versa.

Open a terminal on host computer and login using SSH:

brian@T420:~$ ssh -X brian@192.168.56.101

Now you are connected to the OpenDaylight virtual machine and can see that the host name in the prompt has changed to odl, which I configured when installing Ubuntu on the VM.

brian@odl:~$

I also enabled X forwarding when I started SSH so I can run X programs on the OpenDaylight VM, although we won’t do that in this tutorial.

Install Java

The OpenDaylight SDN controller is a Java program so install the Java run-time environment with the following command:

$ sudo apt-get update
$ sudo apt-get install default-jre-headless

Set the JAVA_HOME environment variable. Edit the bashrc file

brian@odl:~$ nano ~/.bashrc

Add the following line to the bashrc file:

export JAVA_HOME=/usr/lib/jvm/default-java

Then run the file:

brian@odl:~$ source ~/.bashrc

Install OpenDaylight

Download the OpenDaylight software from the OpenDaylight web site. On a Linux or Mac OS host, we can use the wget command to download the tar file.

brian@odl:~$ wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.4.0-Beryllium/distribution-karaf-0.4.0-Beryllium.tar.gz

Install OpenDaylight by extracting the tar file:

brian@odl:~$ tar -xvf distribution-karaf-0.4.0-Beryllium.tar.gz

This creates a folder named distribution-karaf-0.4.0-Beryllium which contains the OpenDaylight software and plugins.

OpenDaylight is packaged in a karaf container. Karaf is a container technology that allows the developers to put all required software in a single distribution folder. This makes it easy to install or re-install OpenDaylight when needed because everything is in one folder. As we will see later, karaf also allows programs to be bundled with optional modules that can be installed when needed.

Start OpenDaylight

To run OpenDaylight, run the karaf command inside the package distribution folder.

brian@odl:~$ cd distribution-karaf-0.4.0-Beryllium
brian@odl:~$ ./bin/karaf

Now the OpenDaylight controller is running.

OpenDaylight running in a virtual machine

Install OpenDaylight features

Next, install the minimum set of features required to test OpenDaylight and the OpenDaylight GUI:

opendaylight-user@root> feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-all

The above is an example of installing optional modules in a karaf container. You only need to install an optional feature once. Once installed, these features are permanently added to the controller and will run every time it starts.

We installed the following features: odl-restconf (the RESTCONF northbound API), odl-l2switch-switch (Layer-2 learning-switch functionality), odl-mdsal-apidocs (browsable API documentation), and odl-dlux-all (the DLUX web GUI).

To list all available optional features, run the command:

opendaylight-user@root> feature:list

To list all installed features, run the command:

opendaylight-user@root> feature:list --installed    

Information about OpenDaylight optional features is available on the OpenDaylight wiki.

Stop OpenDaylight

When you want to stop the controller, type system:shutdown or logout at the opendaylight-user prompt.

Set up the Mininet Virtual Machine

I do not cover all the steps required to set up the Mininet VM in this post because I already covered that topic in another post: Setting up the Mininet VM.

Start the Mininet VM in the VirtualBox Manager. Now we should have two VMs running: the OpenDaylight VM and the Mininet VM. If we started the OpenDaylight VM first, it will have IP address 192.168.56.101 and the Mininet VM will receive the second available IP address on the host-only network, 192.168.56.102. We can verify this by running the ip command on the Mininet VM console:

mininet@mininet-vm:~$ ip addr show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:e2:98:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.56.102/24 brd 192.168.56.255 scope global eth0
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:1b:c1:07 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.16/24 brd 10.0.2.255 scope global eth1
       valid_lft forever preferred_lft forever
mininet@mininet-vm:~$

Note: The Mininet VM is based on Ubuntu Server 14.04, which does not yet use the predictable network interface names like enp0s3 and enp0s8, so we see interface names like eth0 and eth1.

We see eth0 is connected to the host-only interface because it has IP address 192.168.56.102, which is in the address range assigned by the VirtualBox host-only network DHCP server. So we know we need to use IP address 192.168.56.102 to access applications running on this virtual machine.

Connect to the Mininet VM using SSH

Now open a terminal window on your host computer and SSH into the Mininet VM. Turn X forwarding on. (If you are using Windows, use Xming for an X Window System Server and Putty as an SSH client)

brian@T420:~$ ssh -X 192.168.56.102

Start Mininet

On the Mininet VM, start a simple network topology. In this case, we will do the following:

  • Set up three switches in a linear topology
  • Each switch will be connected to one host
  • The MAC address on each host will be set to a simple number
  • The remote controller, OpenDaylight, is at IP address 192.168.56.101:6633
  • We will use OpenFlow version 1.3

The Mininet command to start this is:

mininet@mininet-vm:~$ sudo mn --topo linear,3 --mac --controller=remote,ip=192.168.56.101,port=6633 --switch ovs,protocols=OpenFlow13

Test the network

Test that the OpenDaylight controller is working by pinging all nodes. Every host should be able to reach every other host:

mininet> pingall
*** Ping: testing ping reachability
h1 -> h2 h3
h2 -> h1 h3
h3 -> h1 h2
*** Results: 0% dropped (6/6 received)

The OpenDaylight Graphical User Interface

Open a browser on your host system and enter the URL of the OpenDaylight User Interface (DLUX UI). It is running on the OpenDaylight VM, so the IP address is 192.168.56.101 and the port, defined by the application, is 8181:

So the URL is: http://192.168.56.101:8181/index.html.

The default username and password are both admin.

Log in to OpenDaylight controller

Topology

Now we see the network topology in the OpenDaylight controller’s topology tab.

Topology of the Mininet network

You can see the network that is emulated by the Mininet network emulator. You may test OpenDaylight functionality by building different network topologies in Mininet with different attributes, and by using OpenDaylight to run experiments on the emulated network. For example, you may break links between switches in Mininet to test how the network responds to faults.

Nodes

Click on the Nodes tab to see information about each switch in the network:

List of nodes

Click on the Node Connectors link in each row to see information about each port on the switch:

Interfaces

Yang UI

Yang is a data modeling structure. Engineers who work with hardware routers and switches will be familiar with another data modeling structure based on SNMP, SMI, and MIB. Yang provides functionality in SDN switches that is analogous to SMI for non-SDN switches.

The OpenDaylight Yang UI is a graphical REST client for building and sending REST requests to the OpenDaylight data store. We can use the Yang UI to get information from the data store, or to build REST commands to modify information in the data store — changing network configurations.

Click on the Yang UI tab. Then click on the Expand all button to see all available APIs. Not all of them will work because we did not install all features. One API that will work is the Inventory API. Click on it, then navigate down to the nodes attribute and click on the Send button to send the GET API method to the controller.
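
The same GET request the Yang UI builds can also be sent from a shell on the host. This is a hedged sketch, not taken from the article: the RESTCONF path shown is the operational inventory endpoint as named in the Beryllium release, admin/admin are the default credentials, and you should adjust the ODL address if your controller received a different one.

```shell
# Query the OpenDaylight operational inventory over RESTCONF.
# Endpoint path and default credentials are assumptions for Beryllium.
ODL=192.168.56.101
URL="http://$ODL:8181/restconf/operational/opendaylight-inventory:nodes"
curl -s --max-time 10 -u admin:admin -H 'Accept: application/json' "$URL" \
  || echo "controller not reachable at $ODL"
```

The JSON returned is the same node, port, and statistics data you see when you scroll through the Yang UI response.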

Yang data model of the network

Scroll down to see all the inventory information about the network: nodes, ports, statistics, etc. Click on the switches and interfaces to see the details of each.

Understanding the Yang data model and learning how to read and write to the data store is key to understanding Software Defined Networking with the OpenDaylight controller.

Capturing OpenFlow Messages

To dive deeper into how SDN controllers and switches operate, you may want to view the OpenFlow messages exchanged between the controller and switches in the network.

The Mininet VM comes with Wireshark installed, with a custom version of the OpenFlow dissector already set up.

So the easiest way to view OpenFlow messages is to start Wireshark on the Mininet VM and capture data on the interface connected to the host-only network, which is eth0 in this case.

Open a new terminal window and connect to the Mininet VM using SSH with X forwarding enabled (or use PuTTY and Xming if you are using Windows):

brian@T420:~$ ssh -X 192.168.56.102

Start Wireshark on the Mininet VM:

mininet@mininet-vm:~$ sudo wireshark &

You will see a warning dialog but you can ignore it. Starting Wireshark with root privileges is a security risk but, for our simple testing, we can ignore that — or you can follow the directions in the warning message to set up Wireshark in a more secure way.

Create a display filter for OpenFlow messages: enter the text of in the Filter field and click on Apply. Now you will see only OpenFlow messages in the Wireshark display, as shown below.

Viewing captured OpenFlow messages in Wireshark

Shut down the project

When it is time to end the project, shut down Mininet and OpenDaylight using the following commands:

On the Mininet VM, stop Mininet and clean up the node, then shut down the VM:

mininet> exit
mininet@mininet:~$ sudo mn -c
mininet@mininet:~$ sudo shutdown -h now

On the OpenDaylight VM, stop OpenDaylight and shut down the VM:

opendaylight-user@root> system:shutdown
brian@odl:~$ sudo shutdown -h now

Both VMs should now show that they are stopped in the VirtualBox Manager application.

Conclusion

We showed how to install OpenDaylight in a virtual machine and connect it to the Mininet network emulator running on another virtual machine. We demonstrated some features of OpenDaylight and showed how to capture OpenFlow messages exchanged between the controller and the emulated switches.

OpenStack all-in-one: test cloud services in one laptop


To learn more about OpenStack cloud management software, a student or researcher may install OpenStack on a single machine, such as a laptop computer or a virtual machine, and emulate a small datacenter using virtual machines or containers.


Researchers and students may choose from multiple projects that will set up OpenStack on a single machine. Some projects are community-based open-source projects and others are vendor-supported projects (while still nominally open-source).

This post is an overview of links and resources for installing OpenStack on one machine.

I am just beginning to investigate OpenStack so I have not yet installed it. I’ll try some of the installers listed below and see which ones work best. For now, I provide a list along with relevant links.

DevStack

DevStack is a community-driven open-source project that provides scripts and drivers to install OpenStack on a single machine. It includes directions for installing on a laptop computer or on a single virtual machine. DevStack may also be configured to use LXC containers as compute nodes, or to use nested KVM virtualization for compute nodes.

DevStack looks like the best option for setting up OpenStack on a typical laptop computer. The DevStack documentation does not specify minimum hardware requirements.
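
As a sketch of what a DevStack install involves: DevStack drives the whole install from a small local.conf file. The clone URL and settings below follow the DevStack quickstart rather than this article, and the passwords are placeholders you should replace.

```shell
# Minimal DevStack setup sketch (quickstart-style; values are placeholders).
#   git clone https://opendev.org/openstack/devstack && cd devstack
cat > local.conf << 'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF
#   ./stack.sh   # installs and starts all services; can take 20+ minutes
```

After stack.sh completes, the Horizon dashboard and the OpenStack CLI are available on the same machine.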

Other installers

Below I list other installers that appear to support an all-in-one system on a single node. However, these “commercial-grade” installers specify high hardware requirements.

OpenStack AutoPilot

OpenStack Autopilot is the Ubuntu OpenStack installer. It is free as long as you use fewer than ten machines in your cloud infrastructure, so most students and researchers will be able to play around with Autopilot for free. Autopilot requires a minimum of 12 GB of RAM installed in your laptop and access to KVM, or the install will fail.

Autopilot will set up an OpenStack cloud using LXD containers. This means that the system may run just as well in a virtual machine as it would on dedicated hardware (I have yet to try this out). The Ubuntu website offers good documentation describing how to set up OpenStack with LXD containers on a single machine.

Red Hat RDO

RDO is a community project dedicated to using and deploying OpenStack on CentOS, Fedora, and Red Hat Enterprise Linux. It supports the PackStack OpenStack installer.

The RDO project provides documentation describing how to install OpenStack on a single node.

OpenStack on Ansible (OSA)

OpenStack on Ansible (OSA) uses the Ansible IT automation framework to deploy OpenStack. OSA offers an All-in-One (AIO) option to install OpenStack on a single machine. A developer at Rackspace wrote a blog post offering more information about how to use OSA AIO to install OpenStack on a single machine.

However, OSA currently has steep hardware requirements that a typical laptop computer cannot meet, such as eight vCPUs. Read the OpenStack on Ansible documentation to learn more.

Mirantis Fuel

Mirantis offers an OpenStack installer called Fuel. I found some old posts about setting up all-in-one systems using Mirantis Fuel, but newer releases no longer seem to mention all-in-one systems, and you have to fill out a form to request a trial version of Fuel. It therefore seems less suitable than the options listed above for individual researchers and students.

Video summary of all the options

Here is a YouTube video describing all the different options for Automated OpenStack Deployment.

OpenStack sandboxes online

Since my laptop computer did not meet the memory and CPU requirements recommended by most of the OpenStack installers mentioned above, I tried to install OpenStack on Amazon AWS. I could not install it because AWS does not offer a way to run KVM inside an Amazon AWS instance.

Instead, I found two organizations that offer online access to an OpenStack system that you can experiment with.

TryStack

TryStack is a cloud-based OpenStack sandbox created by Red Hat and the RDO community. The TryStack organization hosts a cluster of hardware running OpenStack that you may access to test OpenStack. It is intended to support developers but people who just want to try OpenStack may use it, too.

Ravello

Ravello Systems offers OpenStack scenarios in its repository, so it is an option for those wishing to experiment with OpenStack. The nodes run on Amazon AWS, but you access them through an account at Ravello Systems.

Ravello created a virtualization layer for AWS which enables nested virtualization in Amazon AWS instances. They use this technology to create a new cloud service on Amazon AWS that allows users to emulate complex networks of nodes running any software they install or create themselves.

Ravello uses proprietary software and costs money to use but, if I was going to pay to use AWS anyway, using Ravello is just a small step further. Also, the costs are low for small use-cases like mine. For example, running a small OpenStack lab will cost less than a dollar an hour.

Mininet-WiFi: SDN emulator supports WiFi networks

Mininet-WiFi is a fork of the Mininet SDN network emulator. The Mininet-WiFi developers extended the functionality of Mininet by adding virtualized WiFi stations and access points based on the standard Linux wireless drivers and the mac80211_hwsim wireless simulation driver. They also added classes to support the addition of these wireless devices in a Mininet network scenario and to emulate the attributes of a mobile station, such as position and movement relative to the access points.

Mininet-WiFi extends the base Mininet code by adding or modifying classes and scripts. So, Mininet-WiFi adds new functionality while still supporting all the normal SDN emulation capabilities of the standard Mininet network emulator.

In this post, I describe the unique functions available in the Mininet-WiFi network emulator and work through a few tutorials exploring its features.

Topics covered in this post

In this post, I present the basic functionality of Mininet-WiFi by working through a series of tutorials, each of which works through Mininet-WiFi features, while building on the knowledge presented in the previous tutorial. I suggest new users work through each tutorial in order.

I do not attempt to cover every feature in Mininet-WiFi. Once you work through the tutorials in this post, you will be well equipped to discover all the features in Mininet-WiFi by working through the Mininet-WiFi example scripts, and reading the Mininet-WiFi wiki and mailing list.

I assume the reader is already familiar with the Mininet network emulator so I cover only the new WiFi features added by Mininet-WiFi. If you are not familiar with Mininet, please read my Mininet network simulator review before proceeding. I have also written many other posts about Mininet.

I start by discussing the functionality that Mininet-WiFi adds to Mininet: Mobility functions and WiFi interfaces. Then I show how to install Mininet-WiFi and work through the tutorials listed below:

Tutorial #1: One access point shows how to run the simplest Mininet-WiFi scenario, shows how to capture wireless traffic in a Mininet-Wifi network, and discusses the issues with OpenFlow and wireless LANs.

Tutorial #2: Multiple access points shows how to create a more complex network topology so we can experiment with a very basic mobility scenario. It discusses more about OpenFlow and shows how the Mininet reference controller works in Mininet-WiFi.

Tutorial #3: Python API and scripts shows how to create more complex network topologies using the Mininet-WiFi Python API to define node positions in space and other node attributes. It also discusses how to interact with nodes running in a scenario with the Mininet-WiFi CLI, the Mininet-WiFi Python interpreter, and by running commands in a node’s shell.

Tutorial #4: Mobility shows how to create a network mobility scenario in which stations move through space and may move in and out of range of access points. It also discusses the available functions that may be used to implement different mobility models using the Mininet-WiFi Python API.

Mininet-WiFi compared to Mininet

Mininet-WiFi is an extension of the Mininet software-defined network emulator. The Mininet-WiFi developers added new wireless functionality without removing any existing Mininet functionality.

Mininet-WiFi and Mobility

Broadly defined, mobility in the context of data networking refers to the ability of a network to accommodate hosts moving from one part of the network to another. For example: a cell phone user may switch to a WiFi access point when she walks into a coffee shop; or a laptop user may walk from her office in one part of a building to a meeting room in another part of the building and still be able to connect to the network via the nearest WiFi access point.

While the standard Mininet network emulator may be used to test mobility1, Mininet-WiFi offers more options to emulate complex scenarios where many hosts change the switches to which they are connected. Mininet-WiFi adds new classes that simplify the programming work required by researchers to create mobility scenarios.

Mininet-WiFi does not modify the reference SDN controller provided by standard Mininet so the reference controller cannot manage the mobility of users in the wireless network. Researchers must use a remote controller that supports the CAPWAP protocol (NOTE: I’ve not tried this and I do not know if it will work without modifications or additional programming), or manually add and delete flows in the access points and switches.

802.11 Wireless LAN Emulation

Mininet-WiFi incorporates the Linux 802.11 SoftMAC wireless drivers, the cfg80211 wireless configuration interface, and the mac80211_hwsim wireless simulation driver in its access points.

The mac80211_hwsim driver is a software simulator for WiFi radios. It can be used to create virtual WiFi interfaces that use the 802.11 SoftMAC wireless LAN driver. Using this tool, researchers may emulate a WiFi link between virtual machines2. The mac80211_hwsim driver enables researchers to emulate the WiFi protocol control messages passing between virtual wireless access points and virtual mobile stations in a network emulation scenario. By default, mac80211_hwsim simulates perfect conditions, which means there is no packet loss or corruption.

You can use Wireshark to monitor wireless traffic passing between the virtual wireless access point and the virtual mobile stations in Mininet-WiFi network scenarios. But you will find it is difficult to capture wireless control traffic on standard WLAN interfaces like ap1-wlan0, because the Linux kernel strips wireless control messages and headers before making traffic on these interfaces available to user processes like Wireshark. You would have to install additional tools and follow a complex procedure to enable monitoring of WiFi traffic on the ap1-wlan0 interface. An easier method is available: look for the hwsim0 interface on an access point, enable it, and monitor traffic on it. The hwsim0 interface replays communications sent onto the access point’s simulated wireless interface(s), such as ap1-wlan0, without stripping any 802.11 headers or control traffic3. We’ll see this in the examples we work through, below.

Mininet-WiFi display graph

Since the locations of nodes in space are an important aspect of WiFi networks, Mininet-WiFi provides a graphical display showing the locations of WiFi nodes in a graph. The graph may be created by calling its method in the Mininet-WiFi Python API (see examples in the tutorials below).

The graph shows wireless access points and stations, their positions in space, and the effect of the range parameter for each node. The graph does not show any “wired” network elements, such as standard Mininet hosts and switches, or the Ethernet connections between access points, hosts, and switches.
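To make the position and range attributes concrete, here is a standalone Python sketch (illustrative only, not Mininet-WiFi code) that checks whether a station falls within an access point’s radio range, using the same “x,y,z” position convention that appears in the Mininet-WiFi API:

```python
import math

def in_range(sta_pos, ap_pos, ap_range):
    """Return True if the station is within the access point's radio range."""
    return math.dist(sta_pos, ap_pos) <= ap_range  # Euclidean distance

# Positions follow the 'x,y,z' convention used by Mininet-WiFi;
# a range of 30 means a 30-unit radius around the access point.
ap1_pos = (30, 30, 0)
print(in_range((50, 30, 0), ap1_pos, 30))  # distance 20 -> True
print(in_range((70, 30, 0), ap1_pos, 30))  # distance 40 -> False
```

The circles drawn around each node in the Mininet-WiFi graph express exactly this relationship: a station outside an access point’s circle is out of radio range.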

Install Mininet-WiFi on a Virtual Machine

First, we need to create a virtual machine that will run the Mininet-WiFi network emulator.

In the example below, we will use the VirtualBox virtual machine manager because it is open-source and runs on Windows, Mac OS, and Linux.

Set up a new Ubuntu Server VM

Install Ubuntu Server in a new VM. Download an Ubuntu Server ISO image from the Ubuntu web site. See my post about installing Debian Linux in a VM. Follow the same steps to install Ubuntu.

In this example, we will name the VM Mininet-WiFi.

Set up the Mininet-WiFi VM

To ensure that the VM can display X applications such as Wireshark on your host computer’s desktop, read through my post about setting up the standard Mininet VM and set up the host-only network adapter, the X windows server, and your SSH software.

Now you can connect to the VM via SSH with X forwarding enabled. In the example below, my host computer is t420 and the Mininet-WiFi VM is named wifi. In this case, the userid on the Mininet-WiFi VM is brian.

t420:~$ ssh -X brian@192.168.52.101
wifi:~$

Install Mininet-WiFi

In the Mininet-WiFi VM, install a few other tools and then download and compile Mininet-WiFi. The Mininet-WiFi developers created a helpful install script so the process is automatic.

wifi:~$ sudo apt-get update
wifi:~$ sudo apt-get install git make
wifi:~$ git clone https://github.com/intrig-unicamp/mininet-wifi
wifi:~$ cd mininet-wifi

Mininet-WiFi is installed by a script. Run the script with the -h help option to see all the options available.

wifi:~$ util/install.sh -h

In my case, I chose to install Mininet-WiFi with the following options:

  • W: install Mininet-WiFi dependencies
  • n: install Mininet dependencies + core files
  • f: install OpenFlow
  • 3: install OpenFlow 1.3
  • v: install Open vSwitch
  • p: install POX OpenFlow Controller
  • w: install Wireshark

So I ran the install script as follows:

wifi:~$ sudo util/install.sh -Wnf3vpw

Mininet-WiFi Tutorial #1: One access point

The simplest network is the default topology, which consists of a wireless access point with two wireless stations. The access point is a switch connected to a controller. The stations are hosts.

This simple lab will allow us to demonstrate how to capture wireless control traffic and will demonstrate the way an OpenFlow-enabled access point handles WiFi traffic on the wlan interface.

Capturing Wireless control traffic in Mininet-WiFi

To view wireless control traffic we must first start Wireshark:

wifi:~$ wireshark &

Then, start Mininet-WiFi with the default network scenario using the command below:

wifi:~$ sudo mn --wifi

Next, enable the hwsim0 interface. The hwsim0 interface is a software interface, created for the simulated radios, that replays all wireless traffic passing on the virtual wireless interfaces in the network scenario. It is the easiest way to monitor the wireless packets in Mininet-WiFi.

mininet-wifi> sh ifconfig hwsim0 up

Now, in Wireshark, refresh the interfaces and then start capturing packets on the hwsim0 interface.

Start capture on hwsim0 interface

You should see wireless control traffic. Next, run a ping command:

mininet-wifi> sta1 ping sta2

In Wireshark, you should see the wireless frames, and the ICMP packets encapsulated in wireless frames, passing through the hwsim0 interface.

Wireshark capturing WiFi control traffic

Stop the ping command by pressing Ctrl-C. In this default setup, any flows created in the access point (that’s if they’re created — see below for more on this issue) will expire in 60 seconds.

Wireless Access Points and OpenFlow

In this simple scenario, the access point has only one interface, ap1-wlan0. By default, stations associated with an access point connect in infrastructure mode so wireless traffic between stations must pass through the access point. If the access point works similarly to a switch in standard Mininet, we expect to see OpenFlow messages exchanged between the access point and the controller whenever the access point sees traffic for which it does not already have flows established.

To view OpenFlow packets, stop the Wireshark capture and switch to the loopback interface. Start capturing again on the loopback interface. Use the OpenFlow_1.0 filter to view only OpenFlow messages.

Then, start some traffic running with the ping command and look at the OpenFlow messages captured in Wireshark.

mininet-wifi> sta1 ping sta2    

I was expecting that the first ICMP packet generated by the ping command would be flooded to the controller, and the controller would set up flows on the access point so the two stations could exchange packets. Instead, I found that the two stations were able to exchange packets immediately and the access point did not flood the ICMP packets to the controller. Only an ARP packet, which is in a broadcast frame, gets flooded to the controller, and it is ignored.

No OpenFlow messages passing to the controller

Check to see if flows have been created in the access point:

mininet-wifi> dpctl dump-flows
*** ap1 ------------------------------------------
NXST_FLOW reply (xid=0x4):

We see that no flows have been created on the access point. How, then, do the two stations communicate with each other?

I do not know the answer, but I have an idea. My research indicates that OpenFlow-enabled switches (using OpenFlow 1.0 or 1.3) will reject “hairpin connections”, which are flows that cause traffic to be sent out the same port on which it was received. A wireless access point, by design, receives and sends packets on the same wireless interface, so stations connected to the same wireless access point would require a “hairpin connection” on the access point to communicate with each other. I surmise that, to handle this issue, Linux treats the WLAN interface in each access point as a “hub” for the radio network sta1-ap1-sta2: ap1-wlan0 provides the “hub” functionality for data passing between sta1 and sta2, switching packets in the wireless domain, and will not bring a packet into the “Ethernet switch” part of access point ap1 unless the packet must be switched out an interface on ap1 other than ap1-wlan0.

Stop the tutorial

Stop the Mininet ping command by pressing Ctrl-C.

In the Wireshark window, stop capturing and quit Wireshark.

Stop Mininet-WiFi and clean up the system with the following commands:

mininet-wifi> exit
wifi:~$ sudo mn -c

Mininet-WiFi Tutorial #2: Multiple access points

When we create a network scenario with two or more wireless access points, we can show more of the functions available in Mininet-WiFi.

In this tutorial, we will create a linear topology with three access points, where one station is connected to each access point. Remember, you need to already know basic Mininet commands to appreciate how we create topologies using the Mininet command line.

Run Mininet-Wifi and create a linear topology with three access points:

wifi:~$ sudo mn --wifi --topo linear,3

From the output of the command, we can see how the network is set up and which stations are associated with which access points.

*** Creating network
*** Adding controller
*** Adding hosts and stations:
sta1 sta2 sta3
*** Adding switches and access point(s):
ap1 ap2 ap3
*** Adding links and associating station(s):
(ap2, ap1) (ap3, ap2) (sta1, ap1) (sta2, ap2) (sta3, ap3)
*** Starting controller(s)
c0
*** Starting switches and access points
ap1 ap2 ap3 ...
*** Starting CLI:
mininet-wifi>

We can also verify the configuration using the Mininet CLI commands net and dump.

For example, we can run the net command to see the connections between nodes:

mininet-wifi> net
sta1 sta1-wlan0:None
sta2 sta2-wlan0:None
sta3 sta3-wlan0:None
ap1 lo:  ap1-eth1:ap2-eth1
ap2 lo:  ap2-eth1:ap1-eth1 ap2-eth2:ap3-eth1
ap3 lo:  ap3-eth1:ap2-eth2
c0

From the net command above, we see that ap1, ap2, and ap3 are connected together in a linear fashion by Ethernet links. But we do not see any information about which access point each station is connected to. This is because the stations are connected over a “radio” interface, so we need to run the iw command on each station to observe which access point it is associated with.

To check which access points are “visible” to each station, use the iw scan command:

mininet-wifi> sta1 iw dev sta1-wlan0 scan | grep ssid
        SSID: ssid_ap1
        SSID: ssid_ap2
        SSID: ssid_ap3

Verify the access point to which each station is currently connected with the iw link command. For example, to see the access point to which station sta1 is connected, use the following command:

mininet-wifi> sta1 iw dev sta1-wlan0 link
Connected to 02:00:00:00:03:00 (on sta1-wlan0)
        SSID: ssid_ap1
        freq: 2412
        RX: 1853238 bytes (33672 packets)
        TX: 7871 bytes (174 packets)
        signal: -30 dBm
        tx bitrate: 54.0 MBit/s

        bss flags:      short-slot-time
        dtim period:    2
        beacon int:     100
mininet-wifi>

A simple mobility scenario

In this example, each station is connected to a different wireless access point. We can use the iw command to change the access point to which each station is connected.

Note: The iw commands may be used in static scenarios like this but should not be used when Mininet-WiFi automatically assigns associations in more realistic mobility scenarios. We’ll discuss how Mininet-WiFi handles real mobility and how to use iw commands with Mininet-WiFi later in this post.

Let’s decide we want sta1, which is currently associated with ap1, to change its association to ap2. Manually switch the sta1 association from ap1 (which is ssid_ap1) to ap2 (which is ssid_ap2) using the following commands:

mininet-wifi> sta1 iw dev sta1-wlan0 disconnect
mininet-wifi> sta1 iw dev sta1-wlan0 connect ssid_ap2

Verify the change with the iw link command:

mininet-wifi> sta1 iw dev sta1-wlan0 link
Connected to 02:00:00:00:04:00 (on sta1-wlan0)
        SSID: ssid_ap2
        freq: 2412
        RX: 112 bytes (4 packets)
        TX: 103 bytes (2 packets)
        signal: -30 dBm
        tx bitrate: 1.0 MBit/s

        bss flags:      short-slot-time
        dtim period:    2
        beacon int:     100
mininet-wifi>

We see that sta1 is now associated with ap2.

So we’ve demonstrated a basic way to make stations mobile, where they switch their association from one access point to another.

OpenFlow flows in a mobility scenario

Now let’s see how the Mininet reference controller handles this simple mobility scenario.

We need to get some traffic running from sta1 to sta3 in a way that allows us to access the Mininet-WiFi command line. We’ll run the ping command in an xterm window on sta3.

First, check the IP addresses on sta1 and sta3 so we know which parameters to use in our test. The easiest way to see all IP addresses is to run the dump command:

mininet-wifi> dump
<Host sta1: sta1-wlan0:10.0.0.1 pid=7091>
<Host sta2: sta2-wlan0:10.0.0.2 pid=7094>
<Host sta3: sta3-wlan0:10.0.0.3 pid=7097>
<OVSSwitch ap1: lo:127.0.0.1,ap1-eth1:None pid=7106>
<OVSSwitch ap2: lo:127.0.0.1,ap2-eth1:None,ap2-eth2:None pid=7110>
<OVSSwitch ap3: lo:127.0.0.1,ap3-eth1:None pid=7114>
<Controller c0: 127.0.0.1:6633 pid=7080>
mininet-wifi>    

So we see that sta1 has IP address 10.0.0.1 and sta3 has IP address 10.0.0.3.

Next, start an xterm window on sta3:

mininet-wifi> xterm sta3

This opens an xterm window from sta3.

xterm window on sta3

In that window, run the following command to send ICMP messages from sta3 to sta1:

root@mininet-wifi:~# ping 10.0.0.1

Since these packets will be forwarded by the associated access points out a port other than the port on which the packets were received, the access points will operate like normal OpenFlow-enabled switches. Each access point will forward the first ping packet it receives in each direction to the Mininet reference controller. The controller will set up flows on the access points to establish a connection between the stations sta1 and sta3.

If we run Wireshark and enable packet capture on the loopback interface, then filter using of (for Ubuntu 14.04) or openflow_v1 (for Ubuntu 15.10 and later), we will see OpenFlow messages passing to and from the controller.

Wireshark capturing OpenFlow messages

Now, in the Mininet CLI, check the flows on each switch with the dpctl dump-flows command.

mininet-wifi> dpctl dump-flows
*** ap1 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
*** ap2 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
idle_timeout=60, idle_age=0, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=2 actions=output:3
 cookie=0x0, duration=1068.17s, table=0, n_packets=35, n_bytes=1470, idle_timeout=60, idle_age=0, priority=65535,arp,in_port=3,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=1 actions=output:2
 cookie=0x0, duration=1073.174s, table=0, n_packets=1073, n_bytes=105154, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=3,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:2
 cookie=0x0, duration=1073.175s, table=0, n_packets=1073, n_bytes=105154, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:3
*** ap3 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=1068.176s, table=0, n_packets=35, n_bytes=1470, idle_timeout=60, idle_age=0, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=2 actions=output:1
idle_timeout=60, idle_age=0, priority=65535,arp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=1 actions=output:2
 cookie=0x0, duration=1073.182s, table=0, n_packets=1073, n_bytes=105154, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:2
 cookie=0x0, duration=1073.185s, table=0, n_packets=1073, n_bytes=105154, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:1
mininet-wifi>

We see flows set up on ap2 and ap3, but not on ap1. This is because sta1 is connected to ap2 and sta3 is connected to ap3 so all traffic is passing through only ap2 and ap3.

What will happen if sta1 moves back to ap1? Move sta1 back to access point ap1 with the following commands:

mininet-wifi> sta1 iw dev sta1-wlan0 disconnect
mininet-wifi> sta1 iw dev sta1-wlan0 connect ssid_ap1

The ping command running on sta3 stops working. We see no more pings completed.

In this case, access points ap2 and ap3 already have flows for ICMP messages coming from sta3 so they just keep sending packets towards the ap2-wlan0 interface to reach where they think sta1 is connected. Since ping messages never get to sta1 in its new location, the access point ap1 never sees any ICMP traffic so does not request any flow updates from the controller.

Check the flow tables in the access points again:

mininet-wifi> dpctl dump-flows
*** ap1 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=40.959s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=40, priority=65535,arp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=1 actions=output:2
 cookie=0x0, duration=40.958s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=40, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=2 actions=output:1
*** ap2 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=40.968s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=40, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=1 actions=output:1
 cookie=0x0, duration=40.964s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=40, priority=65535,arp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=2 actions=output:2
 cookie=0x0, duration=1214.279s, table=0, n_packets=1214, n_bytes=118972, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:3
*** ap3 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=40.978s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=40, priority=65535,arp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,arp_spa=10.0.0.3,arp_tpa=10.0.0.1,arp_op=1 actions=output:1
 cookie=0x0, duration=40.971s, table=0, n_packets=1, n_bytes=42, idle_timeout=60, idle_age=40, priority=65535,arp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,arp_spa=10.0.0.1,arp_tpa=10.0.0.3,arp_op=2 actions=output:2
 cookie=0x0, duration=1214.288s, table=0, n_packets=1214, n_bytes=118972, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:1
mininet-wifi>

The controller sees some LLC messages from sta1 but does not recognize that sta1 has moved to a new access point, so it does nothing. Since the controller does not modify any flows in the access points, none of the ICMP packets still being generated by sta3 will reach sta1, so it cannot reply. This situation will persist as long as the access points ap2 and ap3 continue to see ICMP packets from sta3, which keeps the old flow information alive in their flow tables.
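The stale-flow problem can be sketched with a toy flow-table lookup in plain Python (an illustration of the concept, not Open vSwitch internals): an installed flow keeps matching and forwarding to its old output port, and the controller is only consulted when no flow matches.

```python
# Toy flow table keyed on (switch, src IP, dst IP); values are output ports.
flows = {('ap2', '10.0.0.3', '10.0.0.1'): 'ap2-wlan0'}

def forward(switch, src, dst):
    """Return the output port for a packet, or punt to the controller."""
    # An existing flow matches first; the controller is never consulted.
    return flows.get((switch, src, dst), 'send-to-controller')

# The stale flow still matches, so pings to the moved station are lost:
print(forward('ap2', '10.0.0.3', '10.0.0.1'))  # ap2-wlan0 (now the wrong port)

flows.clear()  # the effect of 'dpctl del-flows'
print(forward('ap2', '10.0.0.3', '10.0.0.1'))  # send-to-controller
```

Only after the table is emptied does the lookup miss, which is what forces the access points to ask the controller for fresh flows.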

One “brute force” way to resolve this situation is to delete the flows on the switches. In this simple example, it’s easier to just delete all flows.

Delete the flows in the access points using the command below:

mininet-wifi> dpctl del-flows

Now the ping command running in the xterm window on sta3 should show that pings are being completed again.

Once all flows are deleted, ICMP messages received by the access points do not match any existing flows, so the access points communicate with the controller to set up new flows. If we dump the flows, we see that the ICMP packets passing between sta3 and sta1 are now traversing all three access points.

mininet-wifi> dpctl dump-flows
*** ap1 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=10.41s, table=0, n_packets=11, n_bytes=1078, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:1
 cookie=0x0, duration=9.41s, table=0, n_packets=10, n_bytes=980, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:2
*** ap2 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=10.414s, table=0, n_packets=11, n_bytes=1078, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:2
 cookie=0x0, duration=9.417s, table=0, n_packets=10, n_bytes=980, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:1
*** ap3 -----------------------------------------------
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=10.421s, table=0, n_packets=11, n_bytes=1078, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=1,vlan_tci=0x0000,dl_src=02:00:00:00:00:00,dl_dst=02:00:00:00:02:00,nw_src=10.0.0.1,nw_dst=10.0.0.3,nw_tos=0,icmp_type=0,icmp_code=0 actions=output:2
 cookie=0x0, duration=9.427s, table=0, n_packets=10, n_bytes=980, idle_timeout=60, idle_age=0, priority=65535,icmp,in_port=2,vlan_tci=0x0000,dl_src=02:00:00:00:02:00,dl_dst=02:00:00:00:00:00,nw_src=10.0.0.3,nw_dst=10.0.0.1,nw_tos=0,icmp_type=8,icmp_code=0 actions=output:1
mininet-wifi>

We have shown how the Mininet reference controller works in Mininet-WiFi. The Mininet reference controller does not have the ability to detect when a station moves from one access point to another. When this happens, we must delete the existing flows so that new flows can be created. We would need to use a more advanced remote controller, such as OpenDaylight, to enable station mobility, but that is a topic outside the scope of this post.

Stop the tutorial

Stop the Mininet ping command by pressing Ctrl-C.

In the Wireshark window, stop capturing and quit Wireshark.

Stop Mininet-WiFi and clean up the system with the following commands:

mininet-wifi> exit
wifi:~$ sudo mn -c

Mininet-WiFi Tutorial #3: Python API and scripts

Mininet provides a Python API so users can create simple Python scripts that will set up custom topologies. Mininet-WiFi extends this API to support a wireless environment.

When you use the normal Mininet mn command with the --wifi option to create Mininet-WiFi topologies, you do not have access to most of the extended functionality provided in Mininet-WiFi. To access features that allow you to emulate the behavior of nodes in a wireless LAN, you need to use the Mininet-WiFi extensions to the Mininet Python API.

The Mininet-WiFi Python API

The Mininet-WiFi developers added new classes to Mininet to support emulation of nodes in a wireless environment. Mininet-WiFi adds addStation and addBaseStation methods, and a modified addLink method to define the wireless environment.

If you are just beginning to write scripts for Mininet-WiFi, you can use the example scripts as a starting point. The Mininet-WiFi developers created example scripts that show how to use most of the features in Mininet-WiFi. In all of the tutorials I show below, I started with an example script and modified it.

Mininet-WiFi example scripts are in the ~/mininet-wifi/examples directory.

Basic station and access point methods

In a simple scenario, you may add a station and an access point with the following methods in a Mininet-WiFi Python script:

Add a new station named sta1, with all parameters set to default values:

net.addStation( 'sta1' )

Add a new access point named ap1, with SSID new_ssid, and all other parameters set to default values:

net.addBaseStation( 'ap1',  ssid='new_ssid' )

Add a wireless association between station and access point, with default values for link attributes:

net.addLink( ap1, sta1 )

For more complex scenarios, more parameters are available for each method. You may specify the MAC address, IP address, location in three-dimensional space, radio range, and more. For example, the following code defines an access point and a station, creates an association (a wireless connection) between the two nodes, and applies traffic control parameters to the connection to make it more like a realistic radio environment, adding bandwidth restrictions, an error rate, and a propagation delay:

Add a station and specify the wireless encryption method, the station MAC address, IP address, and position in virtual space:

net.addStation( 'sta1', passwd='123456789a', encrypt='wpa2', mac='00:00:00:00:00:02', ip='10.0.0.2/8', position='50,30,0' ) 

Add an access point and specify the wireless encryption method, SSID, wireless mode, channel, position, and radio range:

net.addBaseStation( 'ap1', passwd='123456789a', encrypt='wpa2', ssid= 'ap1-ssid', mode= 'g', channel= '1', position='30,30,0', range=30 )

Add a wireless association between a station and an access point and specify link properties of maximum bandwidth, error rate, and delay:

net.addLink( ap1, sta1, bw='11Mbps', loss='0.1%', delay='15ms' )

To activate association control in a static network, you may use the associationControl method, which makes Mininet-WiFi automatically choose which access point each station will connect to, based on the distance between stations and access points. For example, use the following method to apply the strongest-signal-first criterion when determining connections between stations and access points:

net.associationControl( 'ssf' )
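
The "strongest signal first" policy can be reasoned about geometrically: with identical transmit power, the nearest in-range access point delivers the strongest signal. The following standalone sketch illustrates that selection logic; it is my own illustration, not Mininet-WiFi's implementation.

```python
from math import sqrt

def pick_ap_ssf(station_pos, aps):
    """Pick the nearest in-range AP: with equal transmit power,
    nearest means strongest signal. aps holds (name, position, range)."""
    candidates = []
    for name, pos, rng in aps:
        d = sqrt(sum((a - b) ** 2 for a, b in zip(station_pos, pos)))
        if d <= rng:
            candidates.append((d, name))
    return min(candidates)[1] if candidates else None

# A station at (50, 30, 0) between two APs, each with a 30 m range:
aps = [('ap1', (30, 30, 0), 30), ('ap2', (90, 30, 0), 30)]
print(pick_ap_ssf((50, 30, 0), aps))  # ap1: 20 m away and in range
```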

Classic Mininet API

The Mininet-WiFi Python API still supports the standard Mininet node types: switches, hosts, and controllers. For example:

Add a host. Note that the station discussed above is a type of host node with a wireless interface instead of an Ethernet interface.

net.addHost( 'h1' )

Add a switch. Note that the access point discussed above is a type of switch that has one wireless interface (wlan0) and any number of Ethernet interfaces (up to the maximum supported by your installed version of Open vSwitch).

net.addSwitch( 's1' )

Add an Ethernet link between two nodes. Note that if you use addLink to connect two access points together (and are using the default Infrastructure mode), Mininet-WiFi creates an Ethernet link between them.

net.addLink( s1, h1 )

Add a controller:

net.addController( 'c0' )

Using the Python API, you may build a topology that includes hosts, switches, stations, access points, and multiple controllers.

Mininet-WiFi network with node positions

In the example below, I created a Python program that will set up two stations connected to two access points, and set node positions and radio range so that we can see how these properties affect the emulated network. I used the Mininet-WiFi example script 2AccessPoints.py as the base for the script shown below, then I added the position information to each node and enabled association control.

#!/usr/bin/python

from mininet.net import Mininet
from mininet.node import Controller,OVSKernelSwitch
from mininet.link import TCLink
from mininet.cli import CLI
from mininet.log import setLogLevel

def topology():

    net = Mininet( controller=Controller, link=TCLink, switch=OVSKernelSwitch )

    print "*** Creating nodes"
    ap1 = net.addBaseStation( 'ap1', ssid= 'ssid-ap1', mode= 'g', channel= '1', position='10,30,0', range='20' )
    ap2 = net.addBaseStation( 'ap2', ssid= 'ssid-ap2', mode= 'g', channel= '6', position='50,30,0', range='20' )
    sta1 = net.addStation( 'sta1', mac='00:00:00:00:00:01', ip='10.0.0.1/8', position='10,20,0' )
    sta2 = net.addStation( 'sta2', mac='00:00:00:00:00:02', ip='10.0.0.2/8', position='50,20,0' )
    c1 = net.addController( 'c1', controller=Controller )

    """plot graph"""
    net.plotGraph(max_x=60, max_y=60)

    # Comment out the following two lines to disable AP
    print "*** Enabling association control (AP)"
    net.associationControl( 'ssf' )        

    print "*** Creating links and associations"
    net.addLink( ap1, ap2 )
    net.addLink( ap1, sta1 )
    net.addLink( ap2, sta2 )

    print "*** Starting network"
    net.build()
    c1.start()
    ap1.start( [c1] )
    ap2.start( [c1] )

    print "*** Running CLI"
    CLI( net )

    print "*** Stopping network"
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    topology()

I saved the file with the name position-test.py and made it executable.

Working with Mininet-WiFi during runtime

Mininet-WiFi Python scripts may be run from the command line by executing the script directly, or by passing it to the Python interpreter. The only difference is how the path is stated. For example:

wifi:~/scripts $ sudo ./position-test.py

or,

wifi:~$ sudo python position-test.py

The position-test.py script will open the Mininet-WiFi graph window and show the location of each wireless node in space, and the range attribute of each node.

The position-test.py script running

While the scenario is running, we can query information about the network from either the Mininet-WiFi command line or from the Python interpreter and we can log into running nodes to gather information or make configuration changes.

Mininet-WiFi CLI

The Python script position-test.py places nodes in specific positions. When the scenario is running, we can use the Mininet-WiFi command line interface (CLI) commands to check the geometric relationships between nodes in space, and information about each node.

Position

The position CLI command outputs the location of a node in virtual space as measured by three values, one for each of the X, Y, and Z axes.

Suppose we want to know the position of the access point ap1 in the network scenario’s virtual space. We may use the position CLI command to view a node’s position:

mininet-wifi> position ap1
----------------
Position of ap1
----------------
Position X: 10.00
Position Y: 30.00
Position Z: 0.00

We may also check the position of the station sta2:

mininet-wifi> position sta2
----------------
Position of sta2
----------------
Position X: 50.00
Position Y: 20.00
Position Z: 0.00

Distance

The distance CLI command tells us the distance between two nodes.

For example, we may check how far apart access point ap1 and station sta2 are from each other using the distance CLI command:

mininet-wifi> distance ap1 sta2        
The distance between ap1 and sta2 is 41.23 meters
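
The reported distance is just the Euclidean distance between the two node positions set in position-test.py, which we can verify directly:

```python
from math import sqrt

# Positions from position-test.py: ap1 at (10, 30, 0), sta2 at (50, 20, 0)
ap1 = (10, 30, 0)
sta2 = (50, 20, 0)

distance = sqrt(sum((a - b) ** 2 for a, b in zip(ap1, sta2)))
print(round(distance, 2))  # prints 41.23, matching the CLI output
```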

Info

The info CLI command prints information about each node running in the scenario.

For example, to see information about access point ap1, enter the CLI command:

mininet-wifi> info ap1
Tx-Power: [20] dBm
SSID: ssid-ap1
Number of Associated Stations: 1

To see information about station sta1, enter the CLI command:

mininet-wifi> info sta1
--------------------------------
Interface: sta1-wlan0
Associated To: ap1
Frequency: 2.412 GHz
Signal level: -40.10 dbm
Tx-Power: 20 dBm
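
The frequency reported for sta1 follows directly from the channel of the access point it is associated with: in the 2.4 GHz band, channels 1 through 13 are spaced 5 MHz apart, with channel 1 centered at 2412 MHz. A quick check of that standard mapping (not Mininet-WiFi code):

```python
def channel_to_ghz(channel):
    """Center frequency for 2.4 GHz Wi-Fi channels 1-13: 2407 + 5*ch MHz."""
    return (2407 + 5 * channel) / 1000.0

print(channel_to_ghz(1))  # channel 1, used by ap1 in position-test.py
print(channel_to_ghz(6))  # channel 6, used by ap2
```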

Mininet-WiFi Python runtime interpreter

In addition to the CLI, Mininet-WiFi supports running Python code directly at the command line using the py command. Simple Python functions may be called to get additional information about the network, or to make simple changes while the scenario is running.

The full range of useful Python functions is not documented, but you can read the source code to see functions that may be useful. The examples below are in the source code file ~/mininet-wifi/mininet/net.py, starting around line 780.

Getting network information

The examples I show below are useful for gathering information about stations and access points.

To see the range of an access point or station, call the range function. Call it using the name of the node followed by the function as shown below for access point ap1:

mininet-wifi> py ap1.range
20

To see which station is associated with an access point (in this example ap1), or the number of stations associated with an access point, call the associatedStations and nAssociatedStations functions:

mininet-wifi> py ap1.associatedStations
[<Host sta1: sta1-wlan0:10.0.0.1 pid=3845> ]
mininet-wifi> py ap1.nAssociatedStations
1

To see which access point is associated with a station (in this example sta1) call the associatedAp function:

mininet-wifi> py sta1.associatedAp
[<OVSSwitch ap1: lo:127.0.0.1,ap1-eth1:None pid=3862> ]

You may also query the received signal strength indicator (rssi), transmitted power (txpower), service set identifier (ssid), channel, and frequency of each wireless node using the Python interpreter.

As we can see, the output of these Python functions is formatted as strings and numbers that may sometimes be hard to read. This is because these functions are built to support the program, not to be read by humans. However, if you know which functions are available to be called at the Mininet-WiFi command line, you will be able to get information you cannot get through the standard Mininet-WiFi CLI.

Changing the network during runtime

Mininet-WiFi provides Python functions that can be used during runtime to make changes to node positions and associations. These functions are useful when we have a static setup and want to make arbitrary changes on demand. This makes it possible to do testing or demonstrations with carefully controlled scenarios.

To change the access point to which a station is associated (provided the access point is within range):

sta1.moveAssociationTo('sta1-wlan0', 'ap1') 

To move a station or access point in space to another coordinate position:

sta1.moveStationTo('40,20,40')


To change the range of a station or access point:

sta1.setRange(100)

The commands above will all impact which access points and which stations associate with each other. The behavior of the network will be different depending on whether association control is enabled or disabled in the position-test.py script.

Running commands in nodes

When running a scenario, users may make configuration changes on nodes to implement some additional functionality. This can be done from the Mininet-WiFi command line by sending commands to the node’s command shell. Start the command with the name of the node followed by a space, then enter the command to run on that node.

For example, to see information about the WLAN interface on a station named sta1, run the command:

mininet-wifi> sta1 iw dev sta1-wlan0 link

Another way to run commands on nodes is to open an xterm window on that node and enter commands in the xterm window. For example, to open an xterm window on station sta1, run the command:

mininet-wifi> xterm sta1

Running commands on nodes is a standard Mininet feature but it is also an advanced topic. See the Mininet documentation for more details. You can run simple commands such as ping or iwconfig, but more advanced commands may require you to mount private directories for configuration or log files.

Mininet-WiFi and shell commands

Mininet-WiFi manages the effect of range using code that calculates the ability of each node to connect with other nodes. However, Mininet-WiFi does not change the way networking works at the operating system level. So iw commands executed on nodes bypass Mininet-WiFi and do not gather the information Mininet-WiFi generates about the network.

I suggest you do not rely on iw commands. For example, the iw scan command will still show that sta1 can detect the SSIDs of all access points, even the access point ap2 which should be out of range. The iw link command will show the same signal strength regardless of how far the station is from the access point, while the Mininet-WiFi info command will show the calculated signal strength based on the propagation model and distance between nodes.

For example, the iw command run on sta1 shows received signal strength is -30 dBm. This never changes no matter how far the station is from the access point.

mininet-wifi> sta1 iw dev sta1-wlan0 link
Connected to 02:00:00:00:00:00 (on sta1-wlan0)
        SSID: ssid-ap1
        freq: 2412
        RX: 164628 bytes (2993 packets)
        TX: 775 bytes (10 packets)
        signal: -30 dBm
        tx bitrate: 6.0 MBit/s

        bss flags:      short-slot-time
        dtim period:    2
        beacon int:     100

The info command shows Mininet-WiFi’s calculated signal strength received by the station is -43.11 dBm. This value will change if you reposition the station.

mininet-wifi> info sta1
--------------------------------
Interface: sta1-wlan0
Associated To: ap1
Frequency: 2.412 GHz
Signal level: -43.11 dbm
Tx-Power: 20 dBm

When working with Mininet-WiFi during runtime, use the built-in Mininet-WiFi commands or use the Python functions to check the wireless attributes of nodes.
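
Mininet-WiFi derives the calculated signal level from a propagation model and the distance between nodes. As a rough illustration of why the value changes with position, here is the common log-distance path loss model with hypothetical parameters (Mininet-WiFi supports several propagation models; these numbers are not its defaults):

```python
from math import log10

def rssi_dbm(tx_power_dbm, distance_m, exponent=3.0, ref_loss_db=40.0):
    """Log-distance model: loss grows by 10*n*log10(d) past a 1 m reference."""
    return tx_power_dbm - (ref_loss_db + 10 * exponent * log10(max(distance_m, 1.0)))

# With a 20 dBm transmitter, the computed signal weakens with distance:
for d in (2, 10, 20):
    print(d, round(rssi_dbm(20, d), 2))
```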

Stop the tutorial

Stop Mininet-WiFi and clean up the system with the following commands:

mininet-wifi> exit
wifi:~$ sudo mn -c

Mininet-WiFi Tutorial #4: Mobility

The more interesting features provided by Mininet-WiFi support mobile stations moving around in virtual space. Mininet-WiFi provides new methods in its Python API, such as startMobility and mobility, with which we may specify a wide variety of wireless LAN scenarios by controlling station movement, access point range, radio propagation models, and more.

In this tutorial, we will create a scenario where one station moves about in space, and where it changes which access point it connects to, based on which access point is the closest.

Python API and mobility

The Mininet-WiFi Python API adds new methods that allow the user to create stations that move around in virtual space when an emulation scenario is running.

To move a station in a straight line, use the net.startMobility and net.mobility methods. See the example script wifiMobility.py. For example, to move a station from one position to another over a period of 60 seconds, add the following lines to your script:

net.startMobility( startTime=0 )
net.mobility( 'sta1', 'start', time=1, position='10,20,0' )
net.mobility( 'sta1', 'stop', time=59, position='30,50,0' )
net.stopMobility( stopTime=60 )
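
Between the two waypoints, the station moves along a straight line at constant speed. The effective position at any time can be sketched as plain linear interpolation (an illustration of the behavior, not the emulator's internal code):

```python
def position_at(t, start, stop, t_start, t_stop):
    """Linearly interpolate a station's position between two waypoints."""
    t = max(t_start, min(t, t_stop))             # clamp to the move window
    f = (t - t_start) / float(t_stop - t_start)  # fraction of the move done
    return tuple(a + f * (b - a) for a, b in zip(start, stop))

# Using the waypoints above: (10, 20, 0) at t=1 and (30, 50, 0) at t=59
print(position_at(30, (10, 20, 0), (30, 50, 0), 1, 59))  # midpoint of the move
```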

Mininet-WiFi can also automatically move stations around based on predefined mobility models. See the example script wifiMobilityModel.py. Available mobility models are: RandomWalk, TruncatedLevyWalk, RandomDirection, RandomWayPoint, GaussMarkov, ReferencePoint, and TimeVariantCommunity. For example, to move a station around in an area 60 meters by 60 meters with a minimum velocity of 0.1 meters per second and a maximum velocity of 0.2 meters per second, add the following line to your script:

net.startMobility(startTime=0, model='RandomDirection', max_x=60, max_y=60, min_v=0.1, max_v=0.2)

Mininet-WiFi will automatically connect and disconnect stations to and from access points based on either calculated signal strength or load level. See the example script wifiAssociationControl.py. To use association control, add the AC parameter to the net.startMobility call. For example, to switch access points based on the “least loaded first” criteria, add the following line to your script:

net.startMobility(startTime=0, model='RandomWayPoint', max_x=140, max_y=140, min_v=0.7, max_v=0.9, AC='llf')

The valid values for the AC parameter are:

  • llf (Least-Loaded-First)
  • ssf (Strongest-Signal-First)

When creating a scenario where stations will be mobile, we may set the range of the access points. In an example where we use “strongest signal first” as the Association Control method, the range of each access point will determine where handoffs occur between access points and which stations may connect to which access points. If you do not define the range, Mininet-WiFi assigns a default value.

Mininet-WiFi supports more methods than mentioned above. See the example scripts (mentioned further below) for examples of using other methods.

Moving a station in virtual space

A simple way to demonstrate how Mininet-WiFi implements scenarios with mobile stations that hand off between access points is to create a script that moves one station across a path that passes by three access points.

The example below will create three access points — ap1, ap2, and ap3 — arranged in a line at differing distances from each other. It also creates a host h1 to serve as a test server and a mobile station sta1 and moves sta1 across space past all three access points.

#!/usr/bin/python

from mininet.net import Mininet
from mininet.node import Controller,OVSKernelSwitch
from mininet.link import TCLink
from mininet.cli import CLI
from mininet.log import setLogLevel

def topology():

    net = Mininet( controller=Controller, link=TCLink, switch=OVSKernelSwitch )

    print "*** Creating nodes"
    h1 = net.addHost( 'h1', mac='00:00:00:00:00:01', ip='10.0.0.1/8' )
    sta1 = net.addStation( 'sta1', mac='00:00:00:00:00:02', ip='10.0.0.2/8', range='20' )
    ap1 = net.addBaseStation( 'ap1', ssid= 'ap1-ssid', mode= 'g', channel= '1', position='30,50,0', range='30' )
    ap2 = net.addBaseStation( 'ap2', ssid= 'ap2-ssid', mode= 'g', channel= '1', position='90,50,0', range='30' )
    ap3 = net.addBaseStation( 'ap3', ssid= 'ap3-ssid', mode= 'g', channel= '1', position='130,50,0', range='30' )
    c1 = net.addController( 'c1', controller=Controller )

    print "*** Associating and Creating links"
    net.addLink(ap1, h1)
    net.addLink(ap1, ap2)
    net.addLink(ap2, ap3)

    print "*** Starting network"
    net.build()
    c1.start()
    ap1.start( [c1] )
    ap2.start( [c1] )
    ap3.start( [c1] )

    net.plotGraph(max_x=160, max_y=160)

    net.startMobility(startTime=0, AC='ssf')
    net.mobility('sta1', 'start', time=20, position='1,50,0')
    net.mobility('sta1', 'stop', time=79, position='159,50,0')
    net.stopMobility(stopTime=80)

    print "*** Running CLI"
    CLI( net )

    print "*** Stopping network"
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    topology()

Save the script and call it line.py. Make it executable, then run the command:

wifi:~$ sudo ./line.py

The Mininet-WiFi graph will appear, showing the station and the access points.

The line.py script running

The station sta1 will sit still for 20 seconds, and then move across the graph from left to right for 60 seconds until it reaches the far side of the graph. The host h1, the virtual Ethernet connection between h1 and ap1, and the Ethernet links between the three access points are not visible.
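
Given the positions and ranges defined in line.py, we can predict which access points cover the station at each point along its path on the y=50 line (a standalone illustration, not Mininet-WiFi code):

```python
def aps_in_range(x, aps):
    """APs covering a station at (x, 50, 0) when all APs sit on the y=50 line."""
    return [name for name, ap_x, rng in aps if abs(x - ap_x) <= rng]

# From line.py: ap1, ap2, ap3 at x = 30, 90, 130, each with a 30 m range
aps = [('ap1', 30, 30), ('ap2', 90, 30), ('ap3', 130, 30)]
for x in (10, 60, 100, 150):
    print(x, aps_in_range(x, aps))  # handoffs occur where coverage changes
```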

Re-starting the scenario

This simple scenario has a discrete start and stop time so, if you wish to run it again, you need to quit Mininet-WiFi and start the script again.

For example, suppose the scenario is at its end, where the station is now at the far right of the graph window. To stop and start it again, enter the following commands:

mininet-wifi> exit
wifi:~$ sudo mn -c
wifi:~$ sudo ./line.py

More Python functions

When running a scenario with the mobility methods in the Python API, we have access to more information from Mininet-WiFi’s Python functions.

To see all access points that are within range of a station such as sta1 at any time while the scenario is running, call the inRangeAPs function:

mininet-wifi> py sta1.inRangeAPs
[<OVSSwitch ap1: lo:127.0.0.1,ap1-eth1:None pid=3862> ]

Test with iperf

To see how the system responds to traffic, run some data between host h1 and station sta1 when the scenario is started.

We’ve seen in previous examples how to use the ping program to create traffic. In this example, we will use the iperf program.

First, start the line.py script again. Then start an iperf server on the station:

mininet-wifi> sta1 iperf --server &

Then open an xterm window on the host h1.

mininet-wifi> xterm h1

From the xterm window, we will start the iperf client command and create a stream of data between h1 and sta1. On the h1 xterm, run the command:

# iperf --client 10.0.0.2 --time 60 --interval 2 

Watch the iperf output as the station moves through the graph. When it passes from one access point to the next, the traffic will stop. To get the traffic running again, clear the flow tables in the access points. In the Mininet-WiFi CLI, run the command shown below:

mininet-wifi> dpctl del-flows

Traffic should start running again. As stated in Tutorial #2 above, we must clear flows after a hand off because the Mininet reference controller cannot respond correctly in a mobility scenario. The topic of configuring a remote controller to support a mobility scenario is outside the scope of this post.

Clear the flows every time the station switches to the next access point.

Stop the tutorial

Stop Mininet-WiFi and clean up the system with the following commands:

mininet-wifi> exit
wifi:~$ sudo mn -c

Mininet-WiFi example scripts

The Mininet-WiFi developers created many example scripts that show examples of most of the API extensions they added to Mininet. They placed these example scripts in the folder ~/mininet-wifi/examples/. Try running these scripts to see what they do and look at the code to understand how each feature is implemented using the Python API.

Some interesting Mininet-WiFi example scripts are:

  • adhoc.py shows how to set up experiments in ad-hoc mode, where stations connect to each other without passing through an access point.
  • simplewifitopology.py shows the Python code that creates the same topology as the default topology created by the mn --wifi command (two stations and one access point).
  • wifiStationsAndHosts.py creates a topology with stations and hosts.
  • 2AccessPoints.py creates a topology with two access points connected to each other via an Ethernet link and a station associated with each access point.
  • wifiPosition.py shows how to create a network where stations and access points are placed in specific locations in virtual space.
  • wifiMobility.py and wifiMobilityModel.py show how to move stations and how mobility models can be incorporated into scripts.
  • wifiAssociationControl.py shows how the different values of the AC parameter affect station handoffs to access points.
  • wifimesh.py shows how to set up a mesh network of stations.
  • handover.py shows how to create a simple mobility scenario where a station moves past two access points, causing the station to hand off from one to the other.
  • multipleWlan.py shows how to create a station with more than one wireless LAN interface.
  • wifiPropagationModel.py shows how to use propagation models that affect how stations and access points communicate with each other over distance.
  • wifiAuthentication.py shows how to set up WiFi encryption and passwords on access points and stations.

Conclusion

The tutorials presented above demonstrate many of Mininet-WiFi's unique functions. Each tutorial revealed more functionality and we stopped at the point where we were able to emulate a mobility scenario featuring a WiFi station moving in a straight line past several wireless access points.

To learn more about Mininet-WiFi, go to the Mininet-WiFi wiki page. Also, read through posts on the Mininet-WiFi mailing list, which is very active and is a useful source of more information about Mininet-WiFi.

I am looking for an OpenFlow controller that will support WiFi switches using OpenFlow 1.3, which is the version of OpenFlow supported by Mininet and Mininet-WiFi. If you know of any, please add a comment to this post.


  1. In the Mininet examples folder, we find a mobility.py script that demonstrates methods that may be used to create a scenario where a host connected to one switch moves its connection to another switch 

  2. Some mac80211_hwsim practical examples and supporting information are at the following links: lab, thesis, hostapd, wpa-supplicant, docs-1, and docs-2 

  3. From http://teampal.mc2lab.com/attachments/685/C2012-12.pdf 

How To Install dCore Linux in a virtual machine


dCore Linux is a minimal Linux system based on the Tiny Core Linux system. Like Tiny Core Linux, dCore loads its file system entirely into RAM, which should provide good performance in large network emulation scenarios running on a single host computer.

dCore Linux allows users to install additional software from the Debian or Ubuntu repositories, instead of using the pre-built (and often out-of-date) TCE extensions provided for Tiny Core Linux. This should simplify the process of building network appliances for use in a network emulator, as you will not need to compile and build your own extensions, or use out-of-date pre-built extensions.

dCore Linux is designed to run as a “live” Linux system from removable media such as a CD or a USB drive but, for my use, I need to install it on a hard drive. Currently available instructions for installing dCore Linux onto a hard drive are incomplete and hard to follow. This post lists a detailed procedure to install dCore Linux on a virtual disk image connected to a virtual machine. I use VirtualBox in this example, but any other virtual machine manager would also be suitable.

Notes about dCore

Because dCore Linux is a small, lightweight Linux operating system, it is suitable for use in network emulators that use a full virtualization stack such as Qemu/KVM or VirtualBox. dCore Linux provides the functionality of Tiny Core Linux but also makes it easier to use the latest versions of networking software because it is designed to use the Debian software repositories.

When using dCore, it is important to understand how the dCore filesystem works. Since dCore loads the entire Linux system into RAM, changes to the filesystem — such as installing new software or updating configuration files — will be lost if the system is restarted. You must understand how to save configuration changes in dCore. I discussed the topic of Tiny Core Linux persistent configuration changes in a previous post. dCore works in a similar way, but the commands used are different.

While dCore may import software from the Debian or Ubuntu repositories, it does not install software the same way Debian or Ubuntu do. dCore Linux is a version of Tiny Core Linux, so it also uses the concept of extensions to install new software. dCore converts Debian or Ubuntu packages into usable SCEs (self-contained extensions). SCEs are similar to Tiny Core Linux TCZ extensions, with some additional features.

Why install dCore to a hard drive?

Why not just boot dCore from an attached virtual optical drive? Why go through the trouble of installing it on a hard drive when it was designed to run from a CDROM or USB thumb drive?

We want to run dCore in a virtual machine in a way that supports persistent configuration changes and is self-contained. We want to support cloned disk images, which is the easiest way to replicate many systems in a network emulation tool like GNS3. Installing dCore on the VM’s virtual disk, instead of attaching the dCore CD ISO image to the machine, better supports our requirements.

Problems with services

I was not able to make network services such as SSH run successfully, even though they appeared to install correctly. Obviously this limits the usefulness of dCore in virtual network emulation tools like GNS3. I will investigate how to make network services run in dCore and, if I am successful, I will write a new post on that topic, or update this post.

Instead of installing networking services, I will demonstrate installing a desktop environment in the procedure I list below.

How to install dCore

To install dCore, we perform the following steps, which are described in detail later in this post:

  • Download the dCore ISO image from the dCore web site
  • Create a virtual machine and attach the ISO image to it.
  • Partition the VM’s virtual disk and build the filesystem
  • Mount the ISO disk image and copy its contents to the VM’s virtual disk
  • Install boot loader files on the VM’s virtual disk
  • Create a boot loader configuration file with the required dCore boot codes
  • Save all configuration changes
  • Remove the ISO image from the VM
  • Reboot the VM

The steps above create a dCore system that boots from the VM’s virtual disk. In the rest of this tutorial we will demonstrate loading Ubuntu packages to create a desktop environment with a web browser by doing the following:

  • Use the sce-import and sce-load commands to install additional software
  • Save the new software and configuration files in the persistent filesystem
  • Reboot the VM

Detailed dCore install steps

Below is the detailed procedure to install dCore Linux on virtual disk attached to a VirtualBox virtual machine.

Download the dCore ISO image

dCore files are usually available at the dCore download repository. Note that in this post we are working with the release candidate for Ubuntu 16.04 (xenial); because it is not yet available in the standard releases repository, we use the release candidate repository.

Download the ISO file. For example, in this case, download dCore-xenial.iso.

Create a virtual machine and attach the ISO image to it.

In VirtualBox, create a new virtual machine named dCore (or a name of your choice). Click on the “New VM” icon and then enter the VM name and type. dCore is a 32-bit operating system.

Create new dCore VM in VirtualBox

Next, set the memory size. I chose 256 MB. You may modify this later if you need to.

Set memory size

Create a virtual hard disk onto which we will install dCore for the VM.

Create disk

The disk file type can be any of the available options. I chose the VDI format because it is the default setting.

Select disk type

Either a dynamically allocated or a fixed disk size is OK to use. I chose a fixed disk size because it should offer slightly better performance, in terms of memory and processor resources used, when the VM is running.

Fixed disk size offers better performance

Next, give the virtual hard disk a name and set its size. I chose 1 GB because I will split it into two partitions and also need space to install a desktop environment. If I was just installing network software, I would choose 512 MB.

Set disk size

Now we have created a new virtual machine. We need to attach the dCore ISO file we previously downloaded to the VM as a virtual optical drive image. Click on the Settings icon.

New virtual machine

In settings, go to the Storage panel and select the empty virtual optical drive. Then click on the little optical disk icon on the far right side of the panel. Select Choose Virtual Optical Disk File from the drop-down menu that appears.

Attach CDROM

Navigate to the folder that contains the dCore ISO file and select it. Click on Open.

Select dCore ISO file

Now we see the dCore-xenial.iso file is attached to the virtual optical disk drive. Click OK.

See dCore CDROM ISO image attached

Now we are ready to start the VM. Click on the Start icon.

dCore VM ready to boot from CDROM

The VM will start booting. It will prompt you for boot options. At this point, we do not need any boot options so press the Enter key to proceed with booting the VM.

No boot codes needed yet. Press Enter.

Now the VM has started. The system booted from the virtual optical disk drive and is fully loaded into RAM.

Booted

Remember that the entire file system is in RAM so any changes you make to files will be lost when you shut down or restart the VM, unless you save your changes to a persistent disk, which we will set up in the next sections.

Partition the VM’s virtual disk and build the filesystem

Partition the virtual disk image. Create a main partition sda1 and a swap partition sda2. Use the fdisk command, which comes already installed in dCore.

$ sudo fdisk /dev/sda

In the fdisk command-line interface, type “m” to see the list of available commands.

Enter “n” to create a new partition on the virtual disk. Choose Partition 1 and set its size so it uses 90 of the 130 available cylinders on the disk.

n

Type “p” for primary partition

p

The partition number is “1”

1

Select the first cylinder on the disk for the new partition (1 to 130). I chose “1”.

1

Select the last cylinder for this partition. This will determine the partition size of the main filesystem partition. I chose “90”.

90

Repeat the process for the second partition. Start with the “n” command and create Partition 2. In this case, I chose the default values for cylinders to use up the rest of the virtual disk.

n
p
2
91
130

Make the first partition bootable. Use the “a” command and select Partition 1.

a
1

Make the second partition a swap partition. Use the “t” command, choose partition 2 and set the partition type to “82”.

t
2
82

Check the setup with the “p” command, which will list all partitions and show which ones are bootable:

p

Disk partitions

Write the changes with the “w” command. fdisk writes the partition table and then exits:

w

If you want to discard your changes instead, quit without writing by entering the “q” command:

q
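The interactive answers above can also be collected into a single string and piped into fdisk in one step. This is a sketch only; the cylinder numbers are the ones chosen above and must match your disk.

```shell
# Sketch: the fdisk answers from the steps above, one per line.
# Review carefully before piping into fdisk; this rewrites the partition table.
answers='n
p
1
1
90
n
p
2
91
130
a
1
t
2
82
w'
printf '%s\n' "$answers"
# printf '%s\n' "$answers" | sudo fdisk /dev/sda   # uncomment to apply
```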

Format Partition 1, which is now device sda1, with an ext4 filesystem. The swap partition, device sda2, does not need a filesystem, so we do nothing further with sda2.

$ sudo mkfs.ext4 /dev/sda1

Rebuild the filesystem table file, fstab. The rebuildfstab command adds the new partition /dev/sda1 to the /etc/fstab file.

$ sudo rebuildfstab 

See that the new disk has mount point /mnt/sda1 defined in the fstab file.

$ cat /etc/fstab

To list your devices another way, use the blkid command. This shows the device UUID and label.

$ blkid

Mount the ISO disk image and copy its contents to the VM’s virtual disk

Mount the first partition we created, device sda1:

$ mount /mnt/sda1

Create a /boot directory for the dCore files:

$ sudo mkdir /mnt/sda1/boot

Copy the kernel and initrd files into the /boot folder. Get these files from the ISO disk image, which we previously attached to this VM as a CDROM in the VirtualBox Manager.

First check the device name of the CDROM:

$ blkid

This shows that the CDROM device is sr0. Mount the CDROM:

$ mount /mnt/sr0

Now look at the contents of the boot directory on the CDROM:

$ ls /mnt/sr0/boot/

We see the dCorexenial.gz initrd file and the vmlinuzxenial kernel file. Copy these files from the CDROM to the boot directory on the new hard disk:

$ sudo cp -p /mnt/sr0/boot/* /mnt/sda1/boot

Ignore any warnings about omitting directories. List the /boot directory to verify the correct files are there:

$ ls /mnt/sda1/boot/

Install a bootloader on the VM’s virtual disk

In this case, we will install the extlinux bootloader because it is commonly used for small distributions like TinyCore Linux and because it is simple to configure. For more details, see How to Install Extlinux and How to Install Extlinux on a USB Drive.

First, configure the hard disk partition with a label. This makes it easier to refer to the partition in configuration files.

We need the e2label program to label the disk partition. The e2label program is in the e2fsprogs package.

$ sce-import e2fsprogs
$ sce-load e2fsprogs

We choose to use the label dCore. Any name may be used for the label.

$ sudo e2label /dev/sda1 "dCore"

Rebuild the filesystem table so the system learns about the new label.

$ sudo rebuildfstab

Check for the label by running the blkid command again. See that /dev/sda1 now has the label dCore.

Next, make a directory for extlinux.

$ sudo mkdir /mnt/sda1/boot/extlinux

Use the sce-import command to install extlinux from the Ubuntu repositories:

$ sce-import extlinux

Load the new extlinux extension.

$ sce-load extlinux

Now, use extlinux to install the files that will create a bootloader on the partition sda1.

$ sudo extlinux --install /mnt/sda1/boot/extlinux

This installs extlinux bootloader files onto the virtual disk drive.

Install the master boot record (MBR) file

$ sudo dd if=/usr/lib/EXTLINUX/mbr.bin of=/dev/sda

Create a bootloader configuration file

Create a bootloader configuration file with the required dCore boot codes. In the extlinux directory we previously created, create and edit the file extlinux.conf:

$ cd /mnt/sda1/boot/extlinux
$ sudo vi extlinux.conf

Then enter the following text in the file:

default dCore
label dCore
kernel /boot/vmlinuzxenial
append initrd=/boot/dCorexenial.gz tce=sda1

Save the file and quit the editor.

Save all configuration changes

Set the TCE drive. dCore needs to know on which partition it should save persistent configuration changes. In this scenario, the tce-setdrive command will select the sda1 partition.

$ tce-setdrive

This creates a tce directory on /mnt/sda1 and populates it with the TCE and SCE files.

Save the configuration changes on persistent storage on sda1. Use the backup command:

$ backup

Remove the ISO image from the VM

We no longer need the original ISO image connected to the virtual machine. To remove the ISO image in VirtualBox, first power off the VM:

$ sudo poweroff

In VirtualBox manager, select the dCore VM and click on Settings. Go to the Storage panel. Remove the dCore ISO file from the virtual optical drive. This way, the VM will always boot from the virtual hard drive we have set up.

Remove ISO file from the virtual optical disk drive

Verify the virtual optical disk drive is now empty.

Verify the virtual optical disk drive is empty

Boot the VM

Start the VM again by clicking on the Start icon in VirtualBox.

The new dCore system will boot up from the virtual disk. Ignore warnings about floppy disk drives. This is not a serious issue, and I have not yet found out how to stop the system from looking for a floppy drive.

Booting again

The system is ready to use. Now we may install any extra software we require.

Install additional software

Let us install new software in this dCore image. For example, we will install a desktop environment and a GUI application, such as a web browser. Use the sce-import and sce-load commands to install additional software.

When installing software packages that share many dependencies, you can use less disk space by creating one extension from multiple packages. Create a file listing the packages you wish to install and then use the -l option to install the packages listed in that file.

First create the file in a temporary directory. We do not need the file after the install is completed.

$ vi /tmp/xdesk

We will install packages required for the FLWM desktop environment and the Midori web browser. Enter the following text into the file:

Xprogs
xorg-all
wbar
flwm
midori

Save the file and exit the editor.
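If you prefer not to use an interactive editor, the same package list can be created with a shell here-document (a sketch using the package names above):

```shell
# Create the package list for the combined extension non-interactively.
cat > /tmp/xdesk << 'EOF'
Xprogs
xorg-all
wbar
flwm
midori
EOF
cat /tmp/xdesk
```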

Then, run the sce-import command to load all the packages listed in the file as one combined extension. The -l option specifies that the package names will be read from a file. The -b option tells the program to add the new extension to the boot list file. The extension will have the same name as the file.

$ sce-import -lb /tmp/xdesk

Press the Enter key at the prompt to begin (or Ctrl-C to abort).

When the install is completed, view the boot list file, sceboot.lst.

$ cat /etc/sysconfig/tcedir/sceboot.lst

Check that the new extension name is in the file. The file contents should be:

xdesk

Next, edit the boot codes in the extlinux.conf file. Add the boot code desktop=flwm.

$ cd /mnt/sda1/boot/extlinux
$ sudo vi extlinux.conf

The file should now look like:

default dCore
label dCore
kernel /boot/vmlinuzxenial
append initrd=/boot/dCorexenial.gz tce=sda1 desktop=flwm

Save the new software and configuration files

Save the new software and configuration files in the persistent filesystem. The backup command will save system changes to the hard drive.

$ backup

Reboot the VM

Reboot the VM with the reboot command:

$ sudo reboot

When this system reboots, it will start with the FLWM desktop environment.

dCore FLWM desktop environment

Now you should see a desktop. We previously installed the Midori web browser so you may launch it to demonstrate that the software will run correctly.

Conclusion

We installed dCore Linux on a VirtualBox virtual machine so that it will boot from the virtual machine’s virtual disk image (VDI). This system should be useful as a lightweight virtual network appliance in network emulation tools such as GNS3.

However, I cannot yet use this dCore system as a network appliance in network emulation tools like GNS3 because I was not able to make services such as SSH run successfully, even though they appeared to install correctly. I demonstrated installing a desktop environment instead. I will investigate how to make network services run in dCore and, if I am successful, I will write a new post on that topic.

How to build a network of Linux routers using quagga


This post lists the commands required on each node to build a network of three Ubuntu Linux routers. Each router is connected to the other two routers and is running quagga. Each router is also connected to a PC running Ubuntu Linux.

three-nodes-kr

I use this network configuration to evaluate network emulators and open-source networking software in a simple scenario. Readers may find these commands useful in building their own configuration scripts.

I provide “copy and paste” commands so the network can be configured quickly.

Creating a basic topology

The physical — or virtual — network installation and the management network setup is outside the scope of this post. The method used to build the lab topology depends on the equipment, and/or the network emulator and hypervisor technology you are using.

I assume you already have six machines running and connected in a network as shown above, and I assume you have a management network set up so that each machine can communicate with the host computer and with the Internet.

Router configuration

Each router needs to install the quagga router package, configure quagga, and then configure the network using the quagga VTY shell. Optionally, quagga daemon configuration files may be created.

Router-1

Skip to Copy-and-paste shell commands below if you want to quickly configure the node Router-1. This section shows the commands to configure Router-1 step by step.

Install the quagga package and then configure the Quagga VTY shell. This will create the basic setup for a router.

Enter the commands:

$ sudo su
# apt-get update
# apt-get install quagga quagga-doc

Then, configure the Quagga daemons by editing the file /etc/quagga/daemons and start the zebra and ospfd daemons.

# nano /etc/quagga/daemons

Modify the file so it looks like:

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
babeld=no

Save the file and quit the editor.

Create config files for the zebra and ospfd daemons.

# cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
# cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
# chown quagga.quaggavty /etc/quagga/*.conf
# chmod 640 /etc/quagga/*.conf

Start Quagga:

# /etc/init.d/quagga start

Set up environment variables so we avoid the vtysh END problem. Edit the /etc/bash.bashrc file:

# nano /etc/bash.bashrc

Add the following line at the end of the file:

export VTYSH_PAGER=more

Save the file and quit the editor. Then, edit the /etc/environment file:

# nano /etc/environment

Then add the following line to the end of the file:

VTYSH_PAGER=more

Save the file and quit the editor.

Start the Quagga shell with the command vtysh on Router-1:

# vtysh

Enter the following Quagga commands:

configure terminal
router ospf
 network 192.168.1.0/24 area 0
 network 192.168.100.0/24 area 0 
 network 192.168.101.0/24 area 0 
 passive-interface enp0s3    
 exit
interface enp0s3
 ip address 192.168.1.254/24
 exit
interface enp0s8
 ip address 192.168.100.1/24
 exit
interface enp0s9
 ip address 192.168.101.2/24
 exit
exit
ip forwarding
write
exit
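Quagga’s vtysh also accepts repeated -c options, so the session above can be driven non-interactively. The sketch below only builds and prints the command for review; run the printed command on Router-1 to apply it.

```shell
# Sketch: build the equivalent non-interactive vtysh invocation for Router-1.
cmd="vtysh \
 -c 'configure terminal' \
 -c 'router ospf' \
 -c 'network 192.168.1.0/24 area 0' \
 -c 'network 192.168.100.0/24 area 0' \
 -c 'network 192.168.101.0/24 area 0' \
 -c 'passive-interface enp0s3' \
 -c 'exit' \
 -c 'interface enp0s3' \
 -c 'ip address 192.168.1.254/24' \
 -c 'write'"
echo "$cmd"
```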

Router-1 copy-and-paste shell commands

If you wish to copy-and-paste commands to quickly configure Router-1, then skip the previous section and enter the following commands:

sudo su
apt-get update
apt-get install quagga quagga-doc
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
chown quagga.quaggavty /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf
sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons
sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons
echo 'VTYSH_PAGER=more' >>/etc/environment 
echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc
cat >> /etc/quagga/ospfd.conf << EOF
interface enp0s3
interface enp0s8
interface enp0s9
interface lo
router ospf
 passive-interface enp0s3
 network 192.168.1.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.101.0/24 area 0.0.0.0
line vty
EOF
cat >> /etc/quagga/zebra.conf << EOF
interface enp0s3
 ip address 192.168.1.254/24
 ipv6 nd suppress-ra
interface enp0s8
 ip address 192.168.100.1/24
 ipv6 nd suppress-ra
interface enp0s9
 ip address 192.168.101.2/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
EOF
/etc/init.d/quagga start

I will configure the remaining routers with the quick shell commands so you can copy and paste the configuration for each router.

Router-2

On Router-2, install quagga and configure OSPF on the router’s interfaces. Copy-and-paste the following commands into the Router-2 terminal window:

sudo su
apt-get update
apt-get install quagga quagga-doc
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
chown quagga.quaggavty /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf
sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons
sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons
echo 'VTYSH_PAGER=more' >>/etc/environment 
echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc
cat >> /etc/quagga/ospfd.conf << EOF
interface enp0s3
interface enp0s8
interface enp0s9
interface lo
router ospf
 passive-interface enp0s3
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
line vty
EOF
cat > /etc/quagga/zebra.conf << EOF
interface enp0s3
 ip address 192.168.2.254/24
 ipv6 nd suppress-ra
interface enp0s8
 ip address 192.168.100.2/24
 ipv6 nd suppress-ra
interface enp0s9
 ip address 192.168.102.2/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
EOF
/etc/init.d/quagga start

Router-3

On Router-3 install quagga and configure OSPF on the router’s interfaces. Copy-and-paste the following commands into the Router-3 terminal window:

sudo su
apt-get update
apt-get install quagga quagga-doc
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
chown quagga.quaggavty /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf
sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons
sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons
echo 'VTYSH_PAGER=more' >>/etc/environment 
echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc
cat >> /etc/quagga/ospfd.conf << EOF
interface enp0s3
interface enp0s8
interface enp0s9
interface lo
router ospf
 passive-interface enp0s3
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.101.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
line vty
EOF
cat > /etc/quagga/zebra.conf << EOF
interface enp0s3
 ip address 192.168.3.254/24
 ipv6 nd suppress-ra
interface enp0s8
 ip address 192.168.101.1/24
 ipv6 nd suppress-ra
interface enp0s9
 ip address 192.168.102.1/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
EOF
/etc/init.d/quagga start

PC configuration

Each PC in the network needs to be configured with an IP address and a default route.

PC-1

Skip to copy-and-paste shell commands below if you want to quickly configure the node PC-1. This section shows the commands step by step, for clarity.

In the PC-1 xterm window, use a text editor to add the following lines to the /etc/network/interfaces file:

$ sudo su
# nano /etc/network/interfaces

Add the following lines to the file, then save the file:

auto enp0s3
iface enp0s3 inet static
   address 192.168.1.1
   netmask 255.255.255.0

Then, add a static route that sends all traffic for the 192.168.0.0/16 network out enp0s3.

# ip route add 192.168.0.0/16 via 192.168.1.254 dev enp0s3

To make this static route available after a system reboot, add the command to the /etc/rc.local file:

# echo 'ip route add 192.168.0.0/16 via 192.168.1.254 dev enp0s3' >>/etc/rc.local

Restart the networking service to make the configuration change operational:

# /etc/init.d/networking restart

PC-1 copy-and-paste shell commands

If you wish to copy-and-paste commands to quickly configure PC-1, then enter the following commands:

sudo su
cat >> /etc/network/interfaces << EOF 
auto enp0s3
iface enp0s3 inet static
   address 192.168.1.1
   netmask 255.255.255.0
EOF
ip route add 192.168.0.0/16 via 192.168.1.254 dev enp0s3
echo 'ip route add 192.168.0.0/16 via 192.168.1.254 dev enp0s3' >>/etc/rc.local
/etc/init.d/networking restart

The remaining PC nodes are configured in the same way, with different IP addresses. I list shell commands that may be copied and pasted to quickly configure the nodes:

PC-2

On PC-2, add the interface configuration to the network interfaces file and set up a static route:

sudo su
cat >> /etc/network/interfaces << EOF 
auto enp0s3
iface enp0s3 inet static
   address 192.168.2.1
   netmask 255.255.255.0
EOF
ip route add 192.168.0.0/16 via 192.168.2.254 dev enp0s3
echo 'ip route add 192.168.0.0/16 via 192.168.2.254 dev enp0s3' >>/etc/rc.local
/etc/init.d/networking restart

PC-3

On PC-3, add the interface configuration to the network interfaces file and set up a static route:

sudo su
cat >> /etc/network/interfaces << EOF 
auto enp0s3
iface enp0s3 inet static
   address 192.168.3.1
   netmask 255.255.255.0
EOF
ip route add 192.168.0.0/16 via 192.168.3.254 dev enp0s3
echo 'ip route add 192.168.0.0/16 via 192.168.3.254 dev enp0s3' >>/etc/rc.local
/etc/init.d/networking restart

Conclusion

We listed commands that you can copy and paste to set up a simple network of Ubuntu Linux computers. We can use this network to test open-source network emulators and open-source networking software.

How to use VirtualBox to emulate a network


VirtualBox is an open-source virtual machine manager and hypervisor that may also be used as a network emulator. In addition to creating and managing individual virtual machines, VirtualBox can connect virtual machines together to emulate a network of computers and network appliances such as routers or servers. VirtualBox works on the major computing platforms: Windows, MacOS, and Linux.

net-diagram-14-3-kr

In this post, I offer a step-by-step tutorial showing how to use the VirtualBox graphical user interface to set up a network of six devices: three routers and three PCs. This tutorial will utilize some of the advanced functions supported by VirtualBox and provide you with the skills to set up a network of virtual machines on your own personal computer.

Required knowledge

I assume you, the reader, are already familiar with the VirtualBox GUI and have used it to create and run virtual machines on your personal computer, using default settings. I also assume you have a basic understanding of Linux shell commands, which will be needed to configure the Linux operating system running on the virtual routers and PCs.

If you need to refresh your knowledge about VirtualBox, the VirtualBox website provides a detailed user manual, and I have written a few posts featuring VirtualBox. See the list below:

Network topology

To build the emulated network, first create a network plan you can follow. VirtualBox does not have a drag-and-drop graphical user interface for creating networks of virtual machines so you must draw the network using another tool such as Microsoft PowerPoint, Visio, or open-source alternatives like LibreOffice Draw or Dia — or even pencil and paper.

Determine which nodes and ports connect to which networks before you start creating virtual machines. Plan how you will manage the emulated nodes. Once the network topology and IP network design is defined, build configuration plans (see the tables I use later in this post) and set up and debug the emulated network.

Create a small network of three routers, each of which is connected to a PC. The network topology you will create is shown in the figure below:

Test network topology

VirtualBox network topology

The VirtualBox network topology includes the interconnected guest virtual machines, the host computer, and external networks reachable from the host computer.

Each virtual machine is connected to other virtual machines by VirtualBox internal networks. I added a network adapter on each guest VM and attached it to the VirtualBox NAT interface to connect each guest VM to the host computer and to other external networks. I show the VirtualBox network topology in the figure below.

VirtualBox network with internal networks and a NAT management network

Everything in the above diagram — except for the LAN, the Internet and the router (colored red) that connects the LAN to the Internet — is running on your personal computer, represented by the laptop computer in the network diagram. Your personal computer is connected to a local area network and to the Internet via a router.

Create base virtual machines

To create the network topology, you must first create a new VM. In this case, I have already created a VM in VirtualBox. If you need to know how to install a VM in VirtualBox, please see my post about installing a Linux system in a VirtualBox VM.

The base VM

I installed Ubuntu Server 16.04 in this example. I used all the default configurations. See below that a virtual machine named Ubuntu server appears in the VirtualBox Manager window.

Virtual Machine *Ubuntu server* ready to use

Note: I used the default hostname for the server. The hostname is ubuntu. If you chose a different host name, you will need to modify some of the commands I list later in this tutorial.

Clone virtual machines

Cloning a virtual machine is the easiest way to create more guest virtual machines on the host computer.

To create the PC and Routers for our network emulation, clone the Ubuntu server VM you created in the previous step. Right-click on Ubuntu server and select Clone from the menu.

Clone the virtual machine

In the dialogue box, enter the new VM name and be sure to select the check box to Reinitialize the MAC address of all network cards. You must ensure that the MAC addresses will be different on each cloned VM.

Name the VM and reinitialize the MAC addresses

Choose Linked Clone in the next dialogue box. This will keep the cloned VM file size small.

Select linked clone

Click the Clone button and the new VM will appear in the VirtualBox Manager window.

First linked clone created

Repeat the steps for each VM you require. In this case, create a total of six virtual machines, each of which is a linked clone of the Ubuntu server VM. Choose virtual machine names to match the node names defined in the network diagram.

Six linked-clone VMs created

Now you have six virtual machines. Each VM needs to be set up with network interfaces and connected to VirtualBox internal networks to create a network topology.

Create VirtualBox internal networks

The VirtualBox graphical user interface supports only four network adapters for each VM. This limits the complexity of network scenarios you can create. Fortunately, VirtualBox actually supports up to thirty-six network adapters per VM. These additional network adapters may be configured using the VirtualBox command-line interface, which is a topic for another post. For now, limit yourself to the four adapters supported on each VM by the VirtualBox GUI.

Each network adapter may be enabled or disabled. If enabled, the adapter may be configured to connect to one of the many different types of interfaces provided by VirtualBox.

To connect two virtual machines to each other, use the Internal Network interface type.

Select one of the virtual machines in the VirtualBox Manager window and click on Settings. Then, in the settings window, click on Network. In the example below, you will configure Network Adapter 2 on the Router-1 virtual machine.

Click on the Enable Network Adapter check box, if it is not already checked. Then click on Attached To and select Internal Network.

Select internal network

Next, give the internal network a name. The name must match the name configured on the corresponding network adapter of the other VM to be connected to this VM.

Enter network name

Repeat this process for each node. The routers each use three of the four available network adapters to connect to internal networks. The PCs each use one network adapter to connect to internal networks.

I used the internal network names shown in the table below to create point-to-point connections between each VM in the network topology.

Node VirtualBox Adapter Network Type Network Name
PC-1 Adapter 2 Internal intnet-1
PC-2 Adapter 2 Internal intnet-2
PC-3 Adapter 2 Internal intnet-3
Router-1 Adapter 2 Internal intnet-1
Adapter 3 Internal intnet-100
Adapter 4 Internal intnet-101
Router-2 Adapter 2 Internal intnet-2
Adapter 3 Internal intnet-100
Adapter 4 Internal intnet-102
Router-3 Adapter 2 Internal intnet-3
Adapter 3 Internal intnet-101
Adapter 4 Internal intnet-102
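The same adapter attachments can be made with the VirtualBox command-line interface. The sketch below only prints the VBoxManage commands for Router-1 (names taken from the table above); the VM must be powered off when the commands are run.

```shell
# Sketch: print VBoxManage commands that attach Router-1's adapters
# to the internal networks listed in the table above.
cmds=$(for spec in "2:intnet-1" "3:intnet-100" "4:intnet-101"; do
  nic=${spec%%:*}    # adapter number
  net=${spec#*:}     # internal network name
  echo "VBoxManage modifyvm \"Router-1\" --nic$nic intnet --intnet$nic $net"
done)
echo "$cmds"
```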

Create management network

By default, VirtualBox connects the first network adapter on each virtual machine to the VirtualBox NAT interface. I use the VirtualBox NAT interface as a “management network” that enables each guest node to connect to external networks and, with port forwarding enabled, to the host operating system.

TCP port forwarding

The VirtualBox NAT interface is a NAT firewall that connects guest virtual machines to the host computer’s local area network. It supports DHCP configuration of IP addresses.

Because the virtual machines are hidden behind a NAT firewall, the host computer cannot initiate connections to them. To connect from the host computer to the virtual machines using SSH, you must set up TCP port forwarding on each virtual machine.

TCP port forwarding creates a hole in the NAT firewall through which the host computer or other clients from the local area network may initiate connections to the virtual machines.

The default SSH port on each guest virtual machine is TCP port 22. Map unused TCP port numbers on the host computer to port 22 on each guest virtual machine. Any unassigned or unreserved TCP port numbers may be used on the host computer. I prefer to use TCP port numbers between 14415 and 14935, which provides 520 contiguous unassigned TCP port numbers1.

To see all assigned TCP port numbers, see http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml.

Configure port forwarding on NAT interfaces

On each virtual machine, click on Settings, then click on the Network tab in the settings window. Select the tab for Adapter 1. Expand the Advanced network panel and click on Port Forwarding.

Advanced settings: Click on Port Forwarding

The Port Forwarding Rules window appears. Click on the green plus sign to add a new rule.

Port forwarding window

Give the rule a name. Any name may be used. I call the rule “SSH”. The protocol is “TCP”. Leave the IP address fields blank. The Guest Port is “22”. The Host Port is any TCP port number available on the host computer. In this example, I use port number “14601”.

Router-1 reachable using port 14601

Repeat the process and set up NAT interfaces with port forwarding on each virtual machine. To make it easier to remember port numbers, I assigned TCP port numbers to PCs starting with port number 14501 and I assigned port numbers to routers starting with port number 14601.

The table below shows the NAT interfaces on each machine and the TCP port forwarding rules for each interface.

Node Interface Rule Name Host IP Host Port Guest IP Guest Port
PC-1 Adapter 1 SSH (blank) 14501 (blank) 22
PC-2 Adapter 1 SSH (blank) 14502 (blank) 22
PC-3 Adapter 1 SSH (blank) 14503 (blank) 22
Router-1 Adapter 1 SSH (blank) 14601 (blank) 22
Router-2 Adapter 1 SSH (blank) 14602 (blank) 22
Router-3 Adapter 1 SSH (blank) 14603 (blank) 22
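These rules can also be created with the VirtualBox command-line interface. The natpf1 rule format is name,protocol,host IP,host port,guest IP,guest port, with blank fields left empty. The sketch below only prints the VBoxManage commands matching the table:

```shell
# Sketch: print VBoxManage port-forwarding rules matching the table above.
rules=$(for spec in "PC-1:14501" "PC-2:14502" "PC-3:14503" \
                    "Router-1:14601" "Router-2:14602" "Router-3:14603"; do
  vm=${spec%%:*}     # VM name
  port=${spec##*:}   # host TCP port
  echo "VBoxManage modifyvm \"$vm\" --natpf1 \"SSH,tcp,,$port,,22\""
done)
echo "$rules"
```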

When I need to connect to a virtual network node from my host computer, I use the IP address of the host computer’s loopback interface (or just the hostname localhost) and the host TCP port number listed in the table above. You cannot use a VM’s IP address because it is hidden behind the NAT firewall. Also, the DHCP server built into VirtualBox’s NAT interface will assign the same IP address to each VM’s attached network adapter. VirtualBox isolates each management interface, so this is not a problem, and the NAT function ensures that each VM appears to have a different IP on the LAN side of the NAT.

Configuring the management interface on virtual machines

All the virtual machines used in this tutorial are clones of a base virtual machine. I created the base virtual machine, Ubuntu Server, while its first network adapter was connected to the NAT interface, which is the default configuration for VMs in VirtualBox. The Ubuntu Server installation scripts configured the system to use DHCP on the first ethernet interface enp0s3. So the base virtual machine gets IP configuration from the VirtualBox NAT interface’s built-in DHCP server.

Each cloned VM you created inherits the same configuration from the base VM, so each VM should already have interface enp0s3 set up and running. You do not need to modify any configuration files on the virtual machines to enable them to connect to the NAT interface2.

Start virtual machines

Now you may start the network emulation scenario by starting all the virtual machines.

As you can see in the figures below, there are now many virtual machines in the VirtualBox Manager. You need a way to keep track of the virtual machines created for the network emulation project.

VirtualBox allows you to group virtual machines together. Set up the virtual machines created for the network emulation scenario in a group so you can start them all together and so you do not mix them up with other VMs you may have defined in the VirtualBox GUI.

To group VMs in VirtualBox, hold down the Shift key and select each VM that will be included in the group. Then right-click on the selected VMs and select Group from the menu.

Group VMs together

VirtualBox draws a box around the group and gives the group a name, “New group”.

Group of VMs

Change the name of the group by double-clicking on the group name and typing a new name. You may also collapse the group to hide its contents by clicking on the small chevron icon to the left of the group name.

Group collapsed

When you select the group by clicking on the group name, you may apply VirtualBox commands like Start or Stop to the entire group. This makes it easy to start your network emulation scenario quickly.

In this example, select the group and then click on the green arrow to start the network emulation.

Connect to each virtual machine

Use SSH to log into virtual machines after you start them in the network emulation scenario. It will take a few minutes for all the virtual machines to start.

To log into any running virtual machine, use the host computer’s IP address and the host port number assigned to the virtual machine:

$ ssh -l <userid> -p <port number> <IP address>

The -l option specifies the userid used to log in to the node. In this case, when I installed the guest operating system on each node, I chose the userid brian, so that is the userid I used in the SSH command.

The -p option specifies the host port number. The host port is a TCP port currently listening on the host computer that will forward traffic to port 22 on the associated virtual machine. I use the host port numbers assigned to the virtual machines in the table above.

I use localhost as the IP address because I am running the command on the host computer. Alternatively, I could use the host computer’s loopback address 127.0.0.1.

Open six terminal windows, one for each VM. In each window, use SSH to connect to a different VM (or use PuTTY if you run Microsoft Windows). I enter the commands shown below into each VM’s terminal window:

Terminal   Virtual Machine   Command
1          PC-1              ssh -l brian -p 14501 localhost
2          PC-2              ssh -l brian -p 14502 localhost
3          PC-3              ssh -l brian -p 14503 localhost
4          Router-1          ssh -l brian -p 14601 localhost
5          Router-2          ssh -l brian -p 14602 localhost
6          Router-3          ssh -l brian -p 14603 localhost

The first Ethernet interface on each virtual machine is already configured to connect to a DHCP server so you should be able to SSH into each VM using the commands in the table above. If SSH will not work, check the IP configuration of the first Ethernet interface, enp0s3, on each virtual machine.
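Since the six SSH commands differ only in the port number, you can generate them with a small shell loop. This is only a convenience sketch, not part of the tutorial; run it on the host computer and copy the command you need:

```shell
# Generate the SSH command for each VM from name:port pairs
# matching the host port assignments in the table above.
for entry in PC-1:14501 PC-2:14502 PC-3:14503 \
             Router-1:14601 Router-2:14602 Router-3:14603; do
  vm=${entry%%:*}      # text before the colon: the VM name
  port=${entry##*:}    # text after the colon: the host port
  echo "ssh -l brian -p ${port} localhost   # ${vm}"
done
```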

Configure and test network nodes

Now that all the virtual machines are running, configure their network interfaces and routing protocols.

Network configuration

Each terminal is now connected to a Linux shell on each virtual machine. Configure the network interfaces on each machine. On the routers, you also need to install routing software and enable networking protocols.

Using the network topology as a guide, make a table of IP addresses to be used to configure the ports on each virtual machine. In this example, I used the table shown below:

Node       Linux interface name   IP address to be assigned
PC-1       enp0s3                 DHCP
           enp0s8                 192.168.1.1/24
PC-2       enp0s3                 DHCP
           enp0s8                 192.168.2.1/24
PC-3       enp0s3                 DHCP
           enp0s8                 192.168.3.1/24
Router-1   enp0s3                 DHCP
           enp0s8                 192.168.1.254/24
           enp0s9                 192.168.100.1/24
           enp0s10                192.168.101.2/24
Router-2   enp0s3                 DHCP
           enp0s8                 192.168.2.254/24
           enp0s9                 192.168.100.2/24
           enp0s10                192.168.102.2/24
Router-3   enp0s3                 DHCP
           enp0s8                 192.168.3.254/24
           enp0s9                 192.168.101.1/24
           enp0s10                192.168.102.1/24

See below for the configuration commands you may copy-and-paste into each VM’s terminal window to set up the network.

See my post about how to build a network of Linux routers using quagga if you need explanations about how these commands work.
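The interface stanzas below specify dotted netmasks while the address plan uses /24 prefixes. If you ever need to convert a prefix length to a dotted mask, here is a quick illustrative sketch:

```shell
# Convert a prefix length (e.g. 24) to a dotted-decimal netmask.
prefix=24
# Build the 32-bit mask: all ones shifted left, truncated to 32 bits.
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
# Extract each octet of the mask.
netmask="$(( mask >> 24 & 255 )).$(( mask >> 16 & 255 )).$(( mask >> 8 & 255 )).$(( mask & 255 ))"
echo "$netmask"   # 255.255.255.0
```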

PC-1

On PC-1, change the hostname, add the interface configuration to the network interfaces file and set up a static route:

sudo su

Enter your password. Then copy and paste the following commands into the terminal window:

bash <<EOF2
sed -i 's/ubuntu/pc1/g' /etc/hostname
sed -i 's/ubuntu/pc1/g' /etc/hosts
hostname pc1
cat >> /etc/network/interfaces << EOF 
auto enp0s8
iface enp0s8 inet static
   address 192.168.1.1
   netmask 255.255.255.0
up route add -net 192.168.0.0/16 gw 192.168.1.254 dev enp0s8
EOF
/etc/init.d/networking restart
exit
EOF2

Reboot the node:

sudo reboot

Then log back into the node from the host computer using SSH, using the SSH command shown above.

ssh -l brian -p 14501 localhost

PC-2

On PC-2, change the hostname, add the interface configuration to the network interfaces file and set up a static route. Copy-and-paste the following commands into the PC-2 terminal window:

sudo su

Enter your password. Then copy and paste the following commands into the terminal window:

bash <<EOF2
sed -i 's/ubuntu/pc2/g' /etc/hostname
sed -i 's/ubuntu/pc2/g' /etc/hosts
hostname pc2
cat >> /etc/network/interfaces << EOF 
auto enp0s8
iface enp0s8 inet static
   address 192.168.2.1
   netmask 255.255.255.0
up route add -net 192.168.0.0/16 gw 192.168.2.254 dev enp0s8
EOF
/etc/init.d/networking restart
exit
EOF2

Reboot the node:

sudo reboot

Then log back into the node from the host computer using SSH, using the SSH command shown above.

ssh -l brian -p 14502 localhost

PC-3

On PC-3, change the hostname, add the interface configuration to the network interfaces file and set up a static route. Copy-and-paste the following commands into the PC-3 terminal window:

sudo su

Enter your password. Then copy and paste the following commands into the terminal window:

bash <<EOF2
sed -i 's/ubuntu/pc3/g' /etc/hostname
sed -i 's/ubuntu/pc3/g' /etc/hosts
hostname pc3
cat >> /etc/network/interfaces << EOF 
auto enp0s8
iface enp0s8 inet static
   address 192.168.3.1
   netmask 255.255.255.0
up route add -net 192.168.0.0/16 gw 192.168.3.254 dev enp0s8
EOF
/etc/init.d/networking restart
exit
EOF2

Reboot the node:

sudo reboot

Then log back into the node from the host computer using SSH, using the SSH command shown above.

ssh -l brian -p 14503 localhost

Router-1

On Router-1, change the hostname, install quagga, and configure OSPF on the router’s interfaces. Copy-and-paste the following commands into the Router-1 terminal window:

sudo su

Enter your password. Then copy and paste the following commands into the terminal window:

bash <<EOF2
sed -i 's/ubuntu/router1/g' /etc/hostname
sed -i 's/ubuntu/router1/g' /etc/hosts
hostname router1
apt-get update
apt-get install -y quagga quagga-doc traceroute
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
chown quagga.quaggavty /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf
sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons
sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons
echo 'VTYSH_PAGER=more' >>/etc/environment 
echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc
cat >> /etc/quagga/ospfd.conf << EOF
interface enp0s8
interface enp0s9
interface enp0s10
interface lo
router ospf
 passive-interface enp0s8
 network 192.168.1.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.101.0/24 area 0.0.0.0
line vty
EOF
cat >> /etc/quagga/zebra.conf << EOF
interface enp0s8
 ip address 192.168.1.254/24
 ipv6 nd suppress-ra
interface enp0s9
 ip address 192.168.100.1/24
 ipv6 nd suppress-ra
interface enp0s10
 ip address 192.168.101.2/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
EOF
/etc/init.d/quagga start
exit
EOF2

Reboot the node:

sudo reboot

Then log back into the node from the host computer using SSH, using the SSH command shown above.

ssh -l brian -p 14601 localhost

Router-2

On Router-2, change the hostname, install quagga, and configure OSPF on the router’s interfaces. Copy-and-paste the following commands into the Router-2 terminal window:

sudo su

Enter your password. Then copy and paste the following commands into the terminal window:

bash <<EOF2
sed -i 's/ubuntu/router2/g' /etc/hostname
sed -i 's/ubuntu/router2/g' /etc/hosts
hostname router2
apt-get update
apt-get install -y quagga quagga-doc traceroute
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
chown quagga.quaggavty /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf
sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons
sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons
echo 'VTYSH_PAGER=more' >>/etc/environment 
echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc
cat >> /etc/quagga/ospfd.conf << EOF
interface enp0s8
interface enp0s9
interface enp0s10
interface lo
router ospf
 passive-interface enp0s8
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
line vty
EOF
cat > /etc/quagga/zebra.conf << EOF
interface enp0s8
 ip address 192.168.2.254/24
 ipv6 nd suppress-ra
interface enp0s9
 ip address 192.168.100.2/24
 ipv6 nd suppress-ra
interface enp0s10
 ip address 192.168.102.2/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
EOF
/etc/init.d/quagga start
exit
EOF2 

Reboot the node:

sudo reboot

Then log back into the node from the host computer using SSH, using the SSH command shown above.

ssh -l brian -p 14602 localhost

Router-3

On Router-3, change the hostname, install quagga, and configure OSPF on the router’s interfaces. Copy-and-paste the following commands into the Router-3 terminal window:

sudo su

Enter your password. Then copy and paste the following commands into the terminal window:

bash <<EOF2
sed -i 's/ubuntu/router3/g' /etc/hostname
sed -i 's/ubuntu/router3/g' /etc/hosts
hostname router3
apt-get update
apt-get install -y quagga quagga-doc traceroute
cp /usr/share/doc/quagga/examples/zebra.conf.sample /etc/quagga/zebra.conf
cp /usr/share/doc/quagga/examples/ospfd.conf.sample /etc/quagga/ospfd.conf
chown quagga.quaggavty /etc/quagga/*.conf
chmod 640 /etc/quagga/*.conf
sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons
sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons
echo 'VTYSH_PAGER=more' >>/etc/environment 
echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc
cat >> /etc/quagga/ospfd.conf << EOF
interface enp0s8
interface enp0s9
interface enp0s10
interface lo
router ospf
 passive-interface enp0s8
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.101.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
line vty
EOF
cat > /etc/quagga/zebra.conf << EOF
interface enp0s8
 ip address 192.168.3.254/24
 ipv6 nd suppress-ra
interface enp0s9
 ip address 192.168.101.1/24
 ipv6 nd suppress-ra
interface enp0s10
 ip address 192.168.102.1/24
 ipv6 nd suppress-ra
interface lo
ip forwarding
line vty
EOF
/etc/init.d/quagga start
exit
EOF2 

Reboot the node:

sudo reboot

Then log back into the node from the host computer using SSH, using the SSH command shown above.

ssh -l brian -p 14603 localhost

Testing the network

If everything is working correctly, each virtual PC and router in the emulated network should be able to communicate with every other virtual PC and router in the network.

You may now perform experiments or study the operation of network protocols. For example, you may use the ping command to test IP reachability between nodes and you may also look at the routing tables or use quagga vtysh commands on the routers to see OSPF protocol status.
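A quick way to check end-to-end reachability is to ping every address in the plan from one node. This loop is a convenience sketch; run it inside any of the VMs, and treat a FAIL as a hint that the corresponding node is not configured yet:

```shell
# Ping each planned address once, with a 2-second timeout per host.
for ip in 192.168.1.1 192.168.2.1 192.168.3.1 \
          192.168.1.254 192.168.2.254 192.168.3.254; do
  ping -c 1 -W 2 "$ip" > /dev/null 2>&1 && echo "$ip OK" || echo "$ip FAIL"
done
```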

For example, use traceroute to see that traffic passes through the network between pc3 and pc1:

brian@pc3:~$ traceroute 192.168.1.1
traceroute to 192.168.1.1 (192.168.1.1), 30 hops max, 60 byte packets
 1  192.168.3.254 (192.168.3.254)  0.595 ms  0.618 ms  0.589 ms
 2  192.168.101.2 (192.168.101.2)  1.212 ms  1.332 ms  1.195 ms
 3  192.168.1.1 (192.168.1.1)  2.457 ms  2.607 ms  2.400 ms

Next Steps

This tutorial worked through the building blocks used to build complex network emulation scenarios. As next steps, you may enable other network protocols in the network topology and study their operation or you may create more complex scenarios using the VirtualBox command line interface.

VirtualBox network lab setup may be automated using popular open-source tools. You may find it beneficial to explore using Vagrant and/or Ansible to automate, manage, and configure VirtualBox network emulations.

Conclusion

I showed how VirtualBox may be used to emulate networks that may be used to study the operation of network protocols and to test networking software. I provided step-by-step instructions for using the VirtualBox graphical user interface to build a network of guest virtual machines that can be managed from the host computer.


  1. Reference: http://stackoverflow.com/questions/10476987/best-tcp-port-number-range-for-internal-applications 

  2. If you want to use a different network adapter for the NAT interface, edit the /etc/network/interfaces file and restart the networking service or reboot the VM.

Psimulator2 forked, updated

Roland Kuebert forked the psimulator2 network simulator project from the original, seemingly discontinued source and made the new version available at https://github.com/rkuebert/psimulator.

Roland posted this announcement in the comments under my psimulator2 blog post. So that his announcement receives a bit more visibility, I am re-posting his comment verbatim below:

Hi all,

Just a heads up, I forked the project from the original, seemingly discontinued source and it is available at https://github.com/rkuebert/psimulator .

I have fixed the issue preventing the use of Java 8, but I have yet to look into making a release on GitHub. You can, however, clone the repository and use gradle to build jar files – I recommend using gradle shadowJar to create jar files which can be run without specifying any further dependencies.

For the frontend, use java -jar frontend/build/libs/psimulator-frontend-master-*.jar (replace the asterisk with the exact name; the star represents the git commit you checked out).

For the backend, use java -jar backend/build/libs/psimulator-backend-master-*-all.jar (replace the asterisk with the exact name, the star represents the git commit you used to checkout).

Cheers
Roland


OFNet SDN network emulator

OFNet is a new software-defined network (SDN) emulator that offers functionality similar to the Mininet network emulator and adds some useful tools for generating traffic and monitoring OpenFlow messages and evaluating SDN controller performance.

OFNet is an open-source project that is distributed as a virtual machine (VM) image. The OFNet source code is available in the OFNet VM’s filesystem. In this post, we will use the OFNet VM provided by the OFNet developer to run SDN emulation scenarios in OFNet.

The OFNet Virtual Machine

The OFNet VM image is packaged as an OVA file which can be imported into most virtual machine managers. In this case, we are using VirtualBox. You may download the OFNet VM from this link.

The OFNet VM contains a Linux system running Ubuntu 12.04 and has the VirtualBox extensions installed.

Create a new virtual machine using the OFNet VM image, start up the VM and log in. After logging in, review the available OFNet documentation and install Wireshark.

Install the OFNet VM in VirtualBox

Import the OFNet.ova file into VirtualBox. Use the File → Import Appliance VirtualBox menu command or press <Ctrl-I>. Navigate to the location where you saved the OFNet.ova file and select it. Click Next and then click Import.

Start the VM. It will start in graphical desktop mode and ask for a login password. Login using userid: ofnet and password: ofnet.

The password is ofnet

Do not upgrade the VM

After logging into the VM, do not upgrade the VM. The Ubuntu Software Updater will ask you to upgrade. Ignore it.

The OFNet VM is based on Ubuntu 12.04. You do not need the latest version of every Ubuntu 12.04 package to perform experiments with OFNet and upgrading may cause problems on this VM. In my case, I upgraded to the latest 12.04 patches and after that the VM kept sending me error messages stating “System problem detected”. So I deleted the VM and created a new OFNet VM by importing the OFNet.ova file again.

Review the OFNet documentation

Review the OFNet documentation. The OFNet SDN network emulator is not yet fully documented but you may review the tutorial on the OFNet web site, work through the OFNet demo script provided with the OFNet VM, and read text files in the VM filesystem to learn how to set up and run OFNet SDN emulations. Below is a list of useful documents that explain how to use OFNet:

  • A brief OFNet tutorial is available at the OFNet web site, sdninsights.org.
  • The README file is available in the OFNet VM at ~/ofnet/README.
  • The tmp_doc file at ~/ofnet/documentation/tmp_doc documents how to build an OFNet topology file.
  • Look at the contents of files in the ~/ofnet/demo directory to see examples of network topology files.
  • The dir_structure file at ~/ofnet/documentation/dir_structure shows the location of configuration and runtime files.
  • Launch the OFNet_Demo script in the Desktop folder of the OFNet VM. Follow the instructions in the script to walk through the main features of OFNet.

All OFNet files are in the ~/ofnet folder.

Install Wireshark

The Wireshark packet analyzer is not installed in the OFNet VM. We must install a recent version of Wireshark because we need the OpenFlow display filters. Install the newest stable version of Wireshark from the PPA as follows:

$ sudo add-apt-repository ppa:wireshark-dev/stable
$ sudo apt-get update
$ sudo apt-get install wireshark

Build and run an OFNet network emulation scenario

To run a network emulation in OFNet, users first use a text editor to create a topology file. Then they compile the topology file into a network file. Next, they start up and configure the SDN controller that will run the network scenario. Finally, OFNet uses the network file to start the network.

Create an OFNet topology file

To create an OFNet SDN emulation scenario, you must start with an OFNet topology file. The format of the file can be very simple but the topology description syntax also supports complex customized scenarios.

To learn more about the OFNet topology file syntax and the rules for creating a topology file, read the file ~/ofnet/documentation/tmp_doc in the OFNet VM. Also, you can look at the demonstration topology files in the ~/ofnet/demo folder.

For example, a regular tree topology can be created with just a few lines of text. In this case, I used a text editor to create the file shown below:

TOPOLOGY TREE {

   3,3,3;

   CONTROLLER SECTION :

   contr0(127.0.0.1);

   ALL -- contr0;

}

This file creates a tree topology three levels deep, with three child switches per node and three hosts attached to each leaf switch.
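You can sanity-check the size of such a tree before compiling it. For three levels with three children per switch and three hosts per leaf switch, a quick calculation gives 13 switches and 27 hosts, which matches the totals ctopo reports later when the network starts:

```shell
# Size of the three-level tree defined above:
# fanout children per switch, hosts_per_leaf hosts on each leaf switch.
fanout=3; hosts_per_leaf=3
switches=$(( 1 + fanout + fanout * fanout ))     # 1 + 3 + 9 = 13
hosts=$(( fanout * fanout * hosts_per_leaf ))    # 9 leaf switches x 3 hosts = 27
echo "$switches switches, $hosts hosts"          # 13 switches, 27 hosts
```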

The topology file also defines the network address of the SDN controller. In this example, the SDN controller is installed on the same machine as OFNet so we use the system loopback address. If the SDN controller is running on another machine, you may enter the IP address of the other machine.
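For example, a hypothetical variant of the same tree topology that points at a controller on another machine would look like the following (the address 192.168.56.1 is purely illustrative):

```text
TOPOLOGY TREE {

   3,3,3;

   CONTROLLER SECTION :

   contr0(192.168.56.1);

   ALL -- contr0;

}
```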

I saved the topology file in the folder ~/ofnet/demo, alongside other demonstration topology files. You can use any filename but the OFNet developer’s convention seems to be to use the filename extension “.topo”. In this case, I chose to use the filename test.topo.

Using the Topology Create command, topo_create

As an alternative to building a topology file, you may create simple standard network topologies using the topo_create command. Use the -help parameter to see all the command options.

$ topo_create -help

Help :
   topo_create -f topo_file_name -t type p1 p2 .. [-c controller_ip]

   Syntax :

   tree    :   topo_create -t tree levels children leaves [-c controller_ip]
   ring    :   topo_create -t ring nodes hosts [-c controller_ip]
   overlay :   topo_create -t overlay vswitches vms [-c controller_ip]
   matrix  :   topo_create -t matrix rows columns hosts [-c controller_ip]

For example, to create the same tree topology as shown above, type the following command:

$ topo_create -f test.topo -t tree 3 3 3

This will create a file test.topo that contains text similar to the file we previously created.

The topo_create command automatically compiles the topology file so it also creates a network file, which in this case is named test.topo.net. When using the topo_create command, you may skip the compile step described below.

Compile the OFNet topology file

To run the OFNet scenario, we must compile the topology file we created above. Remember, I saved the file as ~/ofnet/demo/test.topo.

The topology compile command, topoc, builds a network file that can be used to build the OFNet network emulation. The command parameters are the topology file name, test.topo, and the output file name. You may use any name for the output file but the OFNet developer’s convention is to use a filename with the extension, .net.

$ cd ~/ofnet/demo
$ topoc test.topo test.net

The compile process creates the network file ~/ofnet/demo/test.net and also outputs an image of the network topology. In our case, the network appears as shown below:

OFnet topology

By default, OFNet switches are named gs0, gs1, …, gsx and hosts are named gh1, gh2, …, ghy.

OFNet assigns Ethernet MAC addresses and IP addresses sequentially by default. The IP address of host gh1 is 10.0.0.1 and its MAC address is 00:00:00:00:00:01. Similarly, the IP address of host gh27 is 10.0.0.27 and its MAC address is 00:00:00:00:00:1B (which is 27 in hexadecimal).
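This default addressing scheme is easy to compute for any host number. The sketch below derives the default IP and MAC address for a host ghN (shown here for gh27):

```shell
# Derive OFNet's default addresses for host ghN:
# IP is 10.0.0.N and the last MAC octet is N in hexadecimal.
hostnum=27
ip="10.0.0.${hostnum}"
mac=$(printf '00:00:00:00:00:%02X' "$hostnum")
echo "gh${hostnum}: ${ip} ${mac}"   # gh27: 10.0.0.27 00:00:00:00:00:1B
```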

Start the Floodlight SDN controller

The OFNet VM has two SDN controllers installed — Floodlight and Beacon.

In this example, I will use the Floodlight SDN controller. Start the controller using the start_floodlight_controller.sh script.

$ start_floodlight_controller.sh

You will see the controller start and it will begin posting log messages to the terminal screen. Leave this terminal window alone until you need to quit Floodlight.

OpenDaylight controller

I prefer to use the OpenDaylight SDN controller so I will cover installing and using OpenDaylight at the end of this post. I use Floodlight for now because I found that some OFNet features did not work well with OpenDaylight.

Start the OFNet network

To start the OFNet network, open a new terminal window and run the ctopo command.

$ ctopo netup test.net

The command will output the following text and the results of a ping test.

CREATING YOUR NETWORK TOPOLOGY. PLEASE WAIT.. THIS MIGHT TAKE SOME TIME

Total Switches : 13, Total Hosts : 27  

Test : hsh gh1 ping -c 3 10.0.0.10

PING 10.0.0.10 (10.0.0.10) 56(84) bytes of data.
64 bytes from 10.0.0.10: icmp_req=1 ttl=64 time=1103 ms
64 bytes from 10.0.0.10: icmp_req=2 ttl=64 time=100 ms
64 bytes from 10.0.0.10: icmp_req=3 ttl=64 time=0.138 ms

--- 10.0.0.10 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.138/401.342/1103.551/498.219 ms, pipe 2
$ 

Note that the startup ping test is hard-coded to ping from host gh1 to host gh10. So, if you build a topology with fewer than 10 hosts, do not worry if the startup ping test fails.

Now the network is running and you may use OFNet commands to interact with hosts.

More options

The ctopo command can also be used while the network is running to take down or bring up links, add links, delete links, and more. Use the -help option to see the command options:

$ ctopo -help
Help : ctopo
ctopo
ctopo [-c | -nd | -help] addlink/deletelink/linkup/linkdown node1 node2
ctopo netdown
ctopo [-of[1.0|1.2|1.3|all] [-pport] [-iinterface] netup image_file

The Floodlight controller GUI

To view the Floodlight controller’s graphical user interface, open a browser window and enter the URL http://127.0.0.1:8080/ui/index.html. This will open a page that shows the Floodlight Dashboard.

Floodlight GUI

The dashboard shows the switches and hosts in the network. Users can get more details about each switch or host by clicking on the switch’s DPID or the host’s MAC address in the dashboard. For example, the details for switch gs0 are shown below:

Floodlight shows flows on switch

Clicking on the Topology tab shows the Floodlight controller’s view of the network.

Floodlight topology view

So we can see that OFNet allows users to experiment with different functions of SDN controllers like Floodlight.
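The dashboard is built on Floodlight’s REST API, which you can also query directly. For example, the following request (assuming the controller is running locally on its default port 8080) returns the connected switches as JSON:

```shell
# Query Floodlight's switch list over REST (controller must be running).
url="http://127.0.0.1:8080/wm/core/controller/switches/json"
curl -s "$url" || echo "(controller not reachable)"
```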

OFNet commands

The OFNet SDN network emulator provides many built-in commands that allow users to perform functions such as running commands on hosts, generating random pings, pinging all nodes, animating OpenFlow events, and more.

Host Shell command, hsh

The host shell command, hsh allows the user to execute commands on any host in the emulated network. For example, to make the host gh1 ping host gh24, execute the following OFNet command:

$ hsh gh1 ping -c 1 10.0.0.24

Or, to list the contents of the current directory in the gh1 host’s filesystem, enter the command:

$ hsh gh1 ls

Using hsh, the OFNet user may run commands like ip, ifconfig, ping, netperf, etc… to make configuration changes in the hosts and to generate network events.

Forwarding State command, fstate

The OFNet fstate command will output graphical information related to the last command executed in the network. For example, if we launch a ping command from host gh4 to host gh15, we execute the command:

$ hsh gh4 ping -c 1 10.0.0.15; fstate

The command opens a web browser to display an HTML file containing the forwarding state information collected by the fstate command:

Output of fstate command

The fstate command seems to be useful for demonstrating the OpenFlow forwarding state to people learning about SDN. I think this is one feature of OFNet that would be useful to educators.

OpenFlow Event command, ofevent

The OpenFlow Event command ofevent provides more information about the OpenFlow messages exchanged between the SDN controller and the network switches when a network event occurs, and also provides flow state information.

For example, if we want to see the OpenFlow events generated by a ping command from host gh18 to host gh24, we run the command:

$ ofevent hsh gh18 ping -c 1 10.0.0.24

The ofevent command creates a report consisting of flow diagrams and event diagrams to show how the controller and the network nodes communicate together to create the OpenFlow forwarding tables.

The first diagram illustrates the OpenFlow messages and flows created in the network.

OpenFlow event diagram

The Control Plane Transactions chart shows OpenFlow messages between the switches and the SDN controller:

Control Plane Transactions

The Forwarding Diagram shows the forwarding state after the flows have been set up, similar to the output of the fstate command above.

Forwarding state diagram

Users may hover the mouse pointer over the arrowheads in the diagrams to see what events are represented by each arrow. These diagrams and reports are useful when you are building your own OpenFlow applications and wish to see how flows are created on the network.

The ofevent command appears to be another command that would be useful to educators. However, the output of the command quickly becomes overwhelming if you try to track more than one or two events. For example, try running the command ofevent pingall and see what happens.

Traffic Generator

OFNet includes a traffic generator command, tctrl, which will source and sink different types of data between hosts in the OFNet emulated network.

Run the tctrl help command to see all the command options:

$ tctrl help
     tctrl history last_n#
     tctrl failure_history last_n#
     tctrl fail_list
     tctrl start
     tctrl restart
     tctrl stop
     tctrl exit
     tctrl (all|web|dns|nfs|lsend|ftp|telnet|ping|multicast)(on|off)
     tctrl log off|on
     tctrl set_fps fps#
     tctrl get_fps 
     tctrl multicast_server gh#
     tctrl dns_server gh#
     tctrl nfs_server gh#

The OFNet VM comes with scripts named trafficup and trafficdown, which execute a series of tctrl commands to start and stop traffic in the network; trafficup also starts the traffic monitoring function. In the example below, we start generating and monitoring traffic.

$ trafficup
Starting traffic generator..
Erase Previous Failed Events ? (y/n)   
y
Traffic Generators Spawned

Traffic configuration file /tmp/traffic.conf does not exist.
Using defaults
Current Traffic Rate (Desired) : 100 Flows/Second
Current Traffic Rate (Actual) : 0 Flows/Second
Current Failure Rate : 0 Flows/Second
Total Flows Generated : 0 
Total Failures : 0 

Individual Interarrival Times (per host)
    DNS : 480 msecs
    Web : 1200 msecs
    Ping : 1600 msecs
    NFS : 4800 msecs
    Multi-cast : 4800 msecs
    Large-send : 12000 msecs
    FTP : 12000 msecs
    Telnet : 24000 msecs

EVENT_CLOCK_TICK_MS : 300 msecs Total hosts : 24

DNS Server (simulated) : gh2
NFS Server (simulated) : gh22
Multicast Server : gh12

Traffic generator started
All web traffic turned off
You may exit by running `trafficdown` from another window

After a while, the traffic monitoring dashboard will show the results of the traffic generated and the SDN controller behavior as shown below:

Traffic monitoring dashboard

Each point on the X-axis of each chart above represents ten seconds. So the example above shows 60 seconds of data.

See the scripts in the ~/ofnet/bin/sh folder for more examples of using tctrl command.

To stop traffic, run the trafficdown script from another terminal window.

$ trafficdown

Viewing traffic generation logs

To view the commands that OFNet runs on the hosts to generate traffic, use the tctrl history command. This is helpful if you are debugging unexpected results.

For example, the following command will output the last 4 commands that generated traffic in the emulated network:

$ tctrl history 4

279. On Host : gh2 Command : hsh gh2 netperf -H 10.0.0.2 -t UDP_RR -l -1 -- -r 128,128 > /dev/null 2>> error_traffic.log Return Status : 0

278. On Host : gh4 Command : hsh gh4 netperf -H 10.0.0.2 -t UDP_RR -l -1 -- -r 128,128 > /dev/null 2>> error_traffic.log Return Status : 0

277. On Host : gh3 Command : hsh gh3 ping -c 1 10.0.0.10 > /dev/null 2>> error_traffic.log Return Status : 0

276. On Host : gh3 Command : hsh gh3 netperf -H 10.0.0.2 -t UDP_RR -l -1 -- -r 128,128 > /dev/null 2>> error_traffic.log Return Status : 0

In the history file, we see each command “silences” its output by directing STDOUT to /dev/null but each command still logs errors by directing STDERR to the error_traffic.log file. See this explanation of the command syntax.
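This redirection pattern is easy to verify in any shell. The sketch below uses a throwaway log file to show that stdout is discarded while stderr is appended to the log:

```shell
# Demonstrate `> /dev/null 2>> logfile`: stdout is discarded,
# stderr is appended to the log file.
log=$(mktemp)
{ echo "normal output"; echo "an error" >&2; } > /dev/null 2>> "$log"
cat "$log"   # an error
```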

Other OFNet commands

Run the ofnet_cmds command to see a list of most of the OFNet commands.

$ ofnet_cmds 

-----------------------------
OFNet Quick command reference
-----------------------------

You have enter each of the command with -help option to view the detailed help.

topo_create -> Creates standard topologies - tree, ring, overlay

topoc -> Compiles a custom topology in to a n/w image

ctopo -> Start/stop network, add/delete links, bring down/up links, view current topology

hsh -> Run a command on a host

pingall -> Every host pings every other host

npings -> Generate n random pings

fstate -> Show current forwarding state

ofevent -> Run a command as an event and show what happened

event_animate -> Show the previously ofevent as an animation

trafficup -> Start traffic generator

trafficdown -> Stop traffic generator

tctrl -> View/control traffic generation

save_ofevent -> Saves a the output of last ofevent command as a zip file

view_ofevent -> To view saved ofevent as a zip file

ofclean -> Clean directories in case of errors

ofstat -> Display various statistics

Another useful command is the get_host_ip command, which will output the IP address if given the hostname as an input. For example:

$ get_host_ip gh1

Using WireShark with OFNet

Wireshark can be used to capture and display OpenFlow messages passing between the controller and the switches in the OFNet emulation. Wireshark can also capture and display network traffic passing on links between nodes in the emulated network.

To start Wireshark, run the command:

$ sudo wireshark &

To capture and display OpenFlow messages, select the loopback interface in Wireshark and start capturing.

Other system traffic is also running on the loopback interface so we need to filter the displayed packets so we see only the OpenFlow messages we want to see. Enter openflow_v1 in the display filter field as shown below. Now we see the OpenFlow messages. This is useful for analyzing controller or network application behaviors.

Captured OpenFlow traffic

Network traffic between hosts and switches may be viewed in Wireshark if we capture traffic from an interface created by OFNet. In this example, we will capture packets on the link gs0_to_gs1 since most traffic will pass through the “backbone” of the network.

Stop the capture and then select the interface gs0_to_gs1. Then, start the capture again.

Capture on interface *gs0_to_gs1*


Next, run an OFNet command to generate some traffic. In the example below, we ran the following command in a terminal window:

$ hsh gh1 ping -c 1 10.0.0.24

Now we see the data passing between host gh1 and host gh24 across the link between switches gs0 and gs1 in the emulated network.

Captured network traffic on link *gs0_to_gs1*


Quit the demo

To stop the network emulation and free up resources, use the ctopo netdown command, passing as an input parameter the same network file that you previously used to start the network.

$ ctopo netdown test.net

Then, quit the Floodlight SDN controller by entering the <Ctrl-C> key combination in the terminal window that was running Floodlight.

OpenDaylight SDN Controller

Now we’ve demonstrated most of OFNet’s major features. In the next sections, I discuss using OFNet to compare different SDN controllers. OpenDaylight is a popular SDN controller so I will install OpenDaylight in the OFNet VM and then show how it works in OFNet.

As I worked through the steps below, I found that OpenDaylight is not stable when interoperating with OFNet. You may find at times that it does not work as expected. I found that restarting the VM sometimes solved problems with OpenDaylight and OFNet.

Install Java 8

To install the latest version of the OpenDaylight SDN controller, we must first install Java version 8.

Install Java 8, since Boron requires it and Ubuntu 12.04 comes with Java 6:

$ sudo add-apt-repository ppa:openjdk-r/ppa
$ sudo apt-get update 
$ sudo apt-get install openjdk-8-jdk
$ echo "export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64" >> ~/.bashrc
$ source ~/.bashrc

Run the following command to verify that the correct version of Java will be used:

$ sudo update-alternatives --config java

The output should look like:

  Selection    Path                                            Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1069      auto mode
  1            /usr/lib/jvm/java-6-openjdk-amd64/jre/bin/java   1061      manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java   1069      manual mode

The asterisk indicates that java-8 is the default version of Java. Press <Enter> to continue.

Install OpenDaylight

Install OpenDaylight on the OFNet VM by running the commands shown below. I describe these commands in more detail in my post about Using OpenDaylight with Mininet.

First choose a directory where you will download OpenDaylight (I chose my home directory), download the OpenDaylight container and extract it from the compressed archive.

$ cd ~
$ wget https://nexus.opendaylight.org/content/groups/public/org/opendaylight/integration/distribution-karaf/0.5.0-Boron/distribution-karaf-0.5.0-Boron.tar.gz
$ tar -xvf distribution-karaf-0.5.0-Boron.tar.gz

Take a VirtualBox snapshot of the OFNet VM before you run OpenDaylight and install features in OpenDaylight. In the future, if you want to start from a clean install of OpenDaylight you will use this snapshot.

Run OpenDaylight and install useful feature bundles:

$ cd distribution-karaf-0.5.0-Boron
$ ./bin/karaf
OpenDaylight console


Next, install the bundles that enable functionality similar to switched Ethernet network, enable the DLUX OpenDaylight GUI, and enable host tracking:

opendaylight-user@root> feature:install odl-restconf odl-l2switch-switch-ui odl-mdsal-apidocs odl-dlux-all

Once installed, these bundles will start whenever OpenDaylight is started. They do not need to be re-installed after shutting down OpenDaylight. In fact, installed bundles cannot be easily removed. If you need to go back to the OpenDaylight base configuration, restart the OFNet VM from the VirtualBox snapshot you created above.

Remember: When you want to stop the controller, enter the key <ctrl-d> combination or type system:shutdown or logout at the opendaylight-user@root> prompt.

Use OpenDaylight with OFNet

After installing OpenDaylight in the previous section, it should be controlling the switches in the OFNet emulated network.

While testing OpenDaylight in the same way I tested Floodlight above, I noticed that OFNet does not track OpenFlow messages generated by OpenDaylight in the same way it tracks OpenFlow messages generated by Floodlight.

ofevent and fstate commands with OpenDaylight

The ofevent and fstate commands do not work when using OpenDaylight to control the network. They do not show any flows or forwarding status changes. I think that OFNet needs to be updated to correctly parse OpenFlow messages generated by OpenDaylight.

Using the OpenDaylight GUI

In addition to seeing that traffic can be sent from one host to another, we can use the OpenDaylight GUI to see how OpenDaylight discovers nodes in the emulated network.

Open a browser window and type in the OpenDaylight GUI URL, which is http://127.0.0.1:8181/index.html. The OpenDaylight DLUX GUI will appear. We should see the topology of switches.

OpenDaylight network view at startup


OpenDaylight does not automatically discover hosts, but it can discover a host when it receives a packet from that host. For example, here is the change in OpenDaylight's view of the topology after running the command:

$ hsh gh1 ping -c 1 10.0.0.24
OpenDaylight network view after one ping


When we ping to and from every host using the pingall command, OpenDaylight learns about all hosts in the network:

OpenDaylight network view after pingall


This shows that OpenDaylight is working with OFNet.

Comparing controllers

OFNet’s traffic generation and monitoring features make it possible to characterize the performance of different network controllers or network applications. As an example, let us compare the monitored OpenFlow messaging behavior of two different network controllers: Floodlight and OpenDaylight.

Here we use OFNet’s traffic monitoring function to look at the differences in behavior of the two controllers. This first dashboard shows the Floodlight controller’s performance measured over a period of five hundred seconds.

Floodlight controller dashboard


This next dashboard shows the OpenDaylight controller’s performance measured over a period of five hundred seconds.

OpenDaylight controller dashboard


Comparing these two results shows us that the two controllers seem to be operating very differently. For example: OpenDaylight seems to be able to offer network connectivity to all hosts while generating significantly fewer OpenFlow messages and a much smaller number of flow table entries.

I cannot confirm that the monitoring is accurate. For example, the OFNet ofevent and fstate commands could not interpret OpenFlow messages sent by OpenDaylight, so the OFNet traffic monitoring feature may also be misinterpreting OpenDaylight's OpenFlow traffic. More analysis is needed before drawing a firm conclusion.

The above example shows the potential to use OFNet to characterize controller behavior and performance.

Conclusion

OFNet is a new project designed to emulate SDN networks. It provides visual presentations of control plane transactions. OFNet offers a virtual machine image you can try. OFNet is an open source project and the source code is included on the OFNet VM, in the ~/ofnet/src directory.

The OFNet project needs better documentation. I e-mailed the developer to understand the status of the project and he says it is an active project and that updates to the project web site will be coming in the future.

DNS and BIND demonstration using the Cloonix network emulator


The Domain Name System (DNS) is a fundamental Internet technology. Network emulators like Cloonix offer a way for researchers and students to experiment with the DNS protocol and with the various open-source implementations of DNS, such as BIND.

In this post, I will install Cloonix from the Github source code repository. I will run the Cloonix DNS demo script to create a simple DNS scenario and then run some experiments with DNS. Along the way, I will demonstrate some of the new Cloonix version 33 features.

Cloonix version 33

In this demonstration, I am using Cloonix version 33. I last used Cloonix when it was at version 29 and version 33 offers some significant changes and improvements. Compared to version 29, the major changes in version 33 are:

  • The Cloonix source code is now hosted on GitHub
  • The cloonix_ctrl commands have been renamed to cloonix_cli
  • The Cloonix lan object is now much simpler
  • Cloonix adds a simple GUI called cloonix_zor for managing Cloonix servers that have been started
  • The nat object replaces the cloonix slirp LAN
  • New demo scripts have been added; we will use one of them, the DNS demo script, in this demonstration

Using Cloonix version 33

If you are not already familiar with using Cloonix, please see my previous posts about using Cloonix. Commands may be slightly different in my older posts but they should still provide you with the information you need to use Cloonix version 33 commands and the Cloonix GUI. Also, please refer to the Cloonix documentation.

Install Cloonix version 33

The Cloonix source code is now hosted on GitHub. You will need to install git on your system if it is not already installed. To install Cloonix version 33, run the following commands:

cd ~
git clone https://github.com/clownix/cloonix.git
cd cloonix
sudo ./install_depends build
./doitall

New KVM virtual machine images have been created for Cloonix version 33. We’ll download the Debian Jessie filesystem for our DNS experiments.

cd ~
mkdir -p cloonix_data/bulk
cd cloonix_data/bulk
wget http://cloonix.fr/bulk_stored/jessie.qcow2.xz
unxz jessie.qcow2.xz

Updating Cloonix

The Cloonix development team is very responsive to bug reports. If you find an issue, they will fix it. To install the fix, you need to pull down the changes and reinstall Cloonix.

To reinstall Cloonix, run the following commands:

cd ~/cloonix
git pull
sudo ./allclean
./doitall

Cloonix demo scripts

The Cloonix development team created demo scripts to set up network emulation scenarios that demonstrate different network functions. Look in the demo folder on the Cloonix web site to find demo scripts.

Demo scripts are shell scripts that run cloonix CLI commands to build complex network topologies and to configure the nodes in the created topologies. Demo scripts are small and easy to share because they are just text files. The scripts will use filesystems stored in the ~/cloonix_data/bulk directory so you must download the necessary filesystems separately before running the demo scripts. You may view demo scripts in a text editor and see which filesystems you need to download to support the script.

You may also edit the scripts to modify their behavior. For example: if you were using these scripts in a classroom lab, you might consider modifying the path to the filesystem in the script to point to filesystems that are stored on a local server so students do not need to download any additional files manually.
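One hedged way to see which filesystems a demo script expects is to search the script for qcow2 references. The sample script below is illustrative only (it stands in for a real demo script such as dns.sh), but the grep pattern works the same way on the real scripts:

```shell
# Create an illustrative stand-in for a Cloonix demo script
# (a real script such as dns.sh would be downloaded from the Cloonix web site).
cat > /tmp/sample_demo.sh <<'EOF'
#!/bin/bash
BULK=${HOME}/cloonix_data/bulk
cp ${BULK}/jessie.qcow2 ${BULK}/dnsutils.qcow2
EOF

# List every qcow2 filesystem the script references, so you know what
# to download into ~/cloonix_data/bulk before running it.
grep -o '[A-Za-z0-9_.-]*\.qcow2' /tmp/sample_demo.sh | sort -u
```

Running this against the sample prints dnsutils.qcow2 and jessie.qcow2, telling you that jessie.qcow2 is the base filesystem you must download first.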

DNS and BIND demo script

We will use Cloonix to demonstrate the configuration, operation, and troubleshooting of DNS in an emulated network scenario. The Cloonix DNS demo script will set up seven VMs using the jessie.qcow2 filesystem we downloaded earlier, configure them as either DNS servers or as standard clients and servers, and then use DNS to find the IP address of a remote server and start data flow between a client VM and a server VM in the same domain.

After we set up the demo, we will do some troubleshooting and also show how to add hosts to the network so they can be found via DNS.

The DNS demo script is available on the Cloonix web site. Download and run the script.

wget http://cloonix.net/demo_stored/dns.tar.gz
tar -xvf dns.tar.gz
cd dns
./dns.sh

You must be connected to your local network and have access to the Internet to run this script successfully because it will install software — bind9 and dnsutils — from the Debian Linux repositories onto the VMs it creates. The script copies the jessie.qcow2 filesystem we downloaded earlier to create two new filesystems: dnsutils.qcow2 and bind9.qcow2. Then it installs software on these new filesystems and uses them as the base filesystems for creating the VMs in the demonstration scenario.

When the script completes execution, you should see a Cloonix GUI window with six nodes:

After the setup completes, the script starts pinging from client to server.cloon.net, demonstrating that DNS is working correctly.

Press Ctrl-C to stop the script and the ping commands.

Experiments with DNS

Now let’s experiment to see how DNS is configured in this network.

We expect that the first time we try to access server.cloon.net, the client will send a request to the cachedns name server, which will query the rootdns, netdns, and cloondns name servers in turn. After that, the DNS information will be cached on the cachedns server, so future requests will be answered from its cache (until the cached record times out after ten seconds).

Double-click on the client VM to open a terminal window. Then run the commands:

client# dig +short
rootdns.
client# dig rootdns +short
20.0.0.1

We see the root server is named rootdns and is at IP address 20.0.0.1. Next, let us find the IP address of the server VM. It is in the cloon.net domain:

client# dig server.cloon.net +short
24.0.0.1

We see that the IP address for server.cloon.net is 24.0.0.1.

Now, let’s view the DNS messages that the client VM exchanges with the name servers to determine the IP address of server.cloon.net. First, we add a sniffer to the Cloonix topology. Right-click on the Cloonix GUI and select snf from the menu.

Then connect the new snf object to the lan3 object so it is monitoring traffic on the link to the cachedns name server. Then, start data capture by double-clicking on the snf object so it turns red. It is now capturing network traffic to a pcap file.

Next, ping from client to server. This will cause the client VM to request the IP address of the server VM from its defined name server, cachedns. Now open Wireshark to view the contents of the sniffer’s pcap file. Right-click on the snf object and select Wireshark from the menu:

We see in the packet capture file that the cachedns server receives a query, then queries rootdns, netdns and cloondns in turn to determine the IP address of server.cloon.net, and sends that record to client so it can ping the correct IP address.

If you want to stop data capture, double-click on the snf object again so it turns green.

Add a new node to the network

Cloonix allows users to add VMs while the network emulation is running. We can add a new VM, configure it as a server named server2, and connect it to the same LAN as the original server named server. Then we can show how a network administrator would set up DNS to allow server2 to be found on the network by using its hostname.

First, we need to set up the Cloonix VM template to use the correct configuration. Right-click on the Cloonix GUI and select kvm_conf from the menu:

In the VM configuration window that appears, name the VM “server2”, uncheck the Append: end number box, change the number of CPUs to 1, the RAM to a number that is appropriate for your system, set the Rootfs to use the dnsutils.qcow2 filesystem and set the number of Ethernet ports to “one”. The configuration window should look the same as below:

Click “OK” on the VM configuration window. Next, add the VM by right-clicking on the Cloonix GUI and selecting kvm from the menu:

A VM named server2 will appear on the GUI. Next, add it to the same LAN as the VM named server by connecting it to the lan4 object. Double-click on lan4 so it turns pink, then click on port 0 on the VM named server2.

The completed network will look like the network below. We see server and server2 connected to cloud via lan4.

Configure server2 so it can connect to the LAN, has a default route, and has the correct hostname. Double-click on server2 in the Cloonix GUI to open a terminal window. In the server2 terminal window, enter the following commands:

server2# ip addr add dev eth0 24.0.0.3/24
server2# ip link set dev eth0 up
server2# route add default gw 24.0.0.2
server2# rm /etc/hostname
server2# echo "server2" > /etc/hostname
server2# hostname server2
server2# echo "nameserver 23.0.0.1" > /etc/resolv.conf

Next, add server2 to the cloondns zone file. The DNS configuration could normally live in a single file, but the Cloonix developers split the configuration over multiple files referenced from the main file, which is a best practice for large DNS configurations. The file containing the host records for the domain is /etc/bind/cloondns.db.
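As a sketch of how BIND ties these files together, a zone is declared in the main configuration (typically named.conf or a file it includes) with a statement like the one below. The exact file names on the cloondns VM may differ:

```
zone "cloon.net" {
    type master;
    file "/etc/bind/cloondns.db";
};
```

Any record appended to the referenced zone file becomes visible after BIND is restarted or the zone is reloaded.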

cloondns# echo "server2      IN  A 24.0.0.3" >> /etc/bind/cloondns.db
cloondns# service bind9 restart

client# ping server2.cloon.net
PING server2.cloon.net (24.0.0.3) 56(84) bytes of data.
64 bytes from 24.0.0.3: icmp_seq=1 ttl=63 time=1.37 ms
64 bytes from 24.0.0.3: icmp_seq=2 ttl=63 time=1.93 ms

Now we have a new node, and other nodes in the network can reach it at server2.cloon.net. We have demonstrated the steps required to add a new node to a DNS domain.

DNS configuration files

As you do more experiments with this demonstration you may notice that there are some problems with the DNS configuration that need to be fixed by adding information to DNS configuration files on different nodes in the network.

For example, try to ping from server to client and see what happens. Open a terminal window on server and type the following command:

server# ping -c 1 client.cloon.net

This is an opportunity for you to exercise your knowledge of DNS and BIND configuration files to fill in the necessary information to make it possible to find IP addresses from any node in the cloon.net domain. I provide the necessary configuration information below.

Hint: use the dig command to check if a host name is in the DNS database, or if a name server is unavailable. You may use a zone file generator to create zone file examples you can use.
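For reference, a minimal BIND zone file for a domain like cloon.net might look like the sketch below. The serial number, timers, and name-server name are illustrative; the host records use the IP addresses seen earlier in this demonstration, and the short TTL mirrors the ten-second cache timeout described above. Match the details to the actual files on the cloondns VM:

```
$TTL 10
@        IN  SOA  cloondns.cloon.net. admin.cloon.net. (
                  2017010101 ; serial
                  3600       ; refresh
                  600        ; retry
                  86400      ; expire
                  10 )       ; negative-caching TTL
@        IN  NS   cloondns.cloon.net.
server   IN  A    24.0.0.1
server2  IN  A    24.0.0.3
client   IN  A    25.0.0.1
```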

DNS configuration fixes

As we mentioned above, some configurations are missing in this DNS demo. I will describe how to complete the DNS configuration so that the machines server and client may find each other’s DNS records.

We see that we can ping server.cloon.net from any VM in the network. But what if we try to ping the client virtual machine? For example: what if I want to ping client.cloon.net?

client# ping client.cloon.net
ping: unknown host client.cloon.net

To fix this, add the client VM's record to the BIND configuration on the cloondns VM. Note that the cloondns name server's named.conf file points to a set of files that split the configuration into smaller pieces; the file that contains the list of hosts in the domain is /etc/bind/cloondns.db. On the cloondns VM's terminal, enter the commands:

cloondns# echo "client       IN  A 25.0.0.1" >> /etc/bind/cloondns.db
cloondns# service bind9 restart

Now client can ping client.cloon.net — that is, ping itself — through the network. But, what if we ping the client VM from the server VM? We get another error. This time, server cannot find a nameserver from which it may request the IP address of client:

server# ping client.cloon.net
ping: unknown host client.cloon.net

We fix this problem by configuring a nameserver for the server VM; it should use the cachedns VM, at IP address 23.0.0.1.

Enter the following command on VM server:

server# echo "nameserver 23.0.0.1" > /etc/resolv.conf

Now, we should be able to ping in either direction between client and server.

server# ping client.cloon.net
PING client.cloon.net (25.0.0.1) 56(84) bytes of data.
64 bytes from 25.0.0.1: icmp_seq=1 ttl=63 time=1.00 ms

--- client.cloon.net ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.001/1.001/1.001/0.000 ms

The Cloonix_zor window

Cloonix version 33 now offers another GUI window, called Cloonix Hypervizor, that displays and manages Cloonix servers. It shows each Cloonix object in a list view. It may not seem very useful at first, but it is helpful when managing large emulation scenarios with many nodes and other objects, where the normal GUI can become difficult to read. It is also useful for managing multiple Cloonix servers.

Start the Cloonix Hypervizor window by running the command:

$ cloonix_zor

The Cloonix Hypervizor window appears. It shows the currently-running Cloonix servers; in this case, there is one server, named nemo.

Expand the nemo server and all the objects under it to see every object currently running and managed by Cloonix:

You may interact with objects by expanding them to show available options and then clicking on an option. For example, you may start an SSH terminal connected to a node or you may kill a node. In our case, we want to kill the nemo server to stop the entire network emulation scenario. Click on the “kill” option under the nemo Cloonix server:

After clicking on “kill” you will be asked to confirm. Click on “Kill” in the dialog box that pops up.

The Cloonix graph GUI disappears and the Cloonix server nemo also disappears from the Cloonix Hypervizor window.

At this point, Cloonix is no longer running. You may stop the Cloonix Hypervizor by closing the window, or you may start a new Cloonix server or run a new Cloonix demo script.

If you do not want to use cloonix_zor, you can end the demonstration and shut down Cloonix by running the following command on your host computer:

$ cloonix_cli nemo kil

Conclusion

We used a Cloonix demo script to build a network emulation scenario that demonstrates the operation and configuration of DNS name servers. We reviewed the changes in Cloonix version 33.

The Cloonix development team tells me they are planning to develop more demo scripts over the next few months. Demo scripts add to the utility of the Cloonix network emulator and are useful as either standalone scripts, or as examples to show users how to build complex demo scripts of their own.

How to set up the UNetLab or EVE-NG network emulator on a Linux system

$
0
0

EVE-NG and UNetLab are graphical network emulators that support both commercial and open-source router images. UNetLab is the current, stable version of the network emulator and EVE-NG is an updated version of the same tool, available as an alpha release. The UNetLab/EVE-NG network emulator runs in a virtual machine, so it can be set up on Windows, Mac OS, or Linux computers. Its graphical user interface runs in a web browser.

In this post, I will show how to set up an EVE-NG virtual machine on an Ubuntu Linux system. I'll show the basic steps to create and run a simple lab consisting of emulated Linux nodes. The procedure is the same for UNetLab.

Why EVE-NG instead of UNetLab?

EVE-NG is the new version of UNetLab. In addition to updating and re-working the software, the developers changed the project’s name.

At the time I wrote this post, UNetLab and EVE-NG are still very similar. However, the developers have stopped developing UNetLab. Any problems found in UNetLab will be fixed only in EVE-NG, and any changes to the EVE-NG user interface or new EVE-NG features will not be back-ported to UNetLab.

So, I decided to start my work with the UNetLab/EVE-NG network emulator by using the latest version, EVE-NG, even if it is still in alpha.

EVE-NG Overview

EVE-NG is a clientless network emulator that provides a user interface via a browser. Users may create network nodes from a library of templates, connect them together, and configure them. Advanced users or administrators may add software images to the library and build custom templates to support almost any network scenario.

EVE-NG supports multiple pre-configured hypervisors on one virtual machine. It runs commercial network device software on Dynamips and IOU and runs other network devices, such as open-source routers, on QEMU.

EVE-NG is the next version of UNetLab. It is an open-source project and the EVE-NG source code is posted on GitLab. At the time this post was written, the EVE-NG developers are raising funds to support ongoing EVE-NG network emulator development. They are also developing an EVE-Cloud hosted solution that (I assume) will allow users to pay for access in exchange for a hosted solution on a remote cloud server.

Since it runs in a virtual machine, EVE-NG may be set up on any operating system such as Windows, Linux, or Mac OS. In this post, I focus only on the specific issues related to getting EVE-NG working on a Linux system. For users of other operating systems, the EVE-NG development team provides good information on setting it up on Windows (Setup, Integration) or Mac OS (Setup, Integration).

Set up Telnet, VNC, and Wireshark

When you click on nodes in the EVE-NG user interface, the browser will try to open a terminal window or a VNC window to connect to the node, or may run Wireshark to capture network traffic from one of the node's interfaces. We need to set up the browser to handle the custom protocol schemes used by EVE-NG, such as telnet:// and capture://, so it can launch these programs with the correct parameters to support requests from the EVE-NG virtual machine.

As a first step, we will install the applications that EVE-NG uses when interacting with nodes and integrate the new protocol handlers into the web browser. In this case, I am using Firefox, but this procedure should work with other browsers as well.

This can be done manually by editing the protocol handlers in the browser’s configuration files but the easiest way is to use the UNetLab X Integration script created by Sergei Eremenko. Sergei’s script is available on GitHub.

To install the script, execute the following commands in a terminal window on your Linux computer:

$ wget -qO- https://raw.githubusercontent.com/SmartFinn/unetlab-x-integration/master/install.sh | sh
$ sudo usermod -a -G wireshark $USER

The script installs telnet, wireshark, and a VNC viewer. It also configures the new protocol handlers.

Log out and back in again to make your changes take effect.
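Under the hood, integrations like this typically register freedesktop protocol handlers. A hand-rolled sketch of the idea is shown below; the file name and the xterm-based Exec line are illustrative, not what Sergei's script actually installs:

```
# ~/.local/share/applications/telnet-handler.desktop
[Desktop Entry]
Name=Telnet protocol handler
Exec=xterm -e telnet %u
Type=Application
NoDisplay=true
MimeType=x-scheme-handler/telnet;
```

A handler file like this would then be registered with a command such as xdg-mime default telnet-handler.desktop x-scheme-handler/telnet, after which the browser can hand telnet:// URLs to the desktop environment.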

Install VMware Workstation Player on Ubuntu Linux

EVE-NG runs KVM/QEMU virtual machines inside another virtual machine running on our host computer. This nested virtualization setup is not supported by VirtualBox, my usual VM manager, so I used VMware Player, the solution recommended by the EVE-NG developers.

Download the latest version of VMware Workstation Player from the following URL: http://www.vmware.com/go/tryworkstation-linux-64. In this example, we download the file VMware-Player-12.5.1-4542065.x86_64.bundle.

Make the file executable, then run the file to install VMware.

$ cd ~/Downloads
$ chmod +x VMware-Player-12.5.1-4542065.x86_64.bundle
$ ./VMware-Player-12.5.1-4542065.x86_64.bundle

The VMware install wizard will start. Accept the license terms and follow the prompts to install VMware.

When you get to the license window, select the button that you agree to use VMware Player for only non-commercial use.

Follow the prompts until the VMware installation is completed. Start VMware player from your application launcher or enter the following command in the terminal:

$ vmplayer &

The VMware player window opens up. We can now download and import the EVE-NG virtual machine in VMware Player.

Download the EVE-NG virtual machine

Download the EVE-NG OVA file from the EVE-NG Downloads page. In this case, we downloaded the file EVE-ALFA.ova. It is over one gigabyte in size, so it may take a long time to download.

The EVE-NG virtual machine image is hosted in two locations: a server in Russia and on the MEGA download site. I’m not sure about the security issues related to either site but I downloaded EVE-NG from the mail.ru server in Russia and I did not experience any problems afterward.

Set up EVE-NG VM in VMware Player

Next, create a new virtual machine in VMware Player. Click on the Open a Virtual Machine button in the player window.

A file browser window will appear. Navigate to the folder containing the EVE-NG OVA file, select the file and click on the Open button.

This opens the Import Virtual Machine window. Give the virtual machine a name and click the Import button.

VMware Player will import the EVE-NG appliance and it will appear in the player window.

Next, click on the Edit virtual machine settings button at the bottom of the window.

Set up the virtual hardware. In the VMware Player settings window, make the following changes:

  • Memory
    • Set the VM memory size to 4GB
  • Processors
    • Check the Virtualize Intel VT-x/EPT or AMD-V/RVI box to enable nested virtualization
    • Set the number of processors to match the number of physical cores in your system. In my case, I have a dual-core Intel Core i5 processor, so I set the value to “2”.
  • Network Adapters
    • Configure the first network adapter as “NAT”
    • Remove unused network adapters

Now your VM is configured. Click the Save button to complete the setup process.

Start EVE-NG VM for the first time

Start the EVE-NG virtual machine in VMware Player. The VMware Player console will show the EVE-NG login prompt, with the default root password and the virtual machine's IP address displayed above it. This is important information: the root password is used to log in for the first time, and the IP address is what we will use to connect to the VM via SSH and HTTP.

In the VMware Player console window, log into the EVE-NG virtual machine using the root password displayed on the screen. In this case, the userid is root and the password is eve.

Next, enter in the information requested by the EVE-NG setup script. First, you need to create a new root password. You’ll need to also enter it a second time to confirm it.

Choose a hostname for the EVE-NG VM. I chose eve-ng.

Use the default setting for the interface IP address. In this case, it is dhcp.

You may leave the NTP address blank. I did.

Choose the appropriate value for the method the VM will use to connect to the Internet. In some cases you may need to configure a proxy. In my case, I chose Direct connection.

The EVE-NG setup script will complete and the VM will now start. First, you’ll see the EVE-NG splash screen. This appears every time you start EVE-NG.

Then the VMware Player console will show the EVE-NG login prompt. Again, look at the IP address displayed above the login prompt. Use this address to connect to the VM via SSH and HTTP. Note that the default root password still appears but it is no longer correct, since you changed it.

I use a terminal window to connect to the EVE-NG VM because the VMware Player console does not support copy-and-paste, or other useful terminal functions. To connect to the EVE-NG virtual machine, use the ssh command and connect to the root account at the IP address displayed in the VMware console window.

In my case, the command would be:

$ ssh root@172.16.66.128

Then log in with the root password you defined when you started EVE-NG for the first time. Now we are logged into the EVE-NG VM on a terminal window. We’ll use this terminal window to set up images and make configuration changes in EVE-NG.

The next step is to update EVE-NG to the latest version. To update EVE-NG, run the following commands in the EVE-NG terminal window:

# apt-get update
# apt-get upgrade

After upgrading, connect to the EVE-NG graphical user interface using a browser. I used Firefox. I started Firefox and entered the IP address that appears in the EVE-NG console window: http://172.16.66.128.

The default userid for the graphical user interface is admin. The password is unl.

  • Userid = admin
  • Password = unl

Now the EVE-NG graphical user interface appears. Congratulations! You have set up the EVE-NG virtual machine on a Linux system.

In the rest of this post, we will verify that EVE-NG works with open-source images by adding a Linux image and creating a simple network topology with the new image. I’ll also demonstrate that the Firefox browser can open the protocol handlers launched by EVE-NG to open VNC connections to the Linux nodes and to open Wireshark to capture network traffic.

Add images to EVE-NG

EVE-NG does not come with images already provided. Users must find software images to support the nodes that will run in the EVE-NG network emulation scenario they wish to create. Users must download images to directories on the EVE-NG virtual machine.

EVE-NG comes with templates configured to support various commercial routers and network appliances and also provides templates for a few open-source alternatives such as VyOS and Linux. Templates are stored in the folder /opt/unetlab/html/templates/. The default Linux template provided by EVE-NG supports Linux nodes as simple end-points on a network to emulate users or edge devices, specifies a Linux system that boots from a CDROM or DVD image (an ISO file), and uses VNC to connect.

EVE-NG will support full-featured Linux nodes with persistent file systems but it requires the user to create custom templates and to take a few extra steps which, to keep this post at a reasonable length, I will discuss in a future post. For now, we’ll use the default Linux template and download a Linux ISO file that is compatible with it. In this case, I chose to use Puppy Linux.

First, create the directory in which you will store the image. EVE-NG requires that the directory name follow a specific format. Each image is stored in a sub-directory of the /opt/unetlab/addons/qemu/ directory. The directory name uses a naming convention: a prefix that matches the template name, followed by some additional text chosen by the user to uniquely identify the image.

For example, each image compatible with the “Linux” template must be stored in its own directory and the directory name must start with “linux-“. The dash is important. The unique text that completes the directory name must follow the dash. In my case, I named the directory “linux-puppy”.

In the EVE-NG terminal window, enter the following command:

# mkdir -p /opt/unetlab/addons/qemu/linux-puppy

Next, go to that directory and download the Puppy Linux ISO:

# cd /opt/unetlab/addons/qemu/linux-puppy
# wget http://distro.ibiblio.org/puppylinux/puppy-tahr/iso/tahrpup64-6.0.5/tahr64-6.0.5.iso

Change the name of the file to cdrom.iso. EVE-NG expects that every ISO image will be named cdrom.iso.

# mv tahr64-6.0.5.iso cdrom.iso

Finally, ensure that permissions are set up correctly. EVE-NG provides a script to fix any permission problems. You should run this whenever you add a new image:

# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions

Now the image is set up and we can use it in EVE-NG.

Create a new project

Now that we have an image prepared to use in EVE-NG, go back to the browser window displaying the EVE-NG user interface.

First, I will create a new folder. This is optional and you may create labs in the root directory if you want to.

The EVE-NG user interface main window displays a file manager. To create a new folder, enter the new folder name in the text box and click on the green Add folder button. In this example, I am creating a folder named brian.

We see the new folder in the EVE-NG user interface. Open the folder by double-clicking on it. Next, select the Add new Lab icon from the tool bar:

A dialog box will open requesting information about the lab. Here you must enter the lab name and the version. Other optional fields are available, such as the Description field, so you can provide information to future users who may open this lab file.

Click on the Save button to create the lab file. You will see the lab file appear in the EVE-NG window. Click on the file to see a thumbnail image of the lab topology — which is just a blank page right now — and the lab description if you populated that field in the lab configuration.

Below the lab thumbnail and description is a set of buttons that allow a user to manage the lab. You may open the lab, edit the lab information, or delete the lab file.

Click on the Open button. The EVE-NG network topology will appear. Currently, we have nothing configured in the topology so it is blank. We use the toolbar on the left to create and manage elements of the lab topology.

To add a new node, click on the Add an object tool — the “plus” sign at the top of the tool bar — and then select Node from the menu that appears.

Move the node to your preferred location in the window and then click to configure it. The Add a New Node dialog box appears. You may select the template you wish to use. Scroll down and select the Linux template.

The template form appears. Here we can add details about the nodes we will create. In this case, we chose the linux-puppy image we added earlier and reduced the RAM to 512 MB. We also set the Number of nodes to add field value to “2”. This will create two nodes at the same time.

After clicking on Save, two nodes appear on the EVE-NG canvas. Notice that each one uses the name configured in the previous form with a number appended to it. Arrange the nodes as shown below.

Add a link between both nodes. Use the Connect node tool from the toolbar.

The Connect node tool will now be colored red. Now click on each node and select the interfaces that will be connected together. For example, we click on the node named Linux1 and select interface e0.

Then we click on the node named Linux2 and select interface e0.

Now we have a connection between the two nodes.

Click on the Connect node tool again to de-select it. It will turn gray again.

To start the network emulation scenario, click on the More actions tool and select Start all nodes.

The little square symbol (the “stop” icon) beside each node’s name should change to a sideways triangle (the “play” icon). Now both nodes are running — but it may take a few more seconds for them to complete their boot processes.

Connect to nodes

Now that the two Linux nodes are running, we need to connect to the user interface on each one so we can configure it and execute commands.

To connect to a node, click on it in the EVE-NG window. A new window opens that asks you which application should connect to the node. Choose the Remote Desktop Viewer, which will open the VNC application.

A VNC window will appear that displays the desktop on the node. Click on the other node to view its desktop, also. Now you can run applications on each node.

However, there is a problem. The mouse pointer in the VNC viewer does not track exactly with the mouse inputs from the host computer. This makes it difficult to click on icons or toolbars in the Puppy Linux desktop on either node. I could not find a fix for this issue but it seems to be a known problem.

Two mouse pointers? VNC mouse pointer does not track host mouse pointer exactly.

One workaround is to reduce the size of the desktop in the VNC window. The smaller desktop makes it easier to match the mouse in the VNC window with the mouse input from the host computer. To do this, launch the terminal window on the Puppy Linux desktop of each node.

It may be difficult to click on the Console icon because the mouse will not cooperate. You may also open a terminal window in Puppy Linux by right-clicking anywhere on the desktop and selecting Utility –> Urxvt terminal emulator from the menu.

In the terminal window on each node, enter the following command:

# xrandr -s 800x600

Now the desktop is much smaller and you can realign the mouse pointer by moving off the edge of the screen. This does not solve the mouse pointer issue but it does make it easier to work with it.

Capture data

To launch Wireshark and capture data on the network interfaces in the EVE-NG emulation scenario, right-click on a node and then select the interface from which you wish to capture data.

A new window will appear asking you which application should be used. Select UNetLab-X-Integration.

An OpenSSH window appears and asks for the EVE-NG VM’s root password. Enter the root password you configured previously in the section titled Start EVE-NG VM for the first time.

Now go back to the VNC viewer and enter the following commands in the terminal window in each node.

On Linux1:

# ip addr add 192.168.1.100/24 dev eth0 broadcast 192.168.1.255
# ip link set eth0 up

On Linux2:

# ip addr add 192.168.1.101/24 dev eth0 broadcast 192.168.1.255
# ip link set eth0 up

From Linux2, ping Linux1. Execute the following command on Linux2:

# ping -c 1 192.168.1.100

Now you should see some packets appear in the Wireshark window. This simple scenario is not very interesting but it shows that packet capture is working.

Stop lab

Now we are done testing the EVE-NG virtual machine. We will stop the lab and exit the virtual machine.

To stop the running nodes, click on the More actions tool in the toolbar and select Stop all nodes.

Next, close the lab by clicking on Close lab in the toolbar.

Now that we have some nodes configured in the lab, EVE-NG displays an image of the lab topology in the file manager window. This helps us quickly identify labs when we have more than one lab in the file manager.

Quit the EVE-NG user interface by clicking on the Sign out button.

Finally, in the EVE-NG terminal window, shut down the virtual machine by running the command:

# shutdown -h now

Conclusion

We downloaded the EVE-NG VM and set it up in VMware Player. We integrated the EVE-NG media handlers with the Firefox browser. Then we demonstrated a very basic network topology using the QEMU hypervisor in EVE-NG.

To make further use of network nodes powered by open-source software in the EVE-NG network emulator, we need to explore more of the EVE-NG features. We will create custom node templates and build images running open-source software in EVE-NG. Finally, we will use those images to create more complex network topologies. I will cover these topics is a future post.

As a lower-priority followup project, I am investigating how to set up and run EVE-NG on a Linux system using only QEMU/KVM instead of the commercial VMware Player application. As far as I know, QEMU/KVM should support the nested virtualization features that EVE-NG requires. I will try to create a fully open-source tool chain when working in EVE-NG with open-source routers and network appliances.

Build a custom Linux Router image for UNetLab and EVE-NG network emulators


The UNetLab and EVE-NG network emulators can become powerful tools for emulating open-source networks. However, when first installed, they support Linux images only in a limited way. Fortunately, it is easy to extend UNetLab and EVE-NG to support powerful, general-purpose Linux router and server images.

In their default configuration, UNetLab and EVE-NG support Linux nodes running boot-able live CD disk images that offer a graphical user interface accessible via VNC. This is not suitable for emulating Linux routers or servers.

To fix this limitation, we will show you how to build a Linux router image for EVE-NG that boots from a virtual hard disk, can be accessed via Telnet to simplify configuration and management, and that has a persistent file system onto which we can install software and modify configuration files.

Add a custom Linux server image to UNetLab or EVE-NG by following the procedure below:

  1. Install a Linux server on a virtual machine on your host computer
  2. Start the new virtual machine and configure it so it is accessible via Telnet after it is moved into UNetLab or EVE-NG:
    • Install and enable Telnet
    • Add a serial interface
    • Add networking software
    • Stop the virtual machine
  3. Copy the new virtual machine’s disk image to the UNetLab or EVE-NG virtual machine
  4. Convert the disk image to QEMU format
  5. Create a custom UNetLab template file and a new config file
  6. Test the image and template

Alternatively, you may copy the installation disk ISO image to the UNetLab or EVE-NG VM and build the Ubuntu Server virtual machine disk image using QEMU commands. This would replace steps 1 to 4, above.
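For reference, that alternative could look like the sketch below, run directly on the EVE-NG VM. This is my own sketch, not part of the official procedure; the image directory, ISO location, and disk size are assumptions you should adjust for your setup.

```shell
# Hypothetical sketch: build the Ubuntu Server image directly on the EVE-NG VM,
# replacing steps 1 to 4. Paths, ISO name, and disk size are assumptions.
IMG_DIR=/opt/unetlab/addons/qemu/linuxrouter-ubuntu.server.16.10
ISO=/root/ubuntu-16.10-server-amd64.iso

if command -v qemu-img >/dev/null 2>&1 && [ -w /opt/unetlab/addons/qemu ]; then
    mkdir -p "$IMG_DIR"
    # Create an empty QCOW2 disk; EVE-NG expects the file to be named hda.qcow2
    qemu-img create -f qcow2 "$IMG_DIR/hda.qcow2" 8G
    # Boot the installer against the new disk and finish the install over VNC
    qemu-system-x86_64 -m 1024 -hda "$IMG_DIR/hda.qcow2" -cdrom "$ISO" \
        -boot order=d -vnc :1
fi
```

Because the disk is created as QCOW2 from the start, no later format conversion is needed.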

Install a Linux server on a virtual machine

In this case, we will install Ubuntu Server 16.10 in a virtual machine managed by VMware Player. We use VMware Player because we’re already using it to run the EVE-NG VM1 and it should work on any host computer’s operating system.

Download the currently-available Ubuntu Server ISO file — in this case it is ubuntu-16.10-server-amd64.iso — from the Ubuntu Server web site.

Open the VMware Player application and click on the Create a New Virtual Machine button. The New Virtual Machine Wizard window will appear.

In this example, we will use VMware's Easy Install feature and then install OpenSSH manually after we create the VM. To use Easy Install2, click on the Use ISO image radio button and enter the path to the ISO file you downloaded earlier. Next, you will be asked to enter the userid and password you wish to set up on the new VM. Then it will ask you for the name of the new virtual machine. I chose to name it ubuntuserver.

The last step is very important: we must create a single virtual disk file for the new virtual machine. By default, VMware Player will create multiple disk files. In the Disk Size window, be sure to select the Store virtual disk as a single file radio button. Remember, it is not the default option so you must select it yourself.

The wizard will complete the installation without any more user input required. The process should take a few minutes.

Start the new virtual machine and configure it

If you selected the option to Automatically start the virtual machine, it will start automatically. If not, start the virtual machine in VMware Player.

The VMware Player virtual machine console appears with a login prompt. Log in using the user id and password you specified during the installation process.

Install OpenSSH

Since we used VMware’s Easy Install function, OpenSSH is not yet available on the virtual machine.

Install OpenSSH. In the VMware Player console window, enter the following commands:

ubuntu:~$ sudo apt-get update
ubuntu:~$ sudo apt-get install openssh-server

Connect to VM from Terminal window

In the VMware console window, find the IP address assigned to the virtual machine. I used the ifconfig command. In my case, the assigned IP address is 172.16.66.131.

Minimize the VMware console window. We do not want to use it anymore. We will use a Terminal window.

Open a Terminal window and SSH to the VM. In my case, I installed the Ubuntu Server VM with a userid brian and VMware provided it with the IP address 172.16.66.131 so my SSH command looks like the one below. Both these values will be different for you.

t420:~$ ssh brian@172.16.66.131

Install Telnet on the Ubuntu Server VM

We need Telnet to interact with the EVE-NG network emulator, but it is not installed or enabled by default in Ubuntu Server, so we must set it up ourselves.

To install Telnet:

ubuntu:~$ sudo apt-get install xinetd telnetd

To enable Telnet, create a telnet config file in the /etc/xinetd.d directory3:

ubuntu:~$ sudo nano /etc/xinetd.d/telnet

Add the following text to the new /etc/xinetd.d/telnet file:

service telnet
{
    disable         = no
    flags           = REUSE
    socket_type     = stream
    wait            = no
    user            = root
    server          = /usr/sbin/in.telnetd
    log_on_failure  += USERID
}

Save the file and restart the xinetd service.

ubuntu:~$ sudo service xinetd restart
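To confirm that Telnet is actually enabled, you can check that something is now listening on TCP port 23. This check is my addition, not part of the original procedure; it assumes the ss utility, which is standard on Ubuntu.

```shell
# Optional check: after restarting xinetd, port 23 should have a listener.
if command -v ss >/dev/null 2>&1; then
    PORT23_LISTENERS=$(ss -tln 2>/dev/null | grep -c ':23 ' || true)
    echo "listeners on port 23: ${PORT23_LISTENERS}"
fi
```

If the count is zero, recheck the /etc/xinetd.d/telnet file and restart xinetd again.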

Add a serial port on the VM

When not using VNC, UNetLab or EVE-NG connects to nodes using a serial interface. To allow a node to connect with UNetLab or EVE-NG, we must enable the serial port ttyS0 connection on the Ubuntu Server VM4.

Execute the following commands in the Terminal window:

ubuntu:~$ sudo systemctl enable serial-getty@ttyS0.service
ubuntu:~$ sudo systemctl start serial-getty@ttyS0.service
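As a sanity check (my addition; it applies to systemd guests only), you can confirm that the getty on ttyS0 is enabled and running before shutting the VM down:

```shell
# Verify the serial getty on ttyS0 (a no-op if systemctl is unavailable).
SERVICE=serial-getty@ttyS0.service
if command -v systemctl >/dev/null 2>&1; then
    systemctl is-enabled "$SERVICE" || true
    systemctl is-active  "$SERVICE" || true
fi
```

Both commands should print "enabled" and "active", respectively; anything else means the serial console will not work when EVE-NG tries to attach to it via Telnet.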

NOTE: Ubuntu Server 16.10 uses the systemd init system, so the procedure above works in Ubuntu Server 16.10 or any other distribution that uses systemd. The procedure for adding a serial interface may be different if you are using another init system, for example on a different Linux distribution or an older version of Ubuntu Server. Below, I list some links to other serial interface configuration methods for other init systems and Linux distributions:

If you are new to the various initialization systems used in Linux, see this nice summary of the differences between init, upstart, and systemd, or search Google for “comparison of init systems”.

Install networking software

Install the network services we may use in our UNetLab or EVE-NG labs: Quagga and Traceroute.

ubuntu:~$ sudo apt-get install -y quagga quagga-doc traceroute

It is best to install any software you think you may use in this image now, before you install the image in the EVE-NG or UNetLab VM.

Stop the VM

We have completed setting up the new VM. To stop the new VM, execute the following command in the VM console:

ubuntu:~$ sudo shutdown -h now

Copy the new Linux server disk image to the UNetLab or EVE-NG virtual machine

In this example, I am using EVE-NG but all the procedures below work the same in UNetLab.

Start the EVE-NG VM in VMware Player.

Open a new Terminal window and login to the EVE-NG VM using SSH. The EVE-NG VM’s IP address is displayed on the EVE-NG VM’s console window. In this case, it is 172.16.66.120.

t420:~$ ssh root@172.16.66.120

Now you are logged into the EVE-NG VM’s Linux shell. On the EVE-NG VM, create a new directory for the Linux Router image:

eve-ng:~# mkdir /opt/unetlab/addons/qemu/linuxrouter-ubuntu.server.16.10

Remember that the directory name must follow the EVE-NG naming convention. We plan to create a custom template named "linuxrouter", so the directory name must start with the prefix "linuxrouter-". You may choose a different name if you wish, but the template name and the directory name prefix must match according to the naming convention.

Assuming you named the VM you created ubuntuserver, the disk image is stored on your laptop computer in the directory $HOME/vmware/ubuntuserver. To copy the image to the EVE-NG VM, open another Terminal window on your host computer and run the following command in that terminal window:

t420:~$ scp $HOME/vmware/ubuntuserver/ubuntuserver.vmdk root@172.16.66.120:/opt/unetlab/addons/qemu/linuxrouter-ubuntu.server.16.10

Convert the VMware disk image to a QEMU disk image

EVE-NG and UNetLab both require that disk images be in the QCOW2 format and that the disk image file be named hda.qcow2. We uploaded a VMware VMDK formatted disk image. We need to convert it.

To convert the VMDK disk image to a QCOW2 disk image, execute the following commands in the EVE-NG terminal window:

eve-ng:~# cd /opt/unetlab/addons/qemu/linuxrouter-ubuntu.server.16.10
eve-ng:~# /opt/qemu/bin/qemu-img convert -f vmdk -O qcow2 ubuntuserver.vmdk hda.qcow2
eve-ng:~# /opt/unetlab/wrappers/unl_wrapper -a fixpermissions
eve-ng:~# rm ubuntuserver.vmdk

Now we have a file named hda.qcow2 in the image directory named linuxrouter-ubuntu.server.16.10.
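Before moving on, it is worth confirming the conversion succeeded (this check is my addition): qemu-img info should report the file format as qcow2 and a virtual size matching the disk you created in VMware.

```shell
# Sanity-check the converted image on the EVE-NG VM.
IMAGE=/opt/unetlab/addons/qemu/linuxrouter-ubuntu.server.16.10/hda.qcow2
if [ -f "$IMAGE" ]; then
    /opt/qemu/bin/qemu-img info "$IMAGE"
fi
```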

Create new template file

Every node type in EVE-NG has a template file that specifies node startup parameters such as node name, number of interfaces, which hypervisor supports the node, and the startup parameters used by the hypervisor.

When creating a new custom image, it is usually easiest to create a template file by copying an existing template that is similar to the type of node you will create, and then modifying the copy. We’ll create a new template by copying the linux template and naming the copy linuxrouter.

The template files are stored in the EVE-NG directory /opt/unetlab/html/templates. Copy the linux.php template to create a new template file named linuxrouter.php:

eve-ng:~# cd /opt/unetlab/html/templates
eve-ng:~# cp linux.php linuxrouter.php

Next, edit the linuxrouter.php file:

eve-ng:~# vi linuxrouter.php

Make the following changes to the file:

  • Change the ‘name’ value to ‘LinuxRouter’
  • Change the ‘icon’ value to ‘Router.png’
  • Reduce the ‘ram’ value to ‘1024’
  • Increase ‘ethernet’ value to ‘4’ (or higher)
  • Change ‘console’ value to ‘telnet’
  • Change ‘qemu_options’ value as follows:
    • delete ‘-vga std -usbdevice tablet -boot order=dc’
    • add ‘-serial mon:stdio -nographic -boot order=c’

The final linuxrouter.php file should look like:

<?php
$p['type'] = 'qemu';
$p['name'] = 'LinuxRouter';
$p['icon'] = 'Router.png';
$p['cpu'] = 1;
$p['ram'] = 1024; 
$p['ethernet'] = 4;
$p['console'] = 'telnet';
$p['qemu_arch'] = 'x86_64';
$p['qemu_nic'] = 'virtio-net-pci';
$p['qemu_options'] = '-machine type=pc-1.0,accel=kvm -serial mon:stdio -nographic -boot order=c';
?> 

Create the config.php file

We need to add the template to the $node_templates array in the EVE-NG initialization file, /opt/unetlab/html/includes/init.php, so it will appear in the EVE-NG user interface. However, the init.php file is overwritten whenever we update EVE-NG.

According to the comments in the init.php file, we can create a file named config.php in the same directory and add our own configuration, which will be included when init.php runs and will override the associated configuration in init.php. The config.php file will not get overwritten when you update the EVE-NG VM.

The template names are listed in the $node_templates array in the init.php file. Copy this array to the config.php file and then add the new linuxrouter template name to the array5.

In the EVE-NG terminal window, list the init.php file:

eve-ng:~# cd /opt/unetlab/html/includes/
eve-ng:~# cat init.php

Scan through the text output for the block of text starting with $node_templates = Array( and copy everything up to and including the closing parenthesis and semicolon, );.

Next, create and edit the file config.php:

eve-ng:~# vi config.php

If config.php did not previously exist and you are editing a blank file, add the PHP descriptors <?php and ?> to make the new config.php file a PHP file.

Paste the copied block of text into the file, then add the following line to the array:

'linuxrouter'   =>  'Linux Router'

If adding this line somewhere inside the array, add a comma , to the end of the line. If adding this line to the end of the array, add a comma , to the end of the previous line.

You may comment out lines related to templates you will never use. This makes the EVE-NG or UNetLab user interface a bit easier to use when adding new nodes. In my case, I commented-out most of the lines in the array, except for a few Linux-related templates.

In the version of EVE-NG I used when writing this post, the contents of the config.php file with the added linuxrouter line, and with unused lines commented out, will be:

<?php
  $node_templates = Array(
    //'a10'         =>  'A10 vThunder',
    //'clearpass'       =>  'Aruba ClearPass',
    //'timos'       =>  'Alcatel 7750 SR',
    //'veos'        =>  'Arista vEOS',
    //'barracuda'       =>  'Barraccuda NGIPS',
    //'brocadevadx'     =>  'Brocade vADX',
    //'cpsg'        =>  'CheckPoint Security Gateway VE',
    'docker'        =>  'Docker.io',
    //'acs'         =>  'Cisco ACS',
    //'asa'         =>  'Cisco ASA',
    //'asav'        =>  'Cisco ASAv',
    //'cda'         =>  'Cisco Context Directory Agent',
    //'csr1000v'        =>  'Cisco CSR 1000V',
    //'cips'        =>  'Cisco IPS',
    //'ise'         =>  'Cisco ISE',
    //'c1710'       =>  'Cisco IOS 1710 (Dynamips)',
    //'c3725'       =>  'Cisco IOS 3725 (Dynamips)',
    //'c7200'       =>  'Cisco IOS 7206VXR (Dynamips)',
    //'iol'         =>  'Cisco IOL',
    //'titanium'        =>  'Cisco NX-OSv (Titanium)',
    //'firepower'       =>  'Cisco FirePower',
    //'firepower6'      =>  'Cisco FirePower 6',
    //'ucspe'       =>  'Cisco UCS-PE',
    //'vios'        =>  'Cisco vIOS',
    //'viosl2'      =>  'Cisco vIOS L2',
    //'vnam'        =>  'Cisco vNAM',
    //'vwlc'        =>  'Cisco vWLC',
    //'vwaas'       =>  'Cisco vWAAS',
    //'phoebe'      =>  'Cisco Email Security Appliance (ESA)',
    //'coeus'       =>  'Cisco Web Security Appliance (WSA)',
    //'xrv'         =>  'Cisco XRv',
    //'xrv9k'       =>  'Cisco XRv 9000',
    //'nsvpx'       =>  'Citrix Netscaler',
    //'sonicwall'       =>  'Dell SonicWall',
    'cumulus'       =>  'Cumulus VX',
    //'extremexos'      =>  'ExtremeXOS',
    //'bigip'       =>  'F5 BIG-IP LTM VE',
    //'fortinet'        =>  'Fortinet FortiGate',
    //'radware'     =>  'Radware Alteon',
    //'hpvsr'       =>  'HP VSR1000',
    //'olive'       =>  'Juniper Olive',
    //'vmx'         =>  'Juniper vMX',
    //'vmxvcp'          =>  'Juniper vMX VCP',
    //'vmxvfp'          =>  'Juniper vMX VFP',
    //'vsrx'        =>  'Juniper vSRX',
    //'vsrxng'      =>  'Juniper vSRX NextGen',
    //'vqfxre'      =>  'Juniper vQFX RE',
    //'vqfxpfe'     =>  'Juniper vQFX PFE',
    'linux'         =>  'Linux',
    'mikrotik'      =>  'MikroTik RouterOS',
    'ostinato'      =>  'Ostinato',
    //'paloalto'        =>  'Palo Alto VM-100 Firewall',
    'pfsense'       =>  'pfSense Firewall',
    //'riverbed'        =>  'Riverbed',
    //'sterra'      =>  'S-Terra',
    'vyos'          =>  'VyOS',
    //'esxi'        =>  'VMware ESXi',
    'win'           =>  'Windows',
    'vpcs'          =>  'Virtual PC (VPCS)',
    'linuxrouter'       =>  'Linux Router'
  );
?>

You can see that the new linuxrouter template is listed at the end of the array, or wherever you chose to insert it.

Troubleshooting

If you find that EVE-NG or UNetLab does not work after editing either the init.php or the config.php file, you probably forgot a comma.

Ensure you add a comma at the end of every line in the array except the last line.
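A quick way to catch a missing comma (my suggestion; EVE-NG does not require this step) is PHP's built-in linter, which is already on the EVE-NG VM because the web interface is written in PHP. A clean file prints "No syntax errors detected".

```shell
# Lint the edited files on the EVE-NG VM; a stray or missing comma shows up
# as a parse error with a line number.
for f in /opt/unetlab/html/includes/config.php /opt/unetlab/html/includes/init.php; do
    if command -v php >/dev/null 2>&1 && [ -f "$f" ]; then
        php -l "$f"
    fi
done
```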

Finding new templates after updating EVE-NG

As I stated before, updating EVE-NG may overwrite the init.php file with a new version, while the config.php file is left untouched. This ensures that your changes are not erased after an update. However, because the config.php file overrides the contents of the init.php file, you may miss new templates that an update adds to the new version of the init.php file.

After updating the EVE-NG VM, check the init.php file for any new templates and add them to your config.php file, if appropriate.
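One way to spot such additions (a sketch of my own, not an EVE-NG feature) is to compare the template keys that appear in each file:

```shell
# List template keys that appear in init.php but not in config.php.
INIT=/opt/unetlab/html/includes/init.php
CONF=/opt/unetlab/html/includes/config.php
if [ -f "$INIT" ] && [ -f "$CONF" ]; then
    comm -23 <(grep -o "'[a-z0-9]*'[[:space:]]*=>" "$INIT" | sort -u) \
             <(grep -o "'[a-z0-9]*'[[:space:]]*=>" "$CONF" | sort -u)
fi
```

Note that templates you commented out in config.php still match the pattern, so only genuinely new templates are reported.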

Test the new custom Linux Server image

We created a new template and image for the Linux Router node and modified the EVE-NG configuration files so we can access this new node type in the EVE-NG user interface. Now, let’s test that we can access the new node from the EVE-NG user interface by using Telnet.

Start up the EVE-NG user interface in your browser. Remember, we set up the EVE-NG virtual machine on a Linux host computer in my previous post.

Open a new project and add the Linux Router node.

The first thing we see is that the normally long list of templates has been reduced to a handful. This is because, in addition to adding the Linux Router template in the config.php file, we commented out many templates we don’t need to use right now.

The Linux Router template panel appears. We accept all the default settings as they are and click Save. Now we have a Linux Router node on the EVE-NG network topology canvas.

Start the node and wait a minute. It is running an Ubuntu Server image, which is a full-featured Linux server, so it will take about a minute to boot.

Click on the node in the EVE-NG topology. A Launch Application window will appear. Select UNetLab-X-Integration. Then, a Terminal window will open and you will see it connects to the node. Press the Return key to see the login prompt.

Now we know we can telnet to this node via its serial interface.

When we use this Linux Router image and template for future projects, we can copy configurations from text files and paste them in each node’s terminal window. This is more convenient than connecting to nodes using VNC.

Troubleshooting

If the login prompt does not appear in the Terminal window, then recheck your procedure. You may have a problem with how you configured the serial interface. This was the problem I encountered most often.

You can check the disk image by booting a QEMU virtual machine from it directly on the EVE-NG virtual machine, then accessing it via VNC to diagnose the problem. However, that requires setting up bridges and networking, so I think it is easier to go back to the Linux host computer, start the original image in VMware Player, fix it there, and then copy the fixed image back to the EVE-NG VM.
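If you do want to inspect the image in place, a direct QEMU boot on the EVE-NG VM looks roughly like the sketch below. This is an assumption on my part: the qemu-system binary path is guessed from the /opt/qemu/bin/qemu-img path used earlier, and no bridges are configured, so you get console access only, via a VNC client pointed at port 5901. Since the serial console may be the very thing that is broken, VNC gives you an independent way in.

```shell
# Boot the image directly for inspection; connect a VNC client to port 5901.
IMAGE=/opt/unetlab/addons/qemu/linuxrouter-ubuntu.server.16.10/hda.qcow2
if [ -f "$IMAGE" ]; then
    /opt/qemu/bin/qemu-system-x86_64 -m 1024 -hda "$IMAGE" -vnc :1
fi
```

Remember that booting this way modifies the master image, so only do it when you intend to fix the image itself.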

Regardless of whether you fixed the disk image directly in the EVE-NG VM or copied a new image from your host computer, you need to ensure the test project picks up the new disk image. To do this, delete the Linux Router node you added and add a new Linux Router node to the test project. Otherwise, EVE-NG uses the snapshot of the disk image it created when the node was originally placed in the test project, which is based on the old version of the image that was not working.

Conclusion

We successfully created a Linux router image that can be accessed in EVE-NG via Telnet over a serial interface and installed it in the EVE-NG virtual machine. We created a new template that uses the Linux router image.

The same procedure will also work in UNetLab.


  1. We could also use VirtualBox or QEMU to create the new Ubuntu Server image. VirtualBox or QEMU will allow us to create an image in the QCOW2 format, which means we would not need to convert it after we copy it to the EVE-NG VM. 

  2. If you want more control over the installation process — for example, so you can select OpenSSH as part of the install process — choose the third radio button in this window, I will install the operating system later, which allows you to manually work through all the installation steps. 

  3. Reference: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/3/html/Reference_Guide/s1-tcpwrappers-xinetd-config.html 

  4. Reference: https://help.ubuntu.com/community/KVM/Access 

  5. Reference: http://noshut.ru/2015/09/adding-spirent-virtual-test-center-traffic-generator-to-unetlab/ 

Netdev 2.1 conference report


I attended the Netdev 2.1 Conference in Montreal from April 6 to 8. Netdev is a community-driven conference mainly for Linux networking developers and developers whose applications rely on code in the Linux kernel networking subsystem. It focuses very tightly on Linux kernel networking and on how packets are handled through the Linux kernel as they pass between network interfaces and applications running in user space.

In this post, I write about the three-day conference and I offer some commentary on the talks and workshops I attended. I grouped my comments in categories based on my interpretation of each talk’s primary topic. The actual order in which these topics were presented is available in the Netdev 2.1 schedule. The slides from the talks, workshops, and keynotes are posted under each session on the Netdev web site. Videos of the talks are available on the netdevconf Youtube channel.

The Netdev conference is the second part of a two-part conference. The first part was a private, invitation-only meeting called Netconf held in Toronto and the second part is a public conference called Netdev held in Montreal. I attended and presented a talk at Netdev in Montreal and wrote this report about that conference. You will find a detailed report on the Netconf conference held in Toronto at Anarcat’s blog.

Keynotes

Each day at the Netdev conference featured a keynote by a prominent member of the Linux networking community. Two of the keynotes covered higher-level views of Linux in the network in the enterprise, cloud, and the Internet of things. The other keynote covered details of the new eXpress Data Path (XDP) feature in the Linux kernel.

Day 1 Keynote: Linux Networking for the Enterprise

Shrijeet Mukherjee from Cumulus Networks presented a keynote about the state of Linux networking in enterprise networks. Shrijeet offered his view that Linux needs to be everywhere in the network, from the smallest host to the largest server, from the simplest switch to the most capable router. In an all-Linux environment, each network element may perform different functions but all network elements can be managed and operated using the same set of Linux tools.

Shrijeet then discussed new Linux networking projects released over the past three years that are making Linux more relevant to enterprise networks. For me, the two most interesting features were Free Range Routing (FRR) and the Network Command-Line Utility (NCLU).

In April 2017, a consortium of Linux networking companies released the Free Range Routing (FRR) project, a fork of the Quagga routing protocol suite. Shrijeet said that the Quagga development has gone stale. I noticed that even the Open Source Routing foundation, previously a strong supporter of Quagga, has moved to support FRR, instead. Hopefully, we’ll see some advancements in Linux routing protocol support coming from this project.

Shrijeet presented the Network Command-Line Utility (NCLU), developed by Cumulus Networks to provide a system-level command-line interface that allows all Linux networking systems to be configured from a single configuration file. It is a Python-based daemon that sits on top of all existing userspace Linux tools. I hope Cumulus will release it as open source.

Shrijeet also covered Linux networking tools used in the enterprise ecosystem: ONIE, which allows for remote booting of Linux-powered network switches; ifupdown2, a tool that manages thousands of interfaces in datacenters; ethtool, which reports detailed information about network interfaces; and SwitchDev, which provides a standardized programming interface to various hardware switches.

Shrijeet argued that developers should support the native networking functions provided by the Linux kernel and avoid implementing key network functions on other applications like Open vSwitch. Innovation in the Linux kernel benefits all users, while innovation in particular applications benefits only those who use those applications and complicates the integration of networking in a network that uses multiple networking applications.

Day 2 Keynote: XDP Mythbusters

David S. Miller, the Linux kernel networking maintainer, presented a keynote on eBPF and XDP titled XDP Mythbusters. A video of his keynote is available on the netdevconf YouTube channel. I discuss more of the technical details of eBPF and XDP in the Talks and Workshops section, below. There were many other talks about eBPF and XDP at this conference.

David’s keynote provided an overview of eBPF and XDP. He covered some of the history of the Linux kernel so we could understand why these features were developed and why they are used the way they are. XDP is used to prevent DDoS attacks from overloading the system’s CPU, to perform load balancing, to collect statistics about packets, to perform sophisticated traffic sampling, and even to perform high-frequency trading (although the folks who do that won’t tell the Linux kernel development team exactly how they use XDP).

Then, David debunked some of the myths about XDP. He emphasized that XDP is safe to use even though it allows users to run their own code in the kernel because XDP has a built-in verifier to ensure code is safe. He said that XDP will be as flexible as user-space networking implementations like DPDK — it seems to me that the Linux kernel developers feel a bit of rivalry with alternative networking stacks like DPDK — and he addressed some of the overlap issues between XDP, TC, and netfilter.

David compared XDP to Arduino, the popular open-source hardware platform. Both systems are similar in that developers build a program on some other system, compile it, and then load the resulting bytecode onto the target system — the XDP subsystem in this case — where it runs.

Day 3 Keynote: Linux and the Network

Jesse Brandeburg from Intel presented the Day 3 keynote about Linux and the network. His main theme was “Everything is on the network and the network runs Linux”. He argued that the Linux network stack is part of most networks running today. For example, Android smart-phones use the Linux kernel and, since there are billions of smart-phones, they create a large amount of traffic on the network. Linux also runs on wireless base stations in LTE networks, on edge routers, core routers, and on data center switches and servers.

Jesse pointed out that the Linux kernel is the best resource for implementing secure networks because the Linux kernel is actively supported by a community of individuals and companies who are regularly improving it and making it more secure. He argued that roll-your-own networking code, such as an alternative networking stack implemented in user space, is harder to secure unless you are the best in the world.

He next discussed how the Internet of Things (IoT) will drive innovation in networking to an extreme degree. Network endpoints could range from self-driving cars to sensors buried under a road, which have very different networking requirements and constraints, such as location and access to power. The Linux networking stack must innovate to support both the high-bandwidth, low-latency network required by self-driving cars and the very low-bandwidth, high-latency, low-power networking available to sensors embedded in roadways.

Jesse finished his keynote by describing the directions Intel is taking in Linux kernel networking development. Intel is actively promoting Switchdev as a standard because it provides a similar interface for real and virtual hardware, simplifying development. Intel is also interested in hardware offloads that support higher performance in Linux networking.

eBPF and XDP

The Netdev 2.1 conference featured a lot of talks, and even a keynote, on eBPF and XDP. Both topics also frequently appeared in other presentations related to Linux networking performance, filtering, and traffic control. Netdev conference presenters discussed eBPF and XDP so frequently that I was thinking the Netdev conference should be renamed the “XDP conference”. XDP will be a major factor in Linux networking in the near future, especially as Linux becomes the standard for networking equipment in datacenter networks supporting services based on NFV and SDN technology.

The Extended Berkeley Packet Filter (eBPF) is a special-purpose virtual machine in the Linux kernel that was originally developed to support applications that could quickly filter packets out of a stream1. Over time, it has been adopted as a “universal” tool for loading and running compiled bytecode in the Linux kernel. High-performance networking users want to be able to load BPF programs to do fast packet routing, rewrite packet contents at ingress, encapsulate and decapsulate packets, reassemble large packets, etc.2
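The classic BPF machine that eBPF extends is still directly visible from user space: any unprivileged process may attach a classic BPF filter to one of its own sockets with the SO_ATTACH_FILTER socket option. The sketch below is only an illustration of the load-bytecode-into-the-kernel model described above (Linux-only, not XDP itself); it attaches a trivial “accept everything” filter, one BPF_RET instruction, to a UDP socket:

```python
import ctypes
import socket
import struct

# One classic BPF instruction: BPF_RET | BPF_K with k = 0xFFFFFFFF,
# meaning "return (accept) up to 0xFFFFFFFF bytes of the packet".
BPF_RET, BPF_K = 0x06, 0x00
insn = struct.pack("HBBI", BPF_RET | BPF_K, 0, 0, 0xFFFFFFFF)

# struct sock_fprog { unsigned short len; struct sock_filter *filter; }
buf = ctypes.create_string_buffer(insn)
fprog = struct.pack("HL", 1, ctypes.addressof(buf))

SO_ATTACH_FILTER = 26  # from <asm-generic/socket.h> on Linux

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, fprog)

# The filter now runs inside the kernel on every packet this socket
# receives; an accept-all filter passes our loopback datagram through.
s.bind(("127.0.0.1", 0))
s.sendto(b"hello", s.getsockname())
data = s.recv(16)
print(data)
```

A filter that returned 0 instead would silently drop every packet before it reached the application. Real eBPF and XDP programs are compiled (usually from restricted C) to a richer instruction set and loaded through the bpf() system call, but the workflow of loading verified bytecode into the kernel is the same.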

The eXpress Data Path (XDP) is built around low-level BPF packet processing3. XDP is focused primarily on improving Linux networking performance and so is the Linux kernel’s answer to the Data Plane Development Kit (DPDK), a project championed by Intel that provides high-performance networking features in userspace, outside the Linux kernel. XDP is still very new, with a lot of new development coming in the future.

As I mentioned in the Keynotes section above, David S. Miller presented a keynote on eBPF and XDP titled XDP Mythbusters, in which he offered the history of eBPF and XDP, explained what they are used for, and debunked some of the myths about XDP.

To me, the most interesting XDP presentation was a one-hour tutorial showing how to build a simple XDP program that performs DDoS blacklisting. In this presentation, XDP for the rest of us, Andy Gospodarek from Broadcom and Jesper Dangaard Brouer from Red Hat shared their own experiences getting started with eBPF and XDP and writing their own XDP application. Their presentation provided an excellent overview of eBPF and XDP and offered links to many resources. I recommend downloading their slides and viewing their presentation on YouTube. Also, see Julia Evans’ blog post summarizing this presentation.

Gilberto Bertin from Cloudflare gave a talk about how they use eBPF and XDP in their DDoS mitigation service to detect and block attacks on their customers. For those who wish to explore eBPF, Cloudflare open-sourced a suite of tools called bpftools.

A team from Facebook presented how they use eBPF and XDP to stop DDoS attacks. Huapeng Zhou, Doug Porter, Ryan Tierney, and Nikita Shirokov presented a framework for implementing BPF policers that drop packets at line rate, at the earliest stage in the networking stack, before memory is allocated to process the packets.

At some point in these presentations, someone (I cannot remember who) showed a slide that referred to a list of Linux enhanced BPF (eBPF) tracing tools compiled by Brendan Gregg. I thought this list was very useful, so I wanted to be sure to reference it in this commentary.

Also, in the Day 3 keynote, Jesse Brandeburg mentioned that a new xdp-newbies mailing list had been created for people new to XDP.

Congestion control in the Linux Kernel

When I speak to Linux networking experts, I get the impression that many of them think of the network as an end-to-end connection between two Linux nodes, with a black box in between. The black box has properties such as bit rate, delay, and bit error rate, and may contain some really annoying things, like NAT, that need to be accounted for in an application. How the black box works is not relevant to Linux kernel networking developers. Linux networking experts speak of the end-to-end principle and focus on end-to-end topics like TCP and UDP performance instead of, for example, routing protocols.

Linux networking developers are primarily concerned with how to improve the end-to-end performance of applications utilizing the Linux networking stack. One important function of the Linux kernel networking subsystem that can have a major impact on an application’s performance over the network is congestion control algorithms. Linux developers create new congestion control algorithms or improve existing ones.

Linux networking users are similarly interested in congestion control algorithms, but their focus is usually on qualifying the performance of available algorithms for their specific use-cases. So, they are interested in characterizing the performance of existing TCP congestion control mechanisms such as CUBIC and BBR.
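On Linux, the congestion control algorithm can be chosen per socket with the TCP_CONGESTION socket option, which is how test tools switch between algorithms like CUBIC and BBR. A minimal sketch (Linux-only; BBR is selectable only if the tcp_bbr module is loaded and the algorithm appears in net.ipv4.tcp_allowed_congestion_control):

```python
import socket

# TCP_CONGESTION is 13 on Linux; recent Python versions expose it.
TCP_CONGESTION = getattr(socket, "TCP_CONGESTION", 13)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Read the algorithm this socket would use (the system default,
# typically CUBIC on stock kernels). The kernel NUL-pads the name.
algo = s.getsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, 16)
algo = algo.rstrip(b"\x00").decode()
print(algo)

# Selecting an algorithm explicitly, e.g.
#   s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, b"bbr")
# fails with ENOENT if that algorithm is not available. Re-setting
# the current algorithm always succeeds:
s.setsockopt(socket.IPPROTO_TCP, TCP_CONGESTION, algo.encode())
```

Available algorithms are listed in /proc/sys/net/ipv4/tcp_available_congestion_control; unprivileged processes may only select algorithms the administrator has allowed.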

At the Netdev 2.1 conference, Jae Won Chung from Verizon presented the results of testing his team performed to evaluate TCP congestion control algorithms (CCAs) in 4G and 5G wireless networks. They drove a car 100 km along a highway from New Jersey to Massachusetts and measured application performance using CCAs such as CUBIC (the default in Linux) and BBR. His presentation offered detailed results from the testing. Jae concluded that bufferbloat inside eNodeBs adds to round-trip time (RTT) and that BBR offers significantly better performance than CUBIC in that environment. Some of the discussion among attendees after Jae’s talk hinted at the challenges facing Linux networking in the mobile environment. As we move to higher-performance cellular systems, cells get smaller, increasing the number of handoffs between base stations, especially when driving on a highway. This creates networking challenges that the Linux networking community needs to address, and TCP may eventually be replaced with something else for devices in mobile networks.

Hajime Tazaki from IIJ discussed a system he created, using the Linux Kernel Library (LKL), to test kernel code by running it in user space, so that kernel innovations like TCP improvements can be tried without virtual machines. One open question was whether this approach would skew performance measurements. While testing TCP BBR, Hajime found a big difference in performance when using LKL instead of the native Linux kernel in a VM. He discussed the problems he found and was able to increase the performance of BBR in the LKL. His presentation demonstrated how the LKL can make development and testing of new kernel innovations easier.

Alexander Krizhanovsky, founder of Tempesta Technologies, presented a new tool to enable faster processing of HTTP traffic to support DDoS response and filtering. Alexander argued that, to better mitigate TLS handshake DDoS attacks, HTTPS processing should be performed in the Linux kernel. This fits the argument made in some of the keynotes that networking innovations should be implemented in the kernel, not in user space.

Network Virtualization

Network virtualization enabled the modern data center. Virtual machines and containers need to be able to send data to each other and to users, so virtual machines need virtual network interfaces.

You have probably seen virtual network interfaces when setting up VMs on your own computer. Virtual machine managers like VirtualBox, VMware, and KVM allow you to configure different types of virtual network interface cards (NICs) on your virtual machine, such as e1000 or VIRTIO. There are also more types of virtual NICs that support VMs and containers in data centers. The different ways these virtual NICs handle complex new problems, like migrating VMs between hosts during live operations, make a very interesting topic that I had never thought about before. Additionally, Linux users may apply virtual networking technology to emulate complex networking scenarios.

At the Netdev 2.1 conference, a team from Intel presented the history of Network Virtualization and its future in Software and Hardware. Anjali Singhai Jain, Alexander H Duyck, Parthasarathy Sarangam, and Nrupal Jani offered a very interesting view of the different virtual interfaces (e1000, VIRTIO, etc.) available to virtual machines and containers running on a Linux host. They discussed the current state-of-the-art (SR-IOV) and future projects (VFIO mediated devices and Composable Virtual Functions) that Intel is exploring to further improve the performance and flexibility of networking between VMs and/or containers running on the same host and between VMs and/or containers running on different hosts in a data center.

Stephen Hemminger from Microsoft presented a very informative talk about network device names. The Linux kernel assigns device names when the devices are configured. Many Linux users do not know how the kernel assigns device names. For example, the init system you use — such as systemd, upstart, init, etc. — and the system bus your host computer uses determine the interface names assigned to devices. This is easily seen when working in virtual networks with different hypervisors. For example: When I run Ubuntu 16.04 in a VirtualBox VM, the network interfaces have names like enp0s8, and if I run the same Ubuntu 16.04 image in VMware, network interfaces have names like eth0. The device names are different because these two hypervisors emulate different system buses. I recommend this talk to anyone who manages Linux systems that have more than one interface (like Linux-powered Ethernet switches or routers) and anyone who builds virtual networks using Linux virtual machines.
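The kernel-assigned names are easy to inspect programmatically. For example, Python’s socket.if_nameindex() returns the kernel’s interface index/name pairs, so the same small script prints enp0s8-style names under one hypervisor and eth0-style names under another:

```python
import socket

# Ask the kernel for its table of network interfaces. On a VirtualBox
# guest this might list "lo" and "enp0s8"; on a VMware guest, "lo" and
# "eth0" -- same OS image, different emulated system bus.
for index, name in socket.if_nameindex():
    print(index, name)

names = [name for _, name in socket.if_nameindex()]
```

socket.if_nameindex() is a thin wrapper over the if_nameindex(3) C function and works on Linux without any special privileges.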

Alexander Aring from Pengutronix presented his 6LoWPAN mesh virtual network emulator. This was very interesting to me because it is a new network emulation platform that emulates low-powered devices connecting to each other over 6LoWPAN mesh networking technology. This new emulator will enable researchers to investigate the behavior of Internet of Things devices in a virtual environment. Alexander demonstrated a network emulation in which IoT nodes running RIOT-OS connect to each other via a fake PHY using the FakeLB kernel driver.

I was given the opportunity to present a talk about investigating network behavior using Linux network emulators. It covered an overview of network emulators I have presented on my blog.

Networking performance improvement

Presenters and attendees at the Netdev 2.1 conference were very concerned with improving the performance of the Linux kernel networking subsystem. In some cases, the topics of virtual network interfaces (in the section above) and networking performance improvement are closely related.

In these talks, the presenters discussed improvements, new Linux kernel features, and new memory access technologies. Some presenters discussed improving network performance by offloading packet processing and forwarding to other hardware in the system, allowing the network to access system memory directly. Other presenters focused on methods to improve the speed at which new network connections may be created or modified. And, others presented experimental results comparing the performance of different networking functions in the Linux kernel.

Eric Dumazet from Google presented a talk about a new method for scheduling networking workloads on the system CPU, called busy polling. Eric presented a very technical deep dive into how the Linux kernel receives a packet from the network interface and sends it to an application. Performance varies according to interrupt mechanism, host scheduling, and other factors. He proposed a way to speed up throughput, reduce latency and jitter, and achieve a better balance between networking performance and overall system performance. Busy polling sockets dedicate one CPU in a multi-core, multi-threaded system to handle network I/O, which reduces interrupts to all CPUs in the system, improving network performance where low-latency and low jitter are required.
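Busy polling can also be requested per socket through the SO_BUSY_POLL socket option, which sets the approximate number of microseconds to busy-poll on a blocking receive. A small Linux-only sketch; note that setting a value larger than the net.core.busy_read sysctl requires CAP_NET_ADMIN, so the example only reads the current setting:

```python
import socket

SO_BUSY_POLL = 46  # from <asm-generic/socket.h> on Linux

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Read the socket's busy-poll budget in microseconds. It defaults to
# the net.core.busy_read sysctl, which is 0 (disabled) on most systems.
usecs = s.getsockopt(socket.SOL_SOCKET, SO_BUSY_POLL)
print(usecs)

# Enabling busy polling for this socket would look like:
#   s.setsockopt(socket.SOL_SOCKET, SO_BUSY_POLL, 50)
# which needs CAP_NET_ADMIN when 50 exceeds net.core.busy_read.
```

This is the per-socket knob behind the mechanism Eric described; system-wide defaults come from the net.core.busy_read and net.core.busy_poll sysctls.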

Willem de Bruijn from Google presented an extension to one of the Linux kernel’s copy avoidance systems. Linux copy avoidance mechanisms improve system efficiency by not copying network packets multiple times between different locations in memory while processing the packet. He showed a performance improvement of up to 90% in some cases.

Alexander Duyck from Red Hat chaired a workshop on network performance in the Linux kernel. The workshop consisted of a series of short talks from different presenters. Alexander presented some tips for efficiently mapping memory in the Linux kernel. Jesper Brouer discussed the impact of memory bottlenecks on networking performance. John Fastabend and Bjorn Topel presented improvements to the AF_PACKET socket. A team from Mellanox presented a method to improve throughput by batching requests to network drivers.

Sowmini Varadhan and Tushar Dave from Oracle presented benchmark test results for relational database management systems (RDBMS). They discussed how improvements in the Linux kernel improve performance in a database system.

Jon Maloy from Ericsson presented a new neighbor monitoring algorithm added to the Linux kernel to support inter-process communication in Linux clusters. He showed that the new algorithm scales much better than previous methods, which is important in high-performance computing clusters consisting of hundreds of nodes.

Arthur Davis and Tom Distler from NetApp presented a new network configuration daemon for a storage network that will increase the reliability of data center networks. They said that they intend to release this software as open source in the future.

Routing and Switching

Even though much of the Netdev conference is focused on the Linux kernel, a number of talks addressed higher-level topics related to routing and switching in the network. As a non-programmer, I found these topics especially interesting. I appreciated the opportunity to learn about how Linux supports the Internet of Things (IoT), routing in low-powered wireless networks, and network testing.

Andrew Lunn, Florian Fainelli from Broadcom, and Vivien Didelot from Savoir-faire Linux presented a refreshed approach to an older Linux technology, the distributed switch architecture. This feature was added to the Linux kernel about 10 years ago and languished, underused, for years until 2014, when developers found new uses for it and actively started improving it. Now, it is supported by a variety of commercially-available hardware switches and it can be found running on a variety of network equipment running Linux, from home and office routers to switches used in the transport industry. Distributed switch architecture (DSA) allows a CPU to manage a set of hardware switches. It seems DSA is an alternative to Switchdev.

Stefan Schmidt from Samsung chaired a workshop on IoT-related routing protocols. Stefan, Alexander Aring from Pengutronix, and Michael Richardson from Sandelman Software provided an overview of the various data transfer and routing challenges faced by networking developers as they create new applications for the Internet of Things. The main focus was on establishing common standards for IoT networking to improve the current situation, where there are too many vendor-specific solutions. They discussed protocols for routing and data transfer in low-power, lossy networks, such as 6LoWPAN (IPv6 over low-power wireless personal area networks), RPL (pronounced “ripple”), and an effort to re-start development of Mesh Link Establishment (MLE).

Tom Herbert from Quantonium presented an overview of the issues related to real-time networking in the Internet of Things. He got a round of applause when he started by announcing his presentation was “not about XDP”. He discussed the use-cases for real-time networking in the IoT and pointed out the solutions enabled by, and challenges caused by, this new technology. For example, using inputs from a combination of sensors and cameras to identify a specific mobile phone user in a crowded public space, or providing real-time commands to fast-moving autonomous vehicles to avoid collisions. He also addressed the issues of security and spoofing in the IoT. This was a very interesting talk. I recommend viewing the video to get the full impact of the presentation.

Joe Stringer from VMware presented a talk about how Open vSwitch is implemented in the Linux kernel. He pointed out that Open vSwitch has a user-space controller and a kernel-based flow switch, and that other controllers can interact with the kernel-based switch; he gave Weaveworks and MidoNet as examples. He covered Linux commands that interact with the Open vSwitch module in the kernel, such as ovs-dpctl and conntrack-tools. He also covered Open vSwitch kernel improvements such as conntrack and packet recirculation.

Lawrence Brakmo from Facebook presented a new tool for testing networks, the NEtwork TESting TOolkit (Netesto). It is a set of tools that run on hosts in a network and collect and display network performance statistics. Lawrence provided an example of using Netesto to evaluate the performance of TCP congestion control algorithms. Facebook has released the code as an open-source project. It looks like this would be an interesting application to evaluate in a network emulator.

Filtering and traffic control

Netfilter and TC have been integrated with the Linux kernel for a long time. Both offer a lot of functionality that most Linux users do not know about. The Netdev 2.1 conference offered sessions covering the technical details of filtering and traffic control. In addition, they discussed the new nf_tables function, which is intended to replace the ip_tables firewall in Linux.

Jamal Hadi Salim from Mojatatu chaired a traffic control workshop covering netfilter, tc offload to hardware, performance issues, new features, and testing. Unfortunately, I had to skip this workshop to get some other business done, so I can’t say much about it. The conference organizers have posted a video of the traffic control workshop. The participants were Jiri Pirko, Eran Dahan, Rony Efraim, Kiran Patil, Roman Mashak, Lior Narkis, Madalin-Cristian Bucur, and Lucas Bates.

Florian Westphal presented a discussion of tools that support the conntrack feature in netfilter.

Pablo Neira Ayuso, maintainer of the netfilter project, chaired a netfilter workshop. He presented an in-depth overview of netfilter and nf_tables. Florian Westphal provided an overview of packet steering using nf_tables.

Arkadi Sharshevsky from Mellanox presented some new debugging functions to support troubleshooting and argued for a vendor-neutral approach to hardware abstraction.

Conclusion

The Netdev 2.1 conference was a very positive experience for me even though I am not a developer. I was, at first, a bit intimidated by the list of very technical topics offered at the conference. But, even though the last C code I wrote was over 20 years ago, I found almost all of the talks and workshops offered at Netdev 2.1 — even the ones focused on development topics that delved deep into the Linux kernel — provided me with something useful to take away.

I realized that a lot of the work I’ve been doing on open-source network emulators and simulators happens above the kernel, so it is not directly relevant to kernel development. However, network emulators may, depending on how they are implemented, use different features of the kernel.

This conference inspired me to consider some next steps in my research (to be prioritized along with everything else). Some points I will consider for future investigation are:

  • Evaluate whether XDP can be emulated in a VM or in a container.
  • Create a network emulation using only Linux commands, without using user space programs like network emulators.
  • Evaluate how each of the network emulators I write about relates to the Linux kernel networking subsystem. Highlight which ones are more appropriate for testing Linux kernel innovations like XDP or filtering, and which ones are better for user space innovations like routing protocols.

  1. From: Prototype Kernel web site, April 2017 http://prototype-kernel.readthedocs.io/en/latest/bpf/index.html#introduction 

  2. From: Extending extended BPF by Jonathan Corbet, July 2014 https://lwn.net/Articles/603983/ 

  3. From: Early packet drop — and more — with BPF by Jonathan Corbet, April 2016 https://lwn.net/Articles/682538/ 

Install the CORE Network Emulator from source code


To install the CORE network emulator in recently released Linux distributions, including Ubuntu 16.04 and later, I recommend that you install it from the CORE Github source code repository.

The Debian and Ubuntu maintainers will remove CORE packages from their repositories in the near future so we cannot install CORE using a package manager, anymore. We also cannot use the packages available on the CORE web site until a new version of CORE is released, because newer Linux distributions may break some of the functionality in the version of CORE packages available there. For example, CORE fails to start Quagga routing daemons in newer Linux distributions. The issue is fixed in the latest version of the CORE source code available on Github.

The CORE source code is in two places: on the CORE web site, and on Github. It’s not completely clear which source code repository we should use to build CORE from. I asked the CORE team about this and it seems that both are valid, but are not kept 100% in sync with each other. Since a recent fix I needed was on the CORE Github repository, but not in the CORE web site nightly snapshots source code folder, I will use the CORE GitHub repository.

In this post, I provide a detailed procedure to install CORE from the source code on Github, and to set up your system to run network experiments using the CORE network emulator.

NOTE: This post is a major update to an old post. The original version of this post was written in 2014. Since then, two new versions of CORE were released and the project source code moved to Github. So, I refreshed this post and moved it to the top of my blog’s timeline (May 11, 2017).

Install CORE from Github

The latest version of CORE is available on Github. To install CORE, first install prerequisite packages that allow you to build the CORE system.

$ sudo apt-get update
$ sudo apt-get install git
$ sudo apt-get install bash bridge-utils ebtables \
  iproute libev-dev python tcl8.5 tk8.5 libtk-img \
  autoconf automake gcc make python-dev \
  libreadline-dev pkg-config imagemagick help2man

Then, clone the CORE source code from Github and run the install scripts.

$ cd
$ git clone https://github.com/coreemu/core.git
$ cd core
$ ./bootstrap.sh
$ ./configure
$ make
$ sudo make install

Then, restart the system. For some reason, the core-daemon service will not start unless you first restart the system.

To update CORE in the future

To get the latest patches and upgrade CORE, pull the latest version of CORE from Github and run the install scripts again.

$ cd
$ cd core
$ git pull
$ ./bootstrap.sh
$ ./configure
$ make
$ sudo make install        

Install network software

To perform experiments, we need to install the network services that may be run on the containers that emulate network nodes in CORE.

The following software supports CORE experiments in Ubuntu Linux. For other Linux distributions, check the prerequisite software specified in the CORE installation documentation.

$ sudo apt-get install quagga quagga-doc \
  openssh-server isc-dhcp-server isc-dhcp-client \
  vsftpd apache2 tcpdump radvd at ucarp openvpn \
  ipsec-tools racoon traceroute mgen wireshark \
  iperf3 tshark snmpd snmptrapd openssh-client

Also, set up Wireshark so normal users can capture data.

$ sudo setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
$ sudo adduser $USER wireshark

Then log out and log in again to activate these changes.

Set up Quagga

We’ll do our first experiments using Quagga with OSPF so set up the Quagga daemon config files:

$ sudo touch /etc/quagga/zebra.conf
$ sudo touch /etc/quagga/ospfd.conf
$ sudo touch /etc/quagga/ospf6d.conf
$ sudo touch /etc/quagga/ripd.conf
$ sudo touch /etc/quagga/ripngd.conf
$ sudo touch /etc/quagga/isisd.conf
$ sudo touch /etc/quagga/pimd.conf
$ sudo touch /etc/quagga/vtysh.conf                        
$ sudo chown quagga.quaggavty /etc/quagga/*.conf
$ sudo chmod 666 /etc/quagga/*.conf

Edit the Quagga daemons file.

$ sudo nano /etc/quagga/daemons  

The new file should look like the listing below:

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no
babeld=no

Set up environment variables so we avoid the Quagga vtysh END problem in Ubuntu Linux.

$ sudo bash -c 'echo "export VTYSH_PAGER=more" >>/etc/bash.bashrc'
$ sudo bash -c 'echo "VTYSH_PAGER=more" >>/etc/environment'

Run CORE

To test that the CORE Network Emulator is working, start the CORE daemon and the GUI.

First, start the CORE daemon:

$ sudo service core-daemon start

You may encounter an error at this point if you did not restart after you installed CORE. If you see an error telling you that the file core-daemon.service does not exist, restart your system. After restarting, try starting the core-daemon service again.

Then, run the CORE GUI

$ core-gui

This launches the CORE GUI. Note that you do not run the GUI as root.

Support

If you have questions, comments, or trouble, please use the CORE mailing lists:

Users should use the core-users mailing list.

Developers should use the core-dev mailing list, but should also send bug reports to the GitHub issues list.

Conclusion

We successfully installed the CORE Network Emulator from source code using a procedure that should work in most Linux distributions.

Set up a dedicated virtualization server on Packet.net


Packet is a hardware-as-a-service vendor that provides dedicated servers on demand at very low cost. For me and my readers, Packet offers a solution to the problem of using cloud services to run complex network emulation scenarios that require hardware-level support for virtualization. Packet users may access powerful servers that empower them to perform activities they could not run on a normal personal computer.

In this post, I will describe the procedure to set up an on-demand bare metal server and to create and maintain persistent data storage for applications. I will describe a generic procedure that can be applied to any application and that works for users who access Packet services from a laptop computer running any of the common operating systems: Windows, Mac, and Linux. In a future post, I will describe how I run network emulation scenarios on a Packet server.

Table of Contents

  1. Packet.net
    1. Controlling costs when using bare metal servers
    2. Create a Packet account and Login
    3. Create a project
  2. Generate SSH Keys
    1. Windows
    2. Mac
    3. Linux
    4. Copy public key to Packet.net
  3. Deploy a Server
  4. SSH Server on local machine
    1. Windows
    2. Mac
    3. Linux
  5. Set up the remote server
    1. Test X11 forwarding
  6. Create block storage
    1. Create a volume in your Packet portal
    2. Route the Volume to the Server
    3. Run the Attach Scripts
    4. Partition the Block Device
    5. Build the file system
    6. Mount the block storage file system
    7. Set up groups and group permissions
  7. Create user accounts
  8. Install applications and load files into block storage
  9. Shutting down
    1. Detach the block storage volume
    2. Delete Server
  10. Starting up again
    1. Deploy a server
    2. Configure the new server and storage
    3. Run startup script
    4. Conclusion

Packet.net

Packet rents dedicated, bare metal servers by the hour. As far as I can tell, Packet offers the best value in dedicated servers compared to other major providers like Amazon AWS. For example, when this blog post was written, Packet offered a high-powered server — which they call a Type 1 server — with a 4-core Intel Xeon processor running at 3.4 GHz, 32 GB of RAM, a 120 GB SSD, and a 2 Gbps network connection for US$ 0.40 per hour.

Controlling costs when using bare metal servers

Packet charges users for servers in any state other than deleted. If you leave a Packet server running while you are not using it, you are still paying for it and the costs will add up. Packet charges forty cents per hour for its Type 1 server, so if you let that server run continuously it will cost you about $292 per month.
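That monthly figure is easy to sanity-check; a quick sketch, assuming roughly 730 hours in an average month:

```shell
# 40 cents per hour times ~730 hours per month
awk 'BEGIN { printf "USD %.2f\n", 0.40 * 730 }'
# prints: USD 292.00
```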

To save money, I will delete a server when I am not using it. However, I cannot save the server’s system state so when I delete the server I will lose all software I installed and all configurations I changed. I will also lose all the data I saved from my network emulation exercises. I need to be able to save data to persistent storage.

My solution involves using Packet’s low-cost block storage service to save critical data, configuration files, and startup scripts that I will use to rebuild a server when I want to restart a network emulation scenario. For my purposes, twenty Gigabytes of block storage is sufficient and twenty Gigabytes costs only one dollar and fifty cents per month. After that, I pay only forty cents per hour for the server time I use. I use the server only when I am actively running network emulation scenarios. As soon as I am done, I delete the server again, after saving all my results to block storage.

Create a Packet account and Login

To get started, you must create an account on the Packet.net web site and log in.

Packet web site login page

Click on the Signup button on the Packet.net web site and enter the requested information. Follow the instructions provided by the web site.

Create a project

After logging in for the first time, create a project. Projects allow you to group servers and other resources together. Most importantly, projects manage the billing information for the resources used in the project. You can have different credit cards for different projects. You can invite other Packet.net users to use the same project, allowing groups to collaborate on the same project while consolidating billing to one account.

For an individual researcher like me, creating a project is just the next administrative step I need to take to work with Packet servers. I don’t worry about Packet’s more complex collaboration features because I don’t need to use them.

To create a project, click on the Create Your First Project button on the Packet.net web page.

Next, enter your billing information. Then scroll to the bottom of the page and click on the Create Project button.

Now we have created a project and are able to add servers and resources to it.

Generate and store SSH Keys

You must use SSH to access the terminal interface on a Packet server. Packet servers do not support passwords for root access1. You must provide an SSH key to get root access.

In this chapter, I show how to generate an SSH key pair on your local computer, then copy the public key to the Personal Keys section on the Packet web app. Every server you create will automatically have this public SSH key installed.

Microsoft Windows

The most up-to-date Windows systems may include command-line tools that can generate SSH keys, but most users will use the PuTTYgen application to generate SSH keys. PuTTYgen is part of the PuTTY suite of applications.

You may download the PuTTY installer from the PuTTY developer’s web site. Install it by double-clicking on the installer file and following the prompts.

Next, start the PuTTYgen application. All the default settings are OK. Click on the Generate button.

Follow the instructions to move the mouse over the PuTTYgen window to provide random inputs for the key generator. After a while, the key will be generated.

Click on the Save private key button. Choose the key file name. I chose to call it private-key.ppk. Remember to keep it in a safe location.

Copy the public key text that appears in the Public key for pasting into OpenSSH authorized_keys file field to your system clipboard.

You now have a private key file saved on your hard drive and the public key text is available in your clipboard, ready to paste to Packet’s web app.

Mac

On your Mac host computer, open a Terminal window and create an SSH key pair with the command:

$ ssh-keygen -t rsa

The tool will ask you for the file name of the key pair. In my case, I want to create a key pair named packet so I enter packet at the prompt.

Enter file in which to save the key (/Users/fake/.ssh/id_rsa): packet

The tool will ask you for a passphrase. I skip the passphrase by pressing the Enter key twice.

The ssh-keygen tool creates the SSH key pair and stores both the private and public key files in the current working directory, unless you specify the full path when prompted for the file name. So it is a good idea to navigate to the ~/.ssh directory before running the command. In my case, the files are:

  • packet is the private key file
  • packet.pub is the public key file

List the files in the directory:

$ ls -l
total 12
-rw-r--r--  1 blinklet  staff  3061 11 Jun  2016 known_hosts
-rw-------  1 blinklet  staff  1675 27 Sep 17:54 packet
-rw-r--r--  1 blinklet  staff   401 27 Sep 17:54 packet.pub

Next, copy the public key to the clipboard so you can add it to the Packet.net system in the next step. List the content of the public key file in the terminal, then select the text and copy it to the clipboard. In my example, the public key is named packet.pub.

$ cat ~/.ssh/packet.pub
ssh-rsa AAAAxxxxThis-is-a-Fake-key-xxxxBAQDaUf4Z0W2xxxxThis-is-a-Fake-key-xxxxPluzzfoHYHA+LBe+Z8lgnVpgsxxxxThis-is-a-Fake-key-xxxxtbuGovSb3HWDJCf1BeCtZUCWmxxxxThis-is-a-Fake-key-xxxxx2LxxxxThis-is-a-Fake-key-xxxx7H7bwgBl+n72BikqtzjKZGo2xxxxThis-is-a-Fake-key-xxxxa+YyHyD0zzzzz4S4YH4ry6o4LWxxxxThis-is-a-Fake-key-xxxxF7JXIzP5xxxxThis-is-a-Fake-key-xxxxzzzhEKK7/3u7ki2zz2tsfakedRU3 fake@iMac.local

Linux

On your Linux host computer, open a Terminal window and create an SSH key pair with the command:

$ ssh-keygen -t rsa

The tool will ask you for the file name of the key pair. In my case, I want to create a key pair named packet so I enter packet at the prompt.

Enter file in which to save the key (/home/fake/.ssh/id_rsa): packet

The tool will ask you for a passphrase. I skip the passphrase by pressing the Enter key twice.

The ssh-keygen tool creates the SSH key pair and stores both the private and public key files in the current working directory, unless you specify the full path when prompted for the file name. So it is a good idea to navigate to the ~/.ssh directory before running the command. In my case, the files are:

  • packet is the private key file
  • packet.pub is the public key file

List the files in the directory:

$ ls -l
total 12
-rw------- 1 ubuntu ubuntu  395 Sep  6 17:33 authorized_keys
-rw------- 1 ubuntu ubuntu 1679 Sep 25 13:31 packet
-rw-r--r-- 1 ubuntu ubuntu  403 Sep 25 13:31 packet.pub
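Incidentally, the interactive prompts can be skipped entirely; a sketch of the non-interactive form, assuming an empty passphrase is acceptable:

```shell
# -f sets the output file name, -N "" sets an empty passphrase
ssh-keygen -t rsa -f ~/.ssh/packet -N ""
```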

Next, copy the public key to the clipboard so you can add it to the Packet.net system in the next step. In Linux, list the content of the public key file in the terminal, select the text and copy it to the clipboard. In my example, the public key is named packet.pub.

$ cat ~/.ssh/packet.pub
ssh-rsa AAAAxxxxThis-is-a-Fake-key-xxxxBAQDaUf4Z0W2xxxxThis-is-a-Fake-key-xxxxPluzzfoHYHA+LBe+Z8lgnVpgsxxxxThis-is-a-Fake-key-xxxxtbuGovSb3HWDJCf1BeCtZUCWmxxxxThis-is-a-Fake-key-xxxxx2LxxxxThis-is-a-Fake-key-xxxx7H7bwgBl+n72BikqtzjKZGo2xxxxThis-is-a-Fake-key-xxxxa+YyHyD0zzzzz4S4YH4ry6o4LWxxxxThis-is-a-Fake-key-xxxxF7JXIzP5xxxxThis-is-a-Fake-key-xxxxzzzhEKK7/3u7ki2zz2tsfakedRU3 fake@t420

Copy public key to Packet.net

On the Packet.net web app, go to the SSH Keys tab and click on the blue “plus” sign to add a new SSH key.

In the screen that appears, paste the text you copied from the screen during the key pair generation step above to the Key text field.

Give the key a title in the Title field. I chose to call my key Public-Key.

Then, choose the location. I chose to store this in Personal Keys so that I can always use the same key for multiple projects. My use-case is simple so I am keeping my key-management scheme simple.

Then, click the blue Add button. You will now see that your public key is saved in the Packet.net web app.

Deploy a Server

Use the Packet.net web app to deploy a new server. Click on the Manage tab and then click on the project name, then click on the Deploy Server button.

Next, enter the information about the server:

  • Choose a server name
  • Select the type of server. I chose the Type 1 “Workhorse” server
  • Choose the server’s operating system
  • Choose the server’s location. I chose to deploy the server in a location that also supports Packet’s Elastic Block Storage service

Then click the blue Deploy button. The server will take several minutes to start.

Since we also plan to use Packet’s elastic block storage (EBS) service, we need to deploy a server in a location where EBS is available. At the time this post was written, EBS is only available in Packet’s EWR1, SJC1, and AMS locations. In my case, I chose to use Packet’s New Jersey location, EWR1.

SSH client and X server on local machine

To connect to the terminal on the remote Packet server, we must use SSH. Since we plan to eventually run software that supports X Windows, we also need to have an X server running on our local machine. When we log into the remote Packet server, we will enable X tunneling.

If you have a Windows computer, you need to install and run both an SSH client and an X server application. If you have a Mac computer, you already have an SSH client available but you still need to install an X server. If you have a Linux computer, you already have everything you need installed.

See below for the procedures required to configure an SSH client and X server on each of the three major operating systems and to log into the remote Packet server with X tunneling enabled.

First, make a note of — or copy to the clipboard — the IPv4 address of the Packet.net server you deployed in the previous step. You will need it for the SSH configurations below.

Windows

In Windows, we will use the PuTTY SSH client. First, we need to set up the parameters in the PuTTY application and then save them for future use. Then we will use PuTTY to log into the remote Packet server.

Click on the Session tab in PuTTY. Enter the Packet server’s IPv4 address. All other settings are OK.

Next, click on the Connection tab. Enable keepalives by setting the Seconds between keepalives field to a value other than zero. I chose 20 seconds. All other settings are OK.

Next, click on the Data tab. Enter the userid root into the Auto-login username field. All other settings are OK.

Next, click on the Auth tab. Click on the browse button and navigate to the private SSH key file you previously generated. Select that key. All other settings are OK.

Next, click on the X11 tab and check the Enable X11 forwarding check box. Enter the display localhost:0 into the X display location field. All other values are OK.

Finally, go back to the Session tab. Enter a name for this session in the Saved Sessions field and click the Save button. Now you can re-use all the configurations when you need to login to the Packet server.

To log in to the remote Packet server, click on the session in the Saved Sessions box and click Load. Note that, if you have deleted a server and are starting again with a new server, you will need to change the IP address.

Then click the Open button. If you are logging into a new server for the first time you will see a security alert. Click either Yes or No to proceed.

Now a terminal window will appear on your desktop. You have root access to your Packet server and may now configure it to suit your needs.

Mac

Mac OS X does not have an X Server installed by default. You need to install the XQuartz X server. Follow the instructions available at https://www.xquartz.org/.

Next, run the SSH command to log into the Packet server. Use the -Y option to set up X forwarding on the SSH tunnel. For example:

$ ssh -Y -i ~/.ssh/packet root@147.75.73.83

You need the -Y option, rather than -X, to tunnel X Windows because Mac OS X has stricter security defaults.

Now a terminal window will appear on your desktop. You have root access to your Packet server.

Linux

From your Linux host computer you may SSH to the Packet server using the ssh command with the -i and -X options, the private key file, and the Packet server’s userid and IP address (or URL):

$ ssh -X -i ~/.ssh/packet root@147.75.79.221
root@virtual:~#

The -X flag enables X forwarding on this SSH connection. Depending on the Linux distribution you choose to run on your Packet server, you may need to modify some configuration files to enable X forwarding over an SSH tunnel. I found that the default configuration worked without any modifications when I used the Ubuntu 16.10 operating system provided by Packet.
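If X forwarding does not work out of the box, the usual place to check is the SSH daemon configuration on the server; a quick sketch of the check (the exact restart command depends on your distribution):

```shell
# X11Forwarding must be set to "yes" in the server's sshd configuration
grep -i 'X11Forwarding' /etc/ssh/sshd_config
# If it reports "X11Forwarding no", change it to "yes", then restart
# the SSH daemon, for example: service ssh restart
```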

Now a terminal window will appear on your desktop. You have root access to your Packet server and may now configure it to suit your needs.

Set up the Packet.net server

After logging into the remote packet server, make some initial configuration changes to set up an X11 client. I ran the following commands to set up my server for a few basic tests:

# apt-get update
# apt-get install -y xorg

Test X11

To ensure that the Packet server can host X applications, I test the SSH tunnel using the xeyes command. If SSH and X forwarding are configured correctly, the xeyes application running on the Packet server should open an X window on my local desktop.
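For reference, the test is just one command on the Packet server; if xeyes is not already installed, it is packaged in x11-apps on Ubuntu (an assumption based on the operating system I chose):

```shell
# Install xeyes if the xorg install above did not pull it in
apt-get install -y x11-apps
# Opens an X window on the local desktop when forwarding works
xeyes
```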

For example, the screenshot below shows the xeyes window on top of the Terminal window on my Mac.

Quit xeyes by entering ctrl-c on the keyboard.

Create block storage

Packet offers a block storage service that can be added to any deployed server. Block storage allows users to create disks that save data for future use. As mentioned above, one way to minimize the costs of using Packet servers is to store data on a block storage volume and delete servers when they are not in use. The saved data may be used to speed up the configuration of a new server when you are ready to use Packet again. Maintaining block storage is much less expensive than keeping a server running.

Packet.net offers block storage in a few of its locations. See the packet.net web site for supported locations. For example, I am using the New Jersey location because it supports block storage.

For my research, I use the Standard Tier block storage service. Standard Tier costs about one one-hundredth of a penny per Gigabyte per hour. For example, 20 GB of storage will cost about $1.50 per month. Standard Tier disk performance, in terms of operations per second, is five times higher than the performance of a typical 7200 RPM HD so it is more than enough for my individual needs.

Create a volume in your Packet portal

Packet.net provides documentation describing how to set up block storage.

In the Packet.net web app, click on your project. On the project page go to the Storage tab and click on the green New Block Storage button.

On the next screen, choose the block storage size, performance tier, and location. Choose a smaller size to start. You can increase it later, but you cannot decrease it. I am starting with 20 GB.

Click on the blue Deploy button. You will see the new block storage volume created. Note the volume name; you will use it later.

In this example, the volume name is volume-6e56c556.

Route the Volume to the Server

Once a block storage volume is created, or if one is already available, it must be connected to a server in order to be used. Use the Packet web app to set up the network connection between the block storage volume and a Packet server.

First, click on the block storage volume. In this example we have only one volume named volume-6e56c556.

In the Storage Details box, select the Packet server in the Connected to field and click the small Attach button next to it.

Then, click the Save button at the bottom of the box. Now, the new volume is connected to the server.

This is like connecting a new storage device to a computer. The “physical” connection is completed but in the next step we still need to configure the server so it recognizes the new device.

Run the Attach Scripts

To complete the attachment process, log in to your Packet server via SSH, and run the Packet attach script. This script should already be installed on the server. If it is not installed, you may download the script from Packet’s GitHub repository.

To run the Packet attach script, execute the following command on the Packet server:

# packet-block-storage-attach -m queue

The command will output a few lines of text. The last line contains the device path that the server will associate with the block storage volume. In this example, the output is:

Block device /dev/mapper/volume-6e56c556 is available for use

Note the block device path from the command output. You will use it later. In this case, the block device path is /dev/mapper/volume-6e56c556.

Partition the Block Device

Since this is a new device, the server cannot use it until we partition it and build the file system. We will use the fdisk utility to partition the device, the kpartx utility to update the partition map, and the mkfs.ext4 utility to build an ext4 file system in the new partition.

As we saw after executing the attach command, the block device is located at /dev/mapper/volume-6e56c556.

Run the fdisk command on the new block device:

# fdisk /dev/mapper/volume-6e56c556

Respond to the fdisk utility’s prompts with the following commands:

  • Type n to create a new partition,
  • Type p to choose the primary type,
  • Press the Enter key three times to accept the default settings.
  • Type w to write and save the changes.

You may see a warning message when the fdisk utility quits. Ignore it.
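Those same answers can also be fed to fdisk non-interactively; a sketch (be careful: never run this against a device that holds data):

```shell
# One answer per line: n (new), p (primary), three empty lines to
# accept the default settings, and w to write the changes
printf 'n\np\n\n\n\nw\n' | fdisk /dev/mapper/volume-6e56c556
```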

Run the fdisk -l command to see the list of partitions. In the output of the command, under the Device section, you will find the new partition name. The new partition name in this example is /dev/mapper/volume-6e56c556-part1.

# fdisk -l

The Device section of the output will look like this:

Device                           Boot Start      End  Sectors Size Id Type
/dev/mapper/volume-6e56c556-part1      2048 41943039 41940992  20G 83 Linux


The server will not recognize the new partition because the Linux kernel has not updated its own partition table. Normally you might reboot the system to do this, but we can avoid rebooting by running the kpartx -u command, which makes the system re-read the partition table.

Update the partition device mappings for the new device:

# kpartx -u /dev/mapper/volume-6e56c556

Build the file system

Create a file system on the new partition:

# mkfs.ext4 /dev/mapper/volume-6e56c556-part1

Mount the block storage file system

To use the newly-created file system we need to mount it on the Packet server. In this example, I chose to create directory named /mnt/disk1 and use it as the mount point. Execute the following commands to mount the new file system to /mnt/disk1.

# mkdir /mnt/disk1
# mount -t ext4 /dev/mapper/volume-6e56c556-part1 /mnt/disk1

Should we need to reboot the server, we would lose the mount point. We need to update the fstab file so the disk remounts if we have to reboot the server. Add a new line to the fstab file with the following command:

# echo "/dev/mapper/volume-6e56c556-part1 /mnt/disk1 ext4 _netdev 0 0" >> /etc/fstab

Set up groups and group permissions

Remember, our plan is that we will save our data files in the block storage volume and attach it to a new server we create when we need to run a network emulation. Maybe we could invite another person to use the same infrastructure. That person might create a different user on their server.

To allow multiple users to write to the directories and files on the mounted block storage volume, create a new group to which we can add any new users. In this example, I chose the group name sims.

# groupadd sims

Next, use Linux file system access control lists to give any user belonging to the sims group access to files created by any other member of the sims group.

The easiest way to modify a file system’s ACL is by using the setfacl command. This is part of the acl package which is not installed by default. Install the acl package:

# apt-get update
# apt-get install acl

List the directory /mnt. We see the mounted block storage file system is owned by user root and group root.

# ls -l /mnt
total 4
drwxr-xr-x 3 root root 4096 Oct 24 19:15 disk1

Use the getfacl command to check the existing ACL of /mnt/disk1:

# getfacl /mnt/disk1
getfacl: Removing leading '/' from absolute path names
# file: mnt/disk1
# owner: root
# group: root
user::rwx
group::r-x
other::r-x

Next, use the setfacl command to allow any member of the group sims to write to the block storage file system mounted at /mnt/disk1 and then to make that setting the default setting for all files and directories created in /mnt/disk1:

# setfacl -m g:sims:rwx /mnt/disk1
# setfacl -dm g:sims:rwx /mnt/disk1

Check the results:

# getfacl /mnt/disk1
getfacl: Removing leading '/' from absolute path names
# file: mnt/disk1
# owner: root
# group: root
user::rwx
group::r-x
group:sims:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:group:sims:rwx
default:mask::rwx
default:other::r-x

We see that the group sims has full permissions on /mnt/disk1 and also has full permissions, by default, on any new files and directories created there.

To see how this works, create a file on /mnt/disk1 and list the directory:

# touch /mnt/disk1/test.txt
# ls -l /mnt/disk1
total 16
drwx------  2 root sims 16384 Oct 24 19:15 lost+found
-rw-rw-r--+ 1 root root     0 Oct 24 19:22 test.txt

The “+” sign in the directory listing shows we have set additional permissions on that file by default. In this case, any user who is a part of group “sims” should be able to write to that file or to any directory created by any other user on the attached volume.

Any new user we create, if the user is in the sims group, will be able to write to the file test.txt, above.

Create user accounts

When installing software and modifying configuration files, it is best practice to do so as a Linux user other than root and to use the sudo command when root privileges are required.

Create a normal user on this server and then use it for normal network emulation activities such as installing software and running simulations.

First, create the user. In this example, I am using the username brian.

# adduser brian

Next, add the user to the group sims

# usermod -aG sims brian 

Next, make the new user a sudo user so that we can run commands with root privileges, when needed.

# usermod -aG sudo brian

Verify that the new user is a member of the groups sims and sudo.

# groups brian
brian : brian sudo sims

So that we can log in to the Packet server using the new userid, copy the server’s SSH keys to the new user account. To create the files with the correct permissions, first change to the new user — in this case, brian:

# su brian
$

Now all commands are executed by userid brian.

Next, create the new user’s .ssh directory and copy the SSH keys from root to the new user’s directory. This will allow the new user to use the same SSH keys created by Packet when the server was originally created.

$ cd ~
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ sudo cat /root/.ssh/authorized_keys | tee /home/brian/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Log out of the server:

$ exit
# exit

Log back in as new user. Use your SSH client on your host computer with the new user id. See the instructions in the SSH client and X server on local machine chapter, above. Change the userid in that section from root to the new userid — in this example, change it to brian.

Install applications and load files into block storage

At this point, you may configure the server to suit your needs. For example, you may install network emulation programs and file systems.

To make rebuilding the configuration easier after deleting a server, modify the configuration files so that data generated by your activity is saved, or backed up, to the block storage volume mounted at /mnt/disk1.
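For example, a minimal sketch of backing up a working directory to the volume (the source path ~/simulations is a hypothetical placeholder):

```shell
# Copy a directory of results onto the mounted block storage volume;
# cp -a preserves ownership, permissions, and timestamps
mkdir -p /mnt/disk1/simulations
cp -a ~/simulations/. /mnt/disk1/simulations/
```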

List the files on the block storage volume with the commands:

# cd /mnt/disk1
# ls

In a future post, I will show how I installed and configured the Cloonix network emulator on a Packet server using block storage to greatly reduce the time required to rebuild a new Cloonix server whenever I want to run Cloonix on Packet.

Shutting Down

After using your Packet server to complete your work, and after saving your data and any files you want to use again to the block storage volume mounted at /mnt/disk1, shut down the server so that you are not billed for it while you are not using it.

To shut down, first unmount and detach the block storage volume. Then, delete the server.

Detach the block storage volume

Unmount the file system and detach the block storage volume from the server. It is important to properly unmount the file system to avoid the possibility of causing data corruption on it when you delete the server. Execute the command:

# umount /mnt/disk1

Next, detach the block storage volume by running the packet-block-storage-detach script:

# packet-block-storage-detach

You may need to wait about a minute before the device appears to be detached in the Packet portal.

Delete the Server

Delete the server on the Packet web app. Now you will no longer be charged by the hour for the server. You will continue to pay a very low fee for the block storage: US$ 1.50 per month in this case.

To delete the server, go to the Packet web app and select the project. Click the check-box next to the server you wish to delete, select the Delete action from the drop-down menu box below it, and click on the Apply button.

Starting up again

Imagine it is a few days later and you have some time to work on your project again. Now you need to start up a new Packet server, re-attach the block storage volume on which you saved your configurations and data, then re-install the software you need. To make this process faster, I recommend you write a startup script that you can save on the block storage volume.

Deploy a new server

To start a new server, follow the same steps I listed in the Deploy Server chapter above. Use the Packet.net web app to deploy a new server.

We already have a block storage volume saved in the Packet project. All our data files, configuration files, and startup scripts are saved on this volume. We need to attach the existing volume to the new server we deployed.

In the Packet.net web app, go to Storage tab and click on the volume you previously created. Follow the steps described in the Route the Volume to the Server chapter above. In this case, the volume is named volume-6e56c556.

In the Storage Details box, select the Packet server in the Connected to field and click the small ATTACH button next to it.

Then, click the Save button at the bottom of the box. Now, the new volume is connected to the server.

Configure the new server and storage

Now we rebuild our configuration on the new server. Mount the block storage volume and create a new user.

Log in to your Packet machine via SSH, and run the Packet attach script. Execute the following command on the Packet server:

# packet-block-storage-attach -m queue

Execute the following commands to mount the new file system to /mnt/disk1.

# mkdir /mnt/disk1
# mount -t ext4 /dev/mapper/volume-6e56c556-part1 /mnt/disk1

Update the fstab file so the disk remounts if we have to reboot the server. Add a new line to the fstab file with the following command:

# echo "/dev/mapper/volume-6e56c556-part1 /mnt/disk1 ext4 _netdev 0 0" >> /etc/fstab

Create the user

Create a normal user. It could be the same user you used before or it could be a new user. The most important point is that the new user must be a member of the group sims.

In this second example, I choose to create a different user named lester:

# adduser lester
# usermod -aG sims lester
# usermod -aG sudo lester
# su lester
$ cd ~
$ mkdir ~/.ssh
$ chmod 700 ~/.ssh
$ sudo cat /root/.ssh/authorized_keys | tee /home/lester/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ exit
# exit

Then log back in to the Packet server via SSH as the new user, lester.

Because we set up ACLs for the sims group, and because the file system configuration is maintained on the block storage volume regardless of the server to which it is attached, the user lester should be able to write to the previously saved file /mnt/disk1/test.txt, even though it is owned by the user brian.
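A quick way to confirm this, as a sketch run while logged in as lester:

```shell
# Append to the file brian created earlier; the default ACL entry
# for group sims is what makes this succeed
echo "written by lester" >> /mnt/disk1/test.txt
ls -l /mnt/disk1/test.txt
```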

Run startup script

Now that the server is attached to the block storage volume and it is mounted to /mnt/disk1, run any startup scripts you may have saved on the volume and/or copy any files over to the server’s SSD. This will allow you to quickly rebuild your system so you can continue with your project.

List the files on the block storage volume with the commands:

# cd /mnt/disk1
# ls

Conclusion

Packet enables me to run multiple virtual machines with hardware virtualization support on a dedicated remote server. Packet provides me access to a powerful remote server when I need it at low cost, and allows me to delete that server when I am not using it while maintaining data files on persistent block storage.

In the near future, I will write a post describing how to install, set up, and run the Cloonix network emulator on a remote server provided by Packet. Cloonix uses QEMU/KVM to build the virtual machines that implement different network nodes so it must run on a machine that has access to hardware-level virtualization support. The Cloonix development team recently updated Cloonix to support Cisco router images, which means I need a computer that is more powerful than my laptop computer to run network emulation scenarios that include Cisco images.


  1. Packet gives you a temporary root password that expires after 24 hours and only works when using Packet’s Console access feature, which is outside the scope of this post 


Install and run the Cloonix network emulator on Packet.net


This tutorial shows how to set up the Cloonix network emulator on a Packet.net server. It builds on top of my previous post about how to set up a virtualization server on Packet.net. Now, I focus on a specific case: setting up the Cloonix network emulator on the virtualization server. You should read my previous post before reading this one.

Running Cloonix on a remote server enables users to work with more complex network emulation scenarios than would be possible on a standard laptop computer. For example, Cloonix recently added a feature that allows users to run Cisco router images in a Cloonix network emulation scenario. Cisco router images require a large amount of computer resources, so I cannot run more than a few on my personal laptop computer. If I use a remote Packet server, I could run dozens of Cisco images in a network emulation scenario if I wanted to.

In this post, I will set up a Cloonix network emulation server on Packet.net so it can be started, stopped, and restarted relatively quickly.

Table of Contents

  1. Cloonix v37 overview
  2. Packet.net overview
  3. Tutorial summary
  4. Start a server and attach storage
  5. Load Files onto Block Storage Disk
  6. Set Up Cloonix on Server
  7. Run Cloonix on Packet.net
  8. Shutting Down so You Can Start Up Again
  9. Starting up again
  10. Example: Emulating Network of Cisco Routers on Cloonix
  11. Remote Cloonix Servers
  12. Conclusion
  13. Appendix A: 3router script

Bash prompt formats

In this post, I am jumping around a lot between my local PC and multiple userids on a remote Packet server. To show which computer and which user I am using in the command examples, I will include the bash prompt in all listings.

The prompts will be:

  • local@T420:$ → My local account on my personal Linux computer
  • root@packet:# → The default root account on the Packet.net server

  • brian@packet:$ → The user account I created on the Packet.net server which, in my examples, is brian.

  • # → A command run on a node in the network emulation scenario

Cloonix v37 overview

Cloonix is an open-source network emulator that can spawn QEMU-KVM guests and link them together through sockets emulating LAN wires. It can also copy files between the host computer (or remote computers) and guests and it can run commands in the guests without any prior configuration1.

New features in version 37

Cloonix is an active project that is regularly updated. The newest version is version 37. The last time I used Cloonix, I used version 33. Below, I highlight some of the major changes in Cloonix since version 33.

Cloonix now supports Cisco Cloud Services Routers. The Cloonix web site provides instructions on how to build a QCOW image from a Cisco CSR 1000v ISO image. Users must provide their own Cisco software because it would be illegal for Cloonix to offer pre-built Cisco images. The main reason I am using a Packet.net server to run Cloonix is that Cisco router images require more CPU and RAM resources than my laptop computer supports. If you use only open-source software to build and emulate network nodes, Cloonix works well on any modern laptop computer.

In version 37, Cloonix added two new client commands: cloonix_osh and cloonix_ocp. These are similar to the already-existing cloonix_ssh and cloonix_scp commands, except that they work via Linux sockets connected to the Cloonix nat object. The targeted guest's DHCP-configured interface is connected to the nat object. This enables Cloonix users to connect to nodes running proprietary operating systems, like Cisco, that cannot be pre-configured or cannot be modified to run the Cloonix agent.

Packet.net overview

Packet is a hardware-as-a-service vendor that provides dedicated servers on demand at very low cost, which enables users to perform activities they could not run on a normal personal computer.

Since each Packet server is a dedicated server, users can run any hypervisor on the Packet server. It is usually not possible to run a hypervisor on a VM provided by other cloud vendors like Amazon2. For now, Packet.net seems to me to offer the most cost-effective way to run open-source network emulators that use KVM virtual machines in the cloud.

In my previous blog post about building a virtualization server on Packet, I described how to deploy a Packet server and attach it to a block storage device using the Packet web app. You must follow the procedures in my previous post before starting the procedures in this post.

Tutorial summary

After completing the procedures in my previous post, I assume you already have the following infrastructure set up on your Packet.net project:

  1. A Packet project is already created and SSH keys saved
  2. The project contains a block storage volume already formatted with an ext4 file system
  3. That volume’s file system has ACLs configured to allow members of the sims group full privileges on all files and folders created on the volume
  4. The volume is not currently attached to a server
  5. No Packet servers are deployed

This tutorial will walk through the Cloonix-set-up-on-Packet process in two phases:

  1. Phase 1: Deploy a small server which is used to create all directories and load all required Cloonix files onto the block storage volume.
    • This minimizes cost while we set up
    • We will also run a simple Cloonix network emulation project and save the project files on the block storage volume, as a demonstration of how to use Cloonix in this environment.
    • Then we will detach the block storage volume, shut the server down, and delete the server.
  2. Phase 2: Deploy a powerful — and more expensive — server to run a complex Cloonix network emulation using resource-hungry Cisco images.
    • I will demonstrate how this procedure allows users to swap the “brains” of the Cloonix emulator with more powerful servers as needed, while preserving the emulator’s data.
    • We will re-attach the block storage volume and rebuild the Cloonix system from files stored on the volume.

Start a server and attach storage

Use the Packet.net web app to provision a new Packet server in your project by performing the following steps, which are described in more detail in my previous post about setting up a virtualization server:

  1. Start a small Type 0 server using the Packet web app
  2. Attach the block storage volume to the server using the Packet web app
  3. Log into the server using SSH from your Windows, Mac, or Linux PC.

For example, to log into the new Packet server from a Linux or Mac computer, run the following command:

local@T420:$ ssh -X -i ~/.ssh/packet root@203.0.113.25

In this example, the server’s IP address is 203.0.113.253 and the private SSH key is in the file ~/.ssh/packet.

Next, perform the following steps in the terminal window running on the Packet server. All these steps are also described in more detail in my previous post:

  1. Create a new user
    • Assign the user to the sims and sudo groups
    • Copy SSH keys to the new user
    • Log out and then log back in as the new user
  2. Mount the block storage volume

The commands I use are listed below. In my example, the user I create has username brian:

root@packet:# addgroup sims
root@packet:# adduser brian
root@packet:# usermod -aG sims brian 
root@packet:# usermod -aG sudo brian

Copy the SSH keys to the new user:

root@packet:# su brian
brian@packet:$ cd ~
brian@packet:$ mkdir ~/.ssh
brian@packet:$ chmod 700 ~/.ssh
brian@packet:$ sudo cat /root/.ssh/authorized_keys | tee /home/brian/.ssh/authorized_keys
brian@packet:$ chmod 600 ~/.ssh/authorized_keys
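The modes matter here: sshd ignores an authorized_keys file whose directory or file permissions are too loose. The same copy can be condensed with install(1), which creates the directory and sets modes in one step. This is only a sketch; copy_keys is a name I made up, and the paths are parameters rather than the hard-coded ones above:

```shell
# copy_keys SRC DEST_HOME
# Install an authorized_keys file into DEST_HOME/.ssh with the modes
# sshd requires: 700 on the .ssh directory, 600 on the key file.
copy_keys() {
    install -d -m 700 "$2/.ssh"
    install -m 600 "$1" "$2/.ssh/authorized_keys"
}
```

If you run it as root, remember to chown the resulting .ssh directory to the new user afterwards; the tee pipeline above avoids that step by running as the new user.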

Log out of the server:

brian@packet:$ exit
root@packet:# exit

Log back in as the new user. For example:

local@T420:$ ssh -X -i ~/.ssh/packet brian@203.0.113.25

Then mount the partition on the block storage volume which, in this example, is /dev/mapper/volume-4d03ece6-part1.

brian@packet:$ sudo packet-block-storage-attach -m queue
brian@packet:$ sudo mkdir /mnt/disk1
brian@packet:$ sudo mount -t ext4 /dev/mapper/volume-4d03ece6-part1 /mnt/disk1
brian@packet:$ echo '/dev/mapper/volume-4d03ece6-part1 /mnt/disk1 ext4 _netdev 0 0' | sudo tee -a /etc/fstab

Note that the ACLs for the block storage volume were configured in my previous post so I do not need to install the acl package again or change any ACLs at this time.

Load files onto block storage disk

Each Packet server comes with SSD storage for the operating system and data files. When I compile Cloonix, the Cloonix files are installed on the SSD. However, I choose to keep all my Cloonix source files and all my Cloonix data files on the attached block storage volume.

I could choose to copy all these files to the SSD when starting a project and then copy any modified or new data files back to the block storage volume before I shut down and delete the server. However, I found it was easier to keep all files on the block storage volume when running Cloonix. Then I did not have to remember which files to copy back from the SSD to the block storage volume.

The block storage volume performance is fast enough for my needs. If you need very high performance, you might consider copying key files over to the SSD.

Load Cloonix source files onto block storage

Clone Cloonix code and compile it on the new server. You may need to install git:

brian@packet:$ sudo apt-get update
brian@packet:$ sudo apt-get install git
brian@packet:$ cd /mnt/disk1
brian@packet:$ git clone https://github.com/clownix/cloonix.git

The Cloonix source code and install scripts are now stored in the directory /mnt/disk1/cloonix/.

Updates

If you are updating existing Cloonix files on the block storage volume, run the following commands:

brian@packet:$ cd /mnt/disk1/cloonix
brian@packet:$ git pull

Load Cloonix file systems onto block storage

Download and create Cloonix project files on the block storage device so you can quickly access them every time you create a Cloonix server on Packet.

Go to the mounted block storage volume:

brian@packet:$ cd /mnt/disk1

Create Cloonix directories:

brian@packet:$ mkdir cloonix_data
brian@packet:$ mkdir cloonix_data/bulk

Download Cloonix file systems into the bulk directory. I chose the following Cloonix file systems for my experiments:

  • Debian Jessie – because, while it is an older release, it is used in most of the Cloonix demo scripts
  • Ubuntu Zesty – because the latest version of Ubuntu is nice to have

Run the following commands to download the required files. Note that these links may change as new versions of Cloonix are released.

brian@packet:$ cd cloonix_data/bulk
brian@packet:$ wget http://cloonix.fr/bulk_stored/v-37-02/jessie.qcow2.xz
brian@packet:$ wget http://cloonix.fr/bulk_stored/v-38-00/zesty.qcow2.xz
brian@packet:$ unxz *.xz

I also downloaded a Cisco QCOW image and named it cisco_16.03.04.qcow2 so that it works with the Cloonix Cisco demo script, but I cannot provide the reader with a link. Readers must have their own legally-obtained Cisco ISO image and convert it to a QCOW image using the instructions in the Cloonix documentation, then copy it to the bulk directory on the block storage device along with the other Cloonix file systems.

Load Cloonix demo scripts onto block storage

Download and unpack the Cloonix demo scripts to the block device:

brian@packet:$ cd /mnt/disk1/
brian@packet:$ wget http://cloonix.fr/demo_stored/v-37-02/cloonix_demo_all.tar.gz
brian@packet:$ tar -xvf cloonix_demo_all.tar.gz
brian@packet:$ rm cloonix_demo_all.tar.gz

This will create a directory named /mnt/disk1/cloonix_demo_all containing the demo files. Remember, because we set up ACLs for group permissions on this volume, all the files are in group sims so we can make the files available to other users by adding those users to the sims group.
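A missing sims membership is the usual cause of "permission denied" on this volume, so before pointing a collaborator at the files it can be worth checking their groups. A small helper sketch (in_group is my own name for it):

```shell
# in_group USER GROUP
# Succeed (exit 0) if USER is a member of GROUP.
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}
```

For example, in_group lester sims || sudo usermod -aG sims lester adds the user only when needed; note that a fresh login is required before new group membership takes effect.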

Set up Cloonix on server

All the Cloonix files are stored on the block storage volume mounted at /mnt/disk1/. When we compile Cloonix, the executable and data files will be built on the Packet server’s SSD in the directory /usr/local/bin/cloonix/. This means that when we finish a project and delete the server, we will lose the Cloonix installation. We have to build Cloonix every time we start a new server.

Before building Cloonix, enable KVM access for all users. KVM must be accessible before we run the Cloonix install script.

brian@packet:$ sudo chmod 666 /dev/kvm
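If the build or the VMs later fail with KVM errors, the usual cause is that the CPU virtualization extensions are not exposed. Here is a quick check you can run first; it only inspects the host and changes nothing, and check_kvm is my own helper name:

```shell
# check_kvm
# Report whether this host exposes hardware virtualization to userspace:
# the CPU must advertise vmx (Intel) or svm (AMD) and /dev/kvm must exist.
check_kvm() {
    if grep -Eq 'vmx|svm' /proc/cpuinfo 2>/dev/null && [ -e /dev/kvm ]; then
        echo "KVM available"
    else
        echo "no KVM support"
    fi
}
```

On a Packet bare-metal server this should print "KVM available"; inside most ordinary cloud VMs it will not.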

Compile Cloonix from the source files stored on the block storage volume:

brian@packet:$ cd /mnt/disk1/cloonix
brian@packet:$ sudo ./install_depends build
brian@packet:$ sudo apt-get install wireshark-qt 
brian@packet:$ sudo ./allclean  
brian@packet:$ ./doitall

Edit the Cloonix configuration file so that Cloonix will look for its working directories on the block storage volume:

brian@packet:$ sudo nano /usr/local/bin/cloonix/cloonix_config

Change the default directories at the top of the file to:

CLOONIX_WORK=/mnt/disk1/cloonix_data
CLOONIX_BULK=/mnt/disk1/cloonix_data/bulk

Save the file.
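Since this edit has to be repeated on every freshly built server, it can be scripted. Below is a sketch; it assumes cloonix_config keeps these settings as simple KEY=value lines, as shown above, and set_cloonix_dirs is my own function name:

```shell
# set_cloonix_dirs CONFIG WORK_DIR BULK_DIR
# Rewrite the CLOONIX_WORK and CLOONIX_BULK lines in a cloonix_config
# file so Cloonix uses the directories on the block storage volume.
set_cloonix_dirs() {
    sed -i \
        -e "s|^CLOONIX_WORK=.*|CLOONIX_WORK=$2|" \
        -e "s|^CLOONIX_BULK=.*|CLOONIX_BULK=$3|" \
        "$1"
}
```

On the server you would run it under sudo against /usr/local/bin/cloonix/cloonix_config.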

Run Cloonix on Packet.net

Users can create network emulation scenarios by building nodes and networks in the Cloonix GUI. In my example, I will use a cloonix_cli script to build a network quickly.

I created a three-router network script in Appendix A: The 3router.sh script. Copy the script from Appendix A so you can paste it into a file on the server.

I chose to create a directory named brian to store my scripts:

brian@packet:$ mkdir /mnt/disk1/cloonix_demo_all/brian
brian@packet:$ cd /mnt/disk1/cloonix_demo_all/brian
brian@packet:$ nano 3router.sh

Then paste the script text from Appendix A into the file. Save the file and then set the permissions to enable running the file:

brian@packet:$ chmod 775 3router.sh

Finally, run the file:

brian@packet:$ ./3router.sh

If you have configured everything correctly on the remote server and on your host PC, you will see the Cloonix GUI appear in an X window on your host PC and it will eventually display a three router network, as shown below.

Cloonix GUI X window: results of 3router script.

To test the emulation, double-click on the node PC-1 and, in the Xterm window that appears, enter the following command:

# ping -c 2 192.168.2.1

This should send ICMP packets to node PC-2. You should see successful ping messages in the Xterm window, as shown below:

Ping command running on emulated Linux node.

Now you may perform more investigations and make configuration changes.

Shutting Down so You Can Start Up Again

As discussed in my previous post, leaving a Packet server running while you are not using it costs you money. Even when the server is stopped, Packet will bill you the same hourly rate. You must delete the server to stop the charges.

We need to shut down the Cloonix server properly before deleting it so that our files are saved. The procedures we followed to set up Cloonix ensure that our source code and our data files — such as topology scripts and file systems — will be saved on the block storage volume and can be used when we rebuild a new Cloonix server.

First, save your work. Save the network topology and the configuration changes made on each node in the topology so you can restart the network emulation using the saved state in the future. Run the following command to save all files related to the network emulation scenario:

brian@packet:$ cloonix_cli nemo sav topo /mnt/disk1/cloonix_data/3routers

Then, kill the emulation:

brian@packet:$ cloonix_cli nemo kil

Remember that the next person to use Cloonix may have a different username configured on their server. Set the permissions of the new directory 3routers that Cloonix created when saving the topology so that users from the sims group can access the files:

brian@packet:$ chmod 775 /mnt/disk1/cloonix_data/3routers
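Note that chmod on the top-level directory alone may not be enough if Cloonix created files underneath it with restrictive modes. A recursive variant is sketched below; the group is a parameter (on the server it would be sims), and share_with_group is my own name:

```shell
# share_with_group DIR GROUP
# Give GROUP read/write access to everything under DIR, and set the
# setgid bit on directories so files created later inherit GROUP.
share_with_group() {
    chgrp -R "$2" "$1"
    chmod -R g+rwX "$1"   # rw on files; x only on dirs and executables
    find "$1" -type d -exec chmod g+s {} +
}
```

For example: share_with_group /mnt/disk1/cloonix_data/3routers sims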

Then, unmount and detach the block storage volume:

brian@packet:$ cd
brian@packet:$ sudo umount /mnt/disk1
brian@packet:$ sudo packet-block-storage-detach

Then, detach the block storage volume and delete the Packet server in the Packet.net web app.

The data we saved on the block storage volume remains there and we can use it when we are ready to run Cloonix again, on a new Packet server. While it is active, the block storage volume costs less than $1.50 per month.

Starting up again

At this point, we have a block storage volume on Packet.net that contains all the files we need to start up a new server, build Cloonix, and resume any activity from files we saved during our previous experiments.

Let’s start up a new server and get to the point where we can resume from saved files.

Also, let’s use a more powerful server this time. Previously, we used a Type 0 server which is low-cost but also low-powered. This time, start a Type 1 Packet server. This demonstrates how the “block storage volume method” enables us to swap out the “compute engine” any time we need more power while reusing all the config files and other data from previous experiments.

Follow the steps from the Start a server and attach storage chapter, above. Then, set up Cloonix on the new server using the steps from the Set up Cloonix on server chapter, above.

I summarize all the commands below:

Login to the new Packet server as root

local@T420:$ ssh -X -i ~/.ssh/packet root@203.0.113.25

Create new user. Remember, user must be a member of group sims.

root@packet:# addgroup sims
root@packet:# adduser brian
root@packet:# usermod -aG sims brian 
root@packet:# usermod -aG sudo brian
root@packet:# su brian
brian@packet:$ cd ~
brian@packet:$ mkdir ~/.ssh
brian@packet:$ chmod 700 ~/.ssh
brian@packet:$ sudo cat /root/.ssh/authorized_keys | tee /home/brian/.ssh/authorized_keys
brian@packet:$ chmod 600 ~/.ssh/authorized_keys

Log out of the server:

brian@packet:$ exit
root@packet:# exit

From your local PC, login as new user:

local@T420:$ ssh -X -i ~/.ssh/packet brian@203.0.113.25

Remember to also start Xming on your Windows PC or XQuartz on your Mac. If you are using a Linux PC, X will already be started.

Then, mount the block storage volume:

brian@packet:$ sudo packet-block-storage-attach -m queue
brian@packet:$ sudo mkdir /mnt/disk1
brian@packet:$ sudo mount -t ext4 /dev/mapper/volume-e7bbc5cb-part1 /mnt/disk1
brian@packet:$ echo '/dev/mapper/volume-e7bbc5cb-part1 /mnt/disk1 ext4 _netdev 0 0' | sudo tee -a /etc/fstab

Compile Cloonix from the source files stored on the block storage volume:

brian@packet:$ cd /mnt/disk1/cloonix
brian@packet:$ sudo ./install_depends build
brian@packet:$ sudo apt-get install wireshark-qt
brian@packet:$ sudo ./allclean  
brian@packet:$ sudo chmod 666 /dev/kvm
brian@packet:$ ./doitall

Edit the Cloonix configuration file so that Cloonix will look for its working directories on the block storage volume:

brian@packet:$ sudo nano /usr/local/bin/cloonix/cloonix_config

Change the default directories at the top of the file to:

CLOONIX_WORK=/mnt/disk1/cloonix_data
CLOONIX_BULK=/mnt/disk1/cloonix_data/bulk

Save the file.

Now, start the saved topology 3routers:

brian@packet:$ /mnt/disk1/cloonix_data/3routers/nemo.sh

This will start up the network emulation scenario from the saved state, with all configuration changes made during the previous experiments intact.

After completing any experiments, you may save the topology again and then kill it. Remember that you must use a new name every time you save. You cannot save over an existing topology.

brian@packet:$ cloonix_cli nemo sav topo /mnt/disk1/cloonix_data/3routers-v2
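One way to guarantee a fresh name on every save is to suffix a timestamp. A small sketch (unique_save_name is my own helper name):

```shell
# unique_save_name BASE
# Print BASE suffixed with the current date and time, so each save
# goes to a directory name Cloonix has not used before.
unique_save_name() {
    printf '%s-%s\n' "$1" "$(date +%Y%m%d-%H%M%S)"
}
```

For example: cloonix_cli nemo sav topo "$(unique_save_name /mnt/disk1/cloonix_data/3routers)"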

Then kill the emulation:

brian@packet:$ cloonix_cli nemo kil

Example: Cisco routers on Cloonix

As I wrote earlier, I am using Packet.net so I can run network emulation scenarios that cannot run on my personal laptop computer. For example, I can now run resource-hungry commercial router images in Cloonix. Cloonix v37 adds some features to support Cisco IOS-on-Linux images.

In this example, I will run the Cloonix Cisco script that is provided with the standard Cloonix demos. I have already downloaded and saved the Cisco file system on the block storage volume.

Go to the Cisco demo folder:

brian@packet:$ cd /mnt/disk1/cloonix_demo_all/cisco

NOTE: When I ran the Cisco demo script, I found an error. I corresponded with the Cloonix developers and they told me to modify the script, replacing "mud" with "cnf" in all occurrences. So, if the Cisco script does not work for you, check the file and make this change.

Run the Cisco script:

brian@packet:$ ./cisco.sh

After the script runs, start the Cloonix GUI to view the topology:

brian@packet:$ cloonix_gui nemo

Cisco servers stay red in the Cloonix GUI because they do not — and cannot — have the cloonix agent installed. But they are really running and you can connect to the Cisco terminal interface by double-clicking on each Cisco router on the Cloonix GUI.

The Cisco router password is “cisco”.

Cisco routers running on Packet.net server

Now we can add Cisco routers into network emulation experiments using the Cloonix network emulator.

Using a remote Cloonix Server

We’ve accessed the Cloonix GUI using X windows because this will work with your local PC for all major operating systems: Windows, Mac, and Linux. But, if you are using a Linux PC and would like a smoother GUI experience you may install Cloonix on both your local PC and on the remote Packet server and then configure your local PC as the Cloonix GUI and the remote Packet server as the Cloonix server.

First, install Cloonix on your local Linux PC:

local@T420:$ sudo apt-get update
local@T420:$ sudo apt-get install git
local@T420:$ cd ~
local@T420:$ git clone https://github.com/clownix/cloonix.git
local@T420:$ sudo chmod 666 /dev/kvm
local@T420:$ cd ~/cloonix
local@T420:$ sudo ./install_depends build
local@T420:$ sudo apt-get install wireshark-qt   
local@T420:$ ./doitall

Edit the Cloonix configuration file on the local PC so that the Cloonix GUI can connect to the remote Cloonix server: change the IP address in the nemo entry to the remote Packet server's address.

local@T420:$ sudo nano /usr/local/bin/cloonix/cloonix_config

Change the default configuration of the nemo server to:

CLOONIX_NET: nemo {
  cloonix_ip       203.0.113.25
  cloonix_port     43211
  cloonix_passwd   nemoclown
}

Then save the file.

Now, assuming you have started the Cloonix server nemo on the remote Packet.net server, you may run the Cloonix GUI on your local machine and interact with the network topology running on the remote server.

local@T420:$ cloonix_gui nemo

Cloonix server running on remote server, Cloonix GUI running on local PC.

Above, we see the Cloonix GUI running on my local laptop computer. The Cloonix server is running on a remote Packet server.

Conclusion

I showed how we can cost-effectively install and use the Cloonix network emulator on a powerful remote server provided by Packet.net. The main point of this post is to show how one can use the block storage to save project files and speed up the re-provisioning of new servers when you need them. This makes it easier to control costs since you must completely delete a server to stop charges, so you will want a way to rebuild your network emulation setup to a known state, quickly and relatively easily.

Similar projects

Similar work has been published related to running other network emulators on Packet.net. The GNS3 team described how to run the GNS3 network emulator on Packet.net, Justin Guagliata described how to run the EVE-NG network emulator on Packet.net, and the Packet.net team described how to run the Cisco VIRL network emulator on Packet.net.

Appendix A: The 3router.sh script


#!/bin/bash

#-------------------------------------------------------
# Variables
#
# Modify these variables to match the directory
# structure you created on your own system and to
# match the file system you chose to use
#-------------------------------------------------------
DIST=jessie
NET=nemo
ROUTER_NAME=router-
PC_NAME=PC-
CLOONIX_CONFIG=/usr/local/bin/cloonix/cloonix_config

#-------------------------------------------------------
# Set Cloonix Bulk directory from cloonix_config file
#-------------------------------------------------------
CLOONIX_BULK=$(grep CLOONIX_BULK "$CLOONIX_CONFIG" | awk -F= '{print $2}')
BULK=$(eval echo $CLOONIX_BULK)

#-------------------------------------------------------
# Start cloonix. Comment-out these two lines if we will
# already have Cloonix started before running this 
# script 
#-------------------------------------------------------

echo "Starting Cloonix"
cloonix_net ${NET}
cloonix_gui ${NET}
echo "Cloonix started"

#-------------------------------------------------------
# Start KVMs and define interfaces
# The sleep timers should be set to a value that works
# best on your computer. Since I am running this
# on a 5-year-old laptop, I set the sleep timers to a 
# higher value of 15 seconds 
#-------------------------------------------------------

echo "Building topology"

echo "adding NAT"
cloonix_cli ${NET} add nat nat01
cloonix_cli ${NET} add lan nat01 0 nat_lan


# Router-1
echo "Adding ${ROUTER_NAME}1"
cloonix_cli ${NET} add kvm ${ROUTER_NAME}1 1000 1 4 ${BULK}/${DIST}.qcow2 --balloon &
sleep 15
cloonix_cli ${NET} add lan ${ROUTER_NAME}1 0 lan06
cloonix_cli ${NET} add lan ${ROUTER_NAME}1 1 lan01
cloonix_cli ${NET} add lan ${ROUTER_NAME}1 2 lan03
cloonix_cli ${NET} add lan ${ROUTER_NAME}1 3 nat_lan

# Router-2
echo "Adding ${ROUTER_NAME}2"
cloonix_cli ${NET} add kvm ${ROUTER_NAME}2 1000 1 4 ${BULK}/${DIST}.qcow2 --balloon &
sleep 15
cloonix_cli ${NET} add lan ${ROUTER_NAME}2 0 lan04
cloonix_cli ${NET} add lan ${ROUTER_NAME}2 1 lan01
cloonix_cli ${NET} add lan ${ROUTER_NAME}2 2 lan02
cloonix_cli ${NET} add lan ${ROUTER_NAME}2 3 nat_lan

# Router-3
echo "Adding ${ROUTER_NAME}3"
cloonix_cli ${NET} add kvm ${ROUTER_NAME}3 1000 1 4 ${BULK}/${DIST}.qcow2 --balloon &
sleep 15
cloonix_cli ${NET} add lan ${ROUTER_NAME}3 0 lan05
cloonix_cli ${NET} add lan ${ROUTER_NAME}3 1 lan03
cloonix_cli ${NET} add lan ${ROUTER_NAME}3 2 lan02
cloonix_cli ${NET} add lan ${ROUTER_NAME}3 3 nat_lan

# PC-1
echo "Adding ${PC_NAME}1"
cloonix_cli ${NET} add kvm ${PC_NAME}1 1000 1 2 ${BULK}/${DIST}.qcow2 --balloon &
sleep 15
cloonix_cli ${NET} add lan ${PC_NAME}1 0 lan06
cloonix_cli ${NET} add lan ${PC_NAME}1 1 nat_lan

# PC-2
echo "Adding ${PC_NAME}2"
cloonix_cli ${NET} add kvm ${PC_NAME}2 1000 1 2 ${BULK}/${DIST}.qcow2 --balloon &
sleep 15
cloonix_cli ${NET} add lan ${PC_NAME}2 0 lan04
cloonix_cli ${NET} add lan ${PC_NAME}2 1 nat_lan

# PC-3
echo "Adding ${PC_NAME}3"
cloonix_cli ${NET} add kvm ${PC_NAME}3 1000 1 2 ${BULK}/${DIST}.qcow2 --balloon &
sleep 15
cloonix_cli ${NET} add lan ${PC_NAME}3 0 lan05
cloonix_cli ${NET} add lan ${PC_NAME}3 1 nat_lan

#-------------------------------------------------------
# Stop motion
#-------------------------------------------------------
cloonix_cli ${NET} cnf lay stop
sleep 1

#-------------------------------------------------------
# Set size of Cloonix Graph window
#-------------------------------------------------------
cloonix_cli ${NET} cnf lay width_height 574 489
sleep 1
cloonix_cli ${NET} cnf lay scale 216 212 574 489
sleep 1

#-------------------------------------------------------
# Move nodes to their final places in the graph
#-------------------------------------------------------
echo "Moving nodes in topology"
cloonix_cli ${NET} cnf lay abs_xy_kvm ${PC_NAME}3 218 402
cloonix_cli ${NET} cnf lay abs_xy_eth ${PC_NAME}3 0 0
cloonix_cli ${NET} cnf lay abs_xy_eth ${PC_NAME}3 1 239
cloonix_cli ${NET} cnf lay abs_xy_kvm ${PC_NAME}2 425 45
cloonix_cli ${NET} cnf lay abs_xy_eth ${PC_NAME}2 0 194
cloonix_cli ${NET} cnf lay abs_xy_eth ${PC_NAME}2 1 254
cloonix_cli ${NET} cnf lay abs_xy_kvm ${PC_NAME}1 1 49
cloonix_cli ${NET} cnf lay abs_xy_eth ${PC_NAME}1 0 94
cloonix_cli ${NET} cnf lay abs_xy_eth ${PC_NAME}1 1 37
cloonix_cli ${NET} cnf lay abs_xy_kvm ${ROUTER_NAME}3 221 253
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}3 0 154
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}3 1 264
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}3 2 15
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}3 3 301
cloonix_cli ${NET} cnf lay abs_xy_kvm ${ROUTER_NAME}2 291 118
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}2 0 48
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}2 1 227
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}2 2 160
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}2 3 298
cloonix_cli ${NET} cnf lay abs_xy_kvm ${ROUTER_NAME}1 131 128
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}1 0 260
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}1 1 57
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}1 2 117
cloonix_cli ${NET} cnf lay abs_xy_eth ${ROUTER_NAME}1 3 5
cloonix_cli ${NET} cnf lay abs_xy_lan lan05 216 325
cloonix_cli ${NET} cnf lay abs_xy_lan lan02 262 186
cloonix_cli ${NET} cnf lay abs_xy_lan lan04 359 84
cloonix_cli ${NET} cnf lay abs_xy_lan lan03 172 191
cloonix_cli ${NET} cnf lay abs_xy_lan lan01 208 117
cloonix_cli ${NET} cnf lay abs_xy_lan lan06 69 85
cloonix_cli ${NET} cnf lay abs_xy_sat nat01 184 -2
cloonix_cli ${NET} cnf lay abs_xy_lan nat_lan 184 -10
sleep 5

#-------------------------------------------------------
# Hide the nat01 and connected 
# interfaces
#-------------------------------------------------------
cloonix_cli ${NET} cnf lay hide_lan nat_lan 1
cloonix_cli ${NET} cnf lay hide_sat nat01 1
cloonix_cli ${NET} cnf lay hide_eth ${ROUTER_NAME}1 3 1
cloonix_cli ${NET} cnf lay hide_eth ${ROUTER_NAME}2 3 1
cloonix_cli ${NET} cnf lay hide_eth ${ROUTER_NAME}3 3 1
cloonix_cli ${NET} cnf lay hide_eth ${PC_NAME}1 1 1
cloonix_cli ${NET} cnf lay hide_eth ${PC_NAME}2 1 1
cloonix_cli ${NET} cnf lay hide_eth ${PC_NAME}3 1 1

#-------------------------------------------------------
# wait 60 seconds for all VMs to finish starting up
#-------------------------------------------------------
echo "Waiting 60 seconds for all nodes to start"
sleep 60
echo "Topology is ready"

#-------------------------------------------------------
# Install quagga on the three routers
# Each router is already connected to nat_lan
# on its highest-numbered interface, eth3
#-------------------------------------------------------

echo "Installing quagga software"

cloonix_ssh ${NET} ${ROUTER_NAME}1 "dhclient eth3"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "apt-get update"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "apt-get --allow-unauthenticated --assume-yes install quagga"

cloonix_ssh ${NET} ${ROUTER_NAME}2 "dhclient eth3"
cloonix_ssh ${NET} ${ROUTER_NAME}2 "apt-get update"
cloonix_ssh ${NET} ${ROUTER_NAME}2 "apt-get --allow-unauthenticated --assume-yes install quagga"

cloonix_ssh ${NET} ${ROUTER_NAME}3 "dhclient eth3"
cloonix_ssh ${NET} ${ROUTER_NAME}3 "apt-get update"
cloonix_ssh ${NET} ${ROUTER_NAME}3 "apt-get --allow-unauthenticated --assume-yes install quagga"

sleep 30
echo "Completed software install"

#-------------------------------------------------------
# Write quagga config files on Router-1.
#
# One method is to use cloonix_ssh to execute echo or
# sed commands one at a time to build configuration files
# line-by-line.
#-------------------------------------------------------

echo "starting ${ROUTER_NAME}1 configuration"

# Router-1 ospfd.conf file
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth0' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth1' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth2' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth3' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface lo' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'router ospf' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' passive-interface eth0' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' network 192.168.1.0/24 area 0.0.0.0' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' network 192.168.100.0/24 area 0.0.0.0' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' network 192.168.101.0/24 area 0.0.0.0' >>/etc/quagga/ospfd.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'line vty' >>/etc/quagga/ospfd.conf"

# Router-1 zebra.conf file
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth0' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ip address 192.168.1.254/24' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth1' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ip address 192.168.100.1/24' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth2' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ip address 192.168.101.2/24' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface eth3' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo ' ipv6 nd suppress-ra' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'interface lo' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'ip forwarding' >>/etc/quagga/zebra.conf"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'line vty' >>/etc/quagga/zebra.conf"

# modify /etc/quagga/daemons file
cloonix_ssh ${NET} ${ROUTER_NAME}1 "sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons"
cloonix_ssh ${NET} ${ROUTER_NAME}1 "sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons"

# modify /etc/environment file
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'VTYSH_PAGER=more' >>/etc/environment"
 
# modify /etc/bash.bashrc file
cloonix_ssh ${NET} ${ROUTER_NAME}1 "echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc"


echo "completed ${ROUTER_NAME}1 configuration"

#-----------------------------------------------------
# Write quagga config files on ${ROUTER_NAME}2
#
# Another method is to create temporary files 
# containing the required configuration and then copy 
# them to ${ROUTER_NAME}2 using the cloonix_scp command.
# 
# This results in a script file that is easier for 
# humans to read.
#-----------------------------------------------------

echo "starting ${ROUTER_NAME}2 configuration"

mkdir /tmp/${ROUTER_NAME}2

# Router-2 ospfd.conf file
cat > /tmp/${ROUTER_NAME}2/ospfd.conf << EOF
interface eth0
!
interface eth1
!
interface eth2
!
interface lo
!
router ospf
 passive-interface eth0
 network 192.168.2.0/24 area 0.0.0.0
 network 192.168.100.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
!
line vty
!
EOF

# Router-2 zebra.conf file
cat > /tmp/${ROUTER_NAME}2/zebra.conf << EOF
interface eth0
 ip address 192.168.2.254/24
 ipv6 nd suppress-ra
!
interface eth1
 ip address 192.168.100.2/24
 ipv6 nd suppress-ra
!
interface eth2
 ip address 192.168.102.2/24
 ipv6 nd suppress-ra
!
interface lo
!
ip forwarding
!
line vty
!
EOF

# move files to ${ROUTER_NAME}2
cloonix_scp ${NET} -r /tmp/${ROUTER_NAME}2/* ${ROUTER_NAME}2:/etc/quagga

# modify /etc/quagga/daemons file
cloonix_ssh ${NET} ${ROUTER_NAME}2 "sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons"
cloonix_ssh ${NET} ${ROUTER_NAME}2 "sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons"

# modify /etc/environment file
cloonix_ssh ${NET} ${ROUTER_NAME}2 "echo 'VTYSH_PAGER=more' >>/etc/environment"
 
# modify /etc/bash.bashrc file
cloonix_ssh ${NET} ${ROUTER_NAME}2 "echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc"

echo "completed ${ROUTER_NAME}2 configuration"

#-------------------------------------------------------
# Write quagga config files on ${ROUTER_NAME}3
#
# Create a temporary file and then copy it
# to ${ROUTER_NAME}3.
#-------------------------------------------------------

echo "starting ${ROUTER_NAME}3 configuration"

mkdir /tmp/${ROUTER_NAME}3

# Router-3 ospfd.conf file
cat > /tmp/${ROUTER_NAME}3/ospfd.conf << EOF
interface eth0
!
interface eth1
!
interface eth2
!
interface lo
!
router ospf
 passive-interface eth0
 network 192.168.3.0/24 area 0.0.0.0
 network 192.168.101.0/24 area 0.0.0.0
 network 192.168.102.0/24 area 0.0.0.0
!
line vty
!
EOF

# Router-3 zebra.conf file
cat > /tmp/${ROUTER_NAME}3/zebra.conf << EOF
interface eth0
 ip address 192.168.3.254/24
 ipv6 nd suppress-ra
!
interface eth1
 ip address 192.168.101.1/24
 ipv6 nd suppress-ra
!
interface eth2
 ip address 192.168.102.1/24
 ipv6 nd suppress-ra
!
interface lo
!
ip forwarding
!
line vty
!
EOF

# move files to ${ROUTER_NAME}3
cloonix_scp ${NET} -r /tmp/${ROUTER_NAME}3/* ${ROUTER_NAME}3:/etc/quagga

# modify /etc/quagga/daemons file
cloonix_ssh ${NET} ${ROUTER_NAME}3 "sed -i s'/zebra=no/zebra=yes/' /etc/quagga/daemons"
cloonix_ssh ${NET} ${ROUTER_NAME}3 "sed -i s'/ospfd=no/ospfd=yes/' /etc/quagga/daemons"

# modify /etc/environment file
cloonix_ssh ${NET} ${ROUTER_NAME}3 "echo 'VTYSH_PAGER=more' >>/etc/environment"
 
# modify /etc/bash.bashrc file
cloonix_ssh ${NET} ${ROUTER_NAME}3 "echo 'export VTYSH_PAGER=more' >>/etc/bash.bashrc"

echo "completed ${ROUTER_NAME}3 configuration"

#-------------------------------------------------------
# Set up interfaces and default route on ${PC_NAME}1
#-------------------------------------------------------

echo "starting ${PC_NAME}1 configuration"

# interfaces file
cloonix_ssh ${NET} ${PC_NAME}1 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}1 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}1 "echo '   address 192.168.1.1' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}1 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

# rc.local file
cloonix_ssh ${NET} ${PC_NAME}1 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.1.254 dev eth0' /etc/rc.local"

echo "completed ${PC_NAME}1 configuration"

#-------------------------------------------------------
# Set up interfaces and default route on ${PC_NAME}2
#-------------------------------------------------------

echo "starting ${PC_NAME}2 configuration"

# interfaces file
cloonix_ssh ${NET} ${PC_NAME}2 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}2 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}2 "echo '   address 192.168.2.1' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}2 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

# rc.local file
cloonix_ssh ${NET} ${PC_NAME}2 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.2.254 dev eth0' /etc/rc.local"

echo "completed ${PC_NAME}2 configuration"

#-------------------------------------------------------
# Set up interfaces and default route on ${PC_NAME}3
#-------------------------------------------------------

echo "starting ${PC_NAME}3 configuration"

# interfaces file
cloonix_ssh ${NET} ${PC_NAME}3 "echo 'auto eth0' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}3 "echo 'iface eth0 inet static' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}3 "echo '   address 192.168.3.1' >>/etc/network/interfaces"
cloonix_ssh ${NET} ${PC_NAME}3 "echo '   netmask 255.255.255.0' >>/etc/network/interfaces"

# rc.local file
cloonix_ssh ${NET} ${PC_NAME}3 "sed -i '/exit 0/i ip route add 192.168.0.0/16 via 192.168.3.254 dev eth0' /etc/rc.local"

echo "completed ${PC_NAME}3 configuration"

#-------------------------------------------------------
# Reboot all nodes to enable all changes
#-------------------------------------------------------

echo "rebooting nodes"
cloonix_ssh ${NET} ${PC_NAME}1 "reboot"
echo "${PC_NAME}1"
sleep 5
cloonix_ssh ${NET} ${PC_NAME}2 "reboot"
echo "${PC_NAME}2"
sleep 5
cloonix_ssh ${NET} ${PC_NAME}3 "reboot"
echo "${PC_NAME}3"
sleep 5
cloonix_ssh ${NET} ${ROUTER_NAME}1 "reboot"
echo "${ROUTER_NAME}1"
sleep 5
cloonix_ssh ${NET} ${ROUTER_NAME}2 "reboot"
echo "${ROUTER_NAME}2"
sleep 5
cloonix_ssh ${NET} ${ROUTER_NAME}3 "reboot"
echo "${ROUTER_NAME}3"
sleep 5
echo "Wait until nodes complete rebooting, then start your testing."

#-------------------------------------------------------
# Setup is now complete
#------------------------------------------------------- 

  1. From Cloonix documentation at http://cloonix.net/doc_stored/build-37-02/singlehtml/index.html 

  2. This is changing. Google recently announced nested virtualization support so users can run a hypervisor on a VM — KVM in KVM for example. Oracle offers nested virtualization support via their Ravello service. 

  3. I am using example IP addresses defined for documentation in RFC5737 https://tools.ietf.org/html/rfc5737 

Enable nested virtualization on Google Cloud


Google Cloud Platform introduced nested virtualization support in September 2017. Nested virtualization is especially interesting to network emulation research since it allows users to run unmodified versions of popular network emulation tools like GNS3, EVE-NG, and Cloonix on a cloud instance.

Google Cloud supports nested virtualization using the KVM hypervisor on Linux instances. It does not support other hypervisors like VMware ESX or Xen, and it does not support nested virtualization for Windows instances.

In this post, I show how I set up nested virtualization in Google Cloud and I test the performance of nested virtual machines running on a Google Cloud VM instance.

Summary

I assume that you are a new user of Google Cloud. If you are already experienced with Google Cloud, you may skip to the nested-virtualization section and then to the test results.

In this post, I show how to create a Google Cloud account. I suggest you take advantage of the generous free trial offered by Google. Then, install the gcloud command-line tool on your PC. Initialize the gcloud tool configuration to set up networking between your PC and Google Cloud and to set up your first project. You may need to define more than one gcloud configuration if you use your PC in multiple networks.

Next, create a new VM image that has a license to use the new nested-virtualization feature. Use this image to launch new VMs. Be sure to set up your project SSH keys correctly so you can access new VMs using standard SSH clients.

Finally, I installed and ran a series of benchmark tests on baseline hardware, on Google Cloud VM instances, and on nested VMs running on Google Cloud instances.

Create Google Cloud account

Sign up for a free trial on Google Cloud. Google offers a generous three-hundred-dollar credit that is valid for one year. You pay nothing until either you have consumed $300 worth of services or one year has passed. I have been hacking on Google Cloud for one month, using relatively large VMs, and I have consumed only 25% of my credits.

If you already use Google services like G-mail, then you already have a Google account and adding Google Cloud to your account is easy. Just sign in with your existing Google credentials. Then set up your billing information.

Install gcloud command-line tool

At the time I am writing this post, nested virtualization is still a beta feature in Google Cloud so it cannot be enabled from the Google Cloud Console web app. We must use the gcloud Command Line Interface or the API to create a custom image that supports nested virtualization. That custom image can then be used to start instances that support nested virtualization. Also, gcloud commands provide a faster way to perform most Google Cloud operations, once you learn the CLI.

Install the gcloud command-line tool on your PC. Google has gcloud clients for Windows, Mac, and Linux. In this example, I am installing the gcloud command-line tool in Ubuntu Linux.

To install gcloud on my Linux PC, I executed the following commands in a Terminal window:

$ sudo apt install curl
$ export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
$ echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-get update && sudo apt-get install google-cloud-sdk

Set up your Google Cloud project

Now, initialize the gcloud command to set up networking to reach Google from your local network — gcloud will help you configure proxy servers, for example — and to set up your Google Cloud project. To initialize the gcloud configuration, run the command:

$ gcloud init

Follow the prompts to set up your network connection (if needed) and your Google account credentials, create a new project (in my case, I named my project net-sims), set the default project, and set the default data center region and zone. Be sure to choose a zone that supports Haswell processors or later by default.

Run gcloud --help to see the Cloud Platform services you can interact with using gcloud. And run gcloud help <<COMMAND>> to get help on any gcloud command.

Multiple configurations

The gcloud command supports multiple configurations. You may need multiple configurations to support different networking setups. For example, you may need different network configurations if you use your PC on multiple networks, or you may want to set different defaults for various situations. Create more than one configuration by running gcloud init again and choosing Create a new configuration at the first prompt.

You can see all your gcloud configurations by running the command:

$ gcloud config configurations list

You can switch to a different configuration with the command:

$ gcloud config configurations activate <<CONFIG-NAME>>

You can view the settings of any configuration by running the command:

$ gcloud config list --configuration <<CONFIG-NAME>>

Create nested-virtualization-enabled image

The Google Cloud documentation shows how to create an image from a VM instance’s disk image. This allows you to add nested virtualization support to an existing VM but, when you want to create a new VM that supports nested virtualization, that procedure seems like one too many steps to me.

I prefer to build nested-virtualization images directly from the base images available in the Google Cloud image library.

To build a custom image based on an available base image, first find the image project and image family of the base image you wish to use. List all available images using the command:

$ gcloud compute images list

This command prints a table of images to choose from. I show a few examples from the list below:

NAME                          PROJECT          FAMILY
centos-7-v20171213            centos-cloud     centos-7
ubuntu-1604-xenial-v20171212  ubuntu-os-cloud  ubuntu-1604-lts
ubuntu-1710-artful-v20171213  ubuntu-os-cloud  ubuntu-1710

Users enable nested virtualization by building a custom image with an enable-vmx license key added. The following command creates a new image named nested-virt using the latest Ubuntu 16.04 image in the library and the enable-vmx license key:

$ gcloud compute images create nested-virt \
  --source-image-project=ubuntu-os-cloud \
  --source-image-family=ubuntu-1604-lts \
  --licenses="https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"

As you can see above, I specify the image family in the command to choose the latest available Ubuntu 16.04 LTS image. Because the base image is stored in the ubuntu-os-cloud project rather than in my default project, I must also specify the image's project so the gcloud command can find it.

Now we have a new custom image saved in our default project. We will use this image to create VM instances that support nested virtualization.

Create virtual machine

Now create a new VM instance using the nested-virt image you created in the previous step.

Add SSH keys

First, create a project-level SSH key pair. Google Cloud will automatically add the public key to every VM you create in the project.

It is easier to add SSH keys using the Google Cloud Console web app. Access the web app at the following URL: https://console.cloud.google.com/home/dashboard. Then, generate an SSH key pair on your local PC. List the public key and copy it to your clipboard.

Click on the menu icon on the top right-hand corner of the web page. Click on Compute Engine, then click on Metadata. In the window that appears, click on SSH Keys.

You may see some SSH keys that were generated by Google. These are used when you use the built-in SSH function in the web app to access a VM instance. They are not useful when using standard SSH clients. You need to add your own public key.

Click on the Edit button. Then click on Add Item at the bottom of the screen. Paste your public key into the text box that appears. Then, edit the key so that the text at the end is exactly the same as your Google Cloud userid (in this example, I replaced text like brian@t420 with my Google Cloud userid, which is brian_not_realname). The SSH public key text should look like:

ssh-rsa AAAAB3NzaC1yc2AQEAhNRXc1RClJnotrealkeytpYuEk/FuGBLRaP29AI4BKx+notrealkeyM30AayFM9G0iN5HhfRwxUcs7hqxQKnotrealkeyTVJo0Q/8fpPy3PC3x3B+JznotrealkeyT9vTeJdedGcs7Zc673aUARCDkhijncJjufz1rFtyPlwNQd/h7NUFKjPnotrealkeyLpCrgbWZAIzrQDB8S7zZ2KAaHVM0swQJZYlGYbnotrealkyAZmeEx+dmBeJsw/CwlUWwM== brian_not_realname

Then click Save.

An important note about the format of the public SSH key: when using the Google Cloud web app, you need to modify the SSH public key you paste into the web app so that it ends with your Google Cloud username. As you can see above, I changed the extra data at the end of the key to my Google Cloud userid, which is "brian_not_realname". The SSH public key suffix must be your Google Cloud username.
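If you are generating a fresh key pair for Google Cloud, you can avoid the manual edit entirely by setting the key comment to your Google Cloud username at creation time with ssh-keygen's -C option. A minimal sketch; the username and file path below are examples, not values you must use:

```shell
# Generate a key pair whose comment already matches the Google Cloud
# username, so no manual editing of the pasted public key is needed.
ssh-keygen -t rsa -b 2048 -N "" -C "brian_not_realname" -f /tmp/google_demo_key

# The last field of the public key is the comment:
awk '{print $NF}' /tmp/google_demo_key.pub   # → brian_not_realname
```

You would then paste the contents of the .pub file into the Console unchanged.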

Create VM instance using Google Console

Since we are already using the Console at this point, we will create a new VM using the Console.

Click on VM instances in the Compute Engine sidebar menu. Click on Create Instance near the top of the window. Fill in the instance name, use the default zone (unless you have a reason to change it), pick the machine size, then select the boot disk image. Click the Change button next to the Boot disk prompt.

In the window that pops up, click on the Custom Images tab at the top. Find the nested-virt image you created in the previous step and pick it using the radio button next to it. Then select the type (SSD or HD) and the size that best supports your application. I usually choose SSD and 40 GB in size. Click the Select button.

Leave all other settings at default values, unless you have a reason to change them.

Note that, before starting the VM, you may click on the Equivalent REST or command line link to see the equivalent gcloud command that would perform the same action. It’s a good idea to copy and paste this command into a text file and use it as a basis for crafting gcloud commands to start VMs in the future.

Then, click on the Create button. Within one minute, you will have a new VM running that supports nested virtualization.

You will see the VM on the VM instances page. From this page you can copy the machine’s external IP address for use in accessing it via an SSH client.

Create VM instance using gcloud commands

I find the gcloud tool to be a faster way to build multiple virtual machines. It helps to have complex commands saved in a text file so you can edit them and re-use them to build more virtual machines.

When we clicked on the Equivalent REST or command line link above, we saw a long, complex gcloud command. We do not actually need most of those parameters since they merely restate the project's default settings. Assuming we will use the default project settings, we may reduce the command to something like:

$ gcloud compute instances create "nest-1" --zone "us-east1-b" --machine-type "n1-standard-8"  --min-cpu-platform "Intel Haswell" --image "nested-virt" --boot-disk-size "40" --boot-disk-type "pd-ssd" --boot-disk-device-name "nest-1"

Note that we specified the minimum CPU platform in the new version of the command. Google Cloud requires a Haswell or later processor to run instances that support nested virtualization. Users may choose a region that supports these more modern processors by default or they may select a minimum processor type when starting an instance using the gcloud CLI or the API. If you are using a zone in which the default instance type uses a platform older than Haswell, you will need to start VMs using gcloud commands so you can specify the minimum platform. That is one case where you will need to use gcloud commands to start a VM instance instead of the Console.

When the VM starts, gcloud will output its IP addresses on the screen. You may use the external address to connect to the VM.

NAME    ZONE        MACHINE_TYPE   INTERNAL_IP  EXTERNAL_IP   STATUS
nest-1  us-east1-b  n1-standard-8  10.142.0.2   203.0.113.51  RUNNING

You may get information about the instances in your project at any time by running the gcloud command:

$ gcloud compute instances list

Connect to virtual machine

Connect to the virtual machine using an SSH client. I’ve provided examples of how to configure SSH clients for all the major operating systems in my previous post about building a virtualization server.

Alternatively, you may access the VM instance using the SSH function built into the Google Cloud Console. This is a convenient way to access VMs to troubleshoot access problems. To access the VM instance, just click on the SSH button next to it in the Console.

But, you will get more functionality, including X Windows forwarding, by using a standard SSH client on your computer.

In my case, I will access the instance using OpenSSH from my Linux PC (note that I named my private key google):

$ ssh -X -i ~/.ssh/google brian_not_realname@203.0.113.51

Test nested virtualization

Google stated that L2 virtual machines running nested on a Google Cloud VM instance should incur less than a ten percent performance penalty. My testing shows this to be true. In fact, the performance penalty for CPU tasks is very low. I did not test the I/O performance penalty. Network emulation performance is mostly a CPU-bound problem.

After connecting to the Google Cloud VM instance, check that nested virtualization is enabled:

$ grep -cw vmx /proc/cpuinfo

You should see a non-zero result that corresponds to the number of vCPUs provided by the instance. In this example: 8.
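To make the check easier to script, the flag count can be verified together with the /dev/kvm device node, which must exist before KVM guests can start. A minimal sketch of my own, not an official Google check:

```shell
#!/bin/sh
# Count vCPUs that advertise the vmx flag; a zero count means the
# instance was not started from a nested-virtualization-enabled image.
vmx_count=$(grep -cw vmx /proc/cpuinfo || true)
echo "vmx-capable vCPUs: ${vmx_count}"

# The kvm device node must also exist before nested guests can start.
if [ -e /dev/kvm ]; then
    echo "/dev/kvm present: ready to run nested VMs"
else
    echo "/dev/kvm missing: check the image license and kvm modules"
fi
```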

Next, I used the Cloonix network emulator to create one VM running Debian Linux and connected it to a NAT interface so it could connect to the Internet.

Then, I installed benchmark software on the L1 Google VM and the L2 Debian VM and ran the included benchmark tests.

$ sudo apt-get update
$ sudo apt-get install hardinfo
$ hardinfo &

I present my test results below.

Test results

I tested the same benchmarks on a hardware server and on different CPU platforms in Google Cloud. I saw that the differences in performance were mostly due to clock rate. The table below lists my benchmark results. All results are in seconds. Lower is better.

VM Type                                                      Clock (GHz)  Blowfish  Fibonacci  N-Queens  FPU FFT  FBENCH
Intel XEON Skylake 4-core CPU (HW baseline)                  3.7          1.05      1.07       0.39      0.63     2.58
L1 VM with 8 vCPU on Intel XEON Skylake 4-core CPU           3.7          1.1       1.04       0.39      0.58     2.9
L1 Google n1-standard-8 Skylake VM                           2.0          1.34      1.66       0.58      1.22     6.91
L2 nested VM with 8 vCPU on Google n1-standard-8 Skylake VM  2.0          1.55      1.57       0.61      1.04     8.02
L1 Google n1-standard-8 Haswell VM                           2.3          1.22      1.62       0.57      1.01     5.47
L2 nested VM with 8 vCPU on Google n1-standard-8 Haswell VM  2.3          1.43      1.6        0.58      0.94     7.87

I then normalized the results for clock rate. I multiplied values from the table above by the ratio of CPU clock speeds, using the 3.7 GHz XEON as the baseline. In the table below, lower values are better.

Normalized Performance                                       Clock ratio  Blowfish  Fibonacci  N-Queens  FPU FFT  FBENCH
Intel XEON Skylake 4-core CPU (HW baseline)                  1.00         1.05      1.07       0.39      0.63     2.58
L1 VM with 8 vCPU on Intel XEON Skylake 4-core CPU           1.00         1.1       1.04       0.39      0.58     2.9
L1 Google n1-standard-8 Skylake VM                           0.54         0.72      0.90       0.31      0.66     3.74
L2 nested VM with 8 vCPU on Google n1-standard-8 Skylake VM  0.54         0.84      0.85       0.33      0.56     4.34
L1 Google n1-standard-8 Haswell VM                           0.62         0.76      1.01       0.35      0.63     3.40
L2 nested VM with 8 vCPU on Google n1-standard-8 Haswell VM  0.62         0.89      0.99       0.36      0.58     4.89
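The normalization is a single multiplication by the clock ratio. A small shell sketch (the normalize helper is my own, not part of any benchmark tool) reproduces the Skylake and Haswell Blowfish values:

```shell
# Normalize a benchmark time to the 3.7 GHz hardware baseline clock.
# Usage: normalize <time-in-seconds> <clock-GHz>
normalize() {
    awk -v t="$1" -v clk="$2" 'BEGIN { printf "%.2f\n", t * (clk / 3.7) }'
}

normalize 1.34 2.0   # Blowfish on the 2.0 GHz Skylake instance → 0.72
normalize 1.22 2.3   # Blowfish on the 2.3 GHz Haswell instance → 0.76
```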

I charted the normalized results. In the figure below, I see that the CPU performance of L2 virtual machines is very close to the performance of the L1 instance on Google Cloud. In fact, when normalized for clock rate, Google Cloud virtual machines outperform the local hardware (except in the FBENCH test).

Conclusion

I have shown how to enable nested virtualization on a Google Cloud VM instance. I am confident that it is possible to run complex network emulation scenarios using popular network emulation tools like GNS3, EVE-NG, and Cloonix — which all use KVM virtual machines to build network nodes in their emulations. And I am confident that the performance of these emulations on cloud virtual machines will be very close to the performance that would be experienced if running them on local hardware.

At this point, users should be able to install their favorite network emulator just like they would if they were using local hardware. Then they can access their tools either using Terminal, X-windows, a web interface, or the client-server user interface models provided by GNS3 and Cloonix.

UPDATE: EVE-NG does not install in Google Cloud. EVE-NG runs scripts that change the interface names and modify the VM's Linux kernel, and at some point those changes become incompatible with Google Cloud's VM startup scripts. I need to spend some time to investigate this. Cloonix works well on Google Cloud. I have not yet tried GNS3 on Google Cloud.

Network Labs Using Nested Virtualization in the Cloud


Many open-source network simulation and emulation tools use full virtualization technologies like VMware, QEMU/KVM, or VirtualBox. These technologies require hardware support for virtualization such as Intel's VT-x and AMD's AMD-V. To gain direct access to this hardware support, researchers usually ran network emulation test beds on their own PCs or servers, and so could not take advantage of the inexpensive and flexible computing services offered by cloud providers like Amazon EC2, Google Compute Engine, or Microsoft Azure.

Creative Commons copyright: From http://d203algebra.wikispaces.com/Exponential+Functions-Target+D-Modeling+Data-Investigations

By August 2017, most of the major cloud service providers had announced support for nested virtualization. In the cloud context, nested virtualization is an advanced feature aimed at enterprises, but it is also very useful for building network emulation test beds. I've written about nested virtualization for servers before but, until recently, I was limited to running nested virtual machines on my own PC. Now that the major cloud providers support nested virtualization, I can build more complex network emulation scenarios using cloud servers.

This post will discuss the cloud service providers that support nested virtualization and how this feature supports open source networking simulation and emulation in the cloud.

Cloud service providers support for nested virtualization

The cloud service providers I investigated when writing this post were Amazon EC2, Oracle Cloud IaaS, Google Compute Engine, and Microsoft Azure IaaS. I show the results of my survey in the table below. In every case where a cloud provider supports nested virtualization for Linux virtual machines, I used a free trial account to test how it works. In all supported cases, it worked well.

Cloud provider         Nested virtualization  Level of support for Linux VMs  Free trial period  Free trial limits
Amazon EC2             No                     N/A                             1 year             8,760 CPU-hours
Oracle Cloud           Yes                    Full support                    30 days            $300 worth of services, 8 vCPU
Google Compute Engine  Yes                    In beta                         1 year             $300 worth of services, 8 vCPU
Microsoft Azure IaaS   Yes                    Unofficial, but it works        30 days            $250 worth of services, 4 vCPU

Amazon EC2 does not support nested virtualization in its cloud instances.

Oracle Cloud offers very robust support for nested virtualization. It also offers advanced networking features that make it easier to build complex network emulation scenarios. They even have features that support virtual labs for networking training and testing. Oracle offers a one-month free trial.

Google Cloud offers nested virtualization as a beta feature. You must execute an extra step to get a Google Cloud image that supports nested virtualization. Google Cloud also offers a very generous free trial period that lasts one year.

Microsoft Azure officially supports nested virtualization for cloud instances running Windows and unofficially supports nested virtualization for Linux instances. Azure offers a free trial period. I am using Microsoft Azure for some projects at my workplace so I am building more skills with Azure than with other providers. I’ll probably spend some more time discussing Azure in the future.

Evolution of network emulation

Until a few years ago, most networking was performed by dedicated hardware such as switches and routers. The networking hardware usually contained proprietary silicon that provided differentiating features and its software was tightly integrated with the hardware. It was expensive to build a test lab with this hardware and also to keep it updated.

Over the past decade, users learned how to run the software that comes bundled with networking hardware on low-cost servers or PCs. Skilled users could build test networks using emulation technologies such as Dynamips or QEMU, and virtualization technologies such as VirtualBox or VMware. Network emulation tools like GNS3, EVE-NG, Cloonix, and others simplified the setup and configuration of these virtual test networks. Usually, test lab networks would consist of multiple virtual nodes running on a single, powerful server but multiple servers could be connected together to build larger test networks.

As networks evolved, vendors started providing networking software that could run on standard servers to provide virtual network functions (VNF) and started offering software-defined networking (SDN) solutions. These new products are usually designed to run in virtual machines. Modern networks may consist of a combination of dedicated hardware and also standard servers running network functions.

Researchers are required to emulate larger and more complex networks as they study the operation of new networking technologies.

Network emulation in the Cloud

Researchers often cannot respond fast enough to new and changing technologies because they do not have access to powerful servers, or they may have only a limited number of servers. Individual researchers, like me, may find it difficult or too expensive to maintain power-hungry servers, especially if they are not using them all the time.

When faced with problems like these, most organizations consider using cloud providers who can offer large servers on demand, rented per hour or per minute. However, until recently, cloud providers could not support complex network emulation labs.

It is not so simple to run network emulation labs in the cloud. Cloud providers usually require specific drivers to be installed in disk images that run on their hypervisors. We usually cannot modify vendors' switch or router software images, and we may not be able to modify vendors' VNF images, so we cannot install the required drivers. In other cases, networking software may support only a specific hypervisor, which may not be the same hypervisor used by the cloud provider, and cloud service providers do not allow users to modify their servers or hypervisors.

Also, the network capabilities provided to cloud virtual machines may not support the types of network traffic that we would need to run between virtual nodes in a network emulation lab. For example, cloud service providers usually block multicast traffic.

Nested virtualization in the Cloud

In 2017, the major cloud service providers announced support for nested virtualization. A cloud instance that supports nested virtualization exposes the hardware acceleration features available in the base server hardware to the cloud virtual machine we are renting, which allows us to install our own hypervisor on it. We can then run multiple virtual machines “nested” inside the cloud instance.

Nested virtualization enables us to use a cloud instance as if it were a normal virtualization server so we can build virtual network emulation scenarios using a cloud instance instead of a local PC or server. We can run unmodified networking software and VNF images on any hypervisor we install and configure on the cloud instance. We can use any virtual networking technology we wish, such as Open vSwitch or Linux bridging, instead of being forced to use the cloud provider’s virtual networking technology.

Normal virtualization compared to nested virtualization

We can also take advantage of the benefits of running labs in the cloud. We can build very complex virtual networking scenarios on a single cloud instance and save the entire setup on a single disk image. Then we can start a new lab using that image in only a few minutes. When we are not using the lab, we can stop the cloud instance so we incur costs only when we are using the labs we create. We can run more labs by cloning a lab disk image and starting a new virtual machine from that clone.

What about Bare Metal Cloud?

Many cloud providers also offer bare metal, or dedicated, servers. Innovative companies like Packet.net provide bare metal servers and some of the major cloud providers such as Oracle and Amazon also provide bare metal as a service. These products eliminate the need for nested virtualization because users may install their own hypervisor on a bare metal server the same way they would if the server were located in their own data center.

I don’t recommend bare metal servers for network emulation research because normal cloud instances that support nested virtualization better fit the use-case of the individual researcher who needs maximum flexibility at the lowest possible cost.

Most individual researchers and enthusiasts work on network emulation scenarios in their spare time and need to be able to suspend their work — sometimes for a long time — and then return to it when they are available. If researchers de-allocate bare metal instances to avoid costs, they are then required to rebuild a new instance from scratch when they want to continue their research. Cloud instances based on virtual machines can be shut down when not needed and then started again quickly so researchers can continue to work from the point at which they stopped without having to rebuild the lab from scratch.

Bare metal servers continue to cost the user even if they are shut down, because the hardware resources are still allocated to the user. Normal cloud instances stop costing as soon as they are shut down and, if they support nested virtualization, they can perform the same workloads as bare metal servers.

Conclusion

Now that most major cloud providers support nested virtualization, it is possible to run complex network emulation labs using cloud resources. This can greatly increase the capability of independent open-source networking researchers.

Create a nested virtual machine in a Microsoft Azure Linux VM

Microsoft Azure unofficially supports nested virtualization using KVM on Linux virtual machines, which makes it possible to build network emulation scenarios in the cloud using the same technologies you would use if you were using your own PC or a local server.

In this post, I will show you how to set up a Linux virtual machine in Microsoft Azure and then create a nested virtual machine inside the Azure virtual machine. This is a simple example, but you may use the same procedure as a starting point to create more complex network emulation scenarios using nested virtualization.

Prerequisites

To follow this tutorial, you need an Azure account. Microsoft offers a free-trial period that provides up to $300 in credits for up to 30 days. Creating a free trial account is easy: follow the instructions at https://azure.microsoft.com/free.

If you have not used MS Azure before, I recommend the free training offered on their web site. The first course you should take is the beginner-level Azure Administrator course, which demonstrates all the basic topics you will need to understand when managing virtual machines in Azure.

In this tutorial, I will use the Azure CLI to create and manage infrastructure in Azure, instead of using PowerShell or the Azure Portal. I find that the Azure CLI is easier to read than PowerShell. If you are using the GUI provided by Azure Portal, you can still follow along, using the CLI commands shown below as a guide. I use the Azure Cloud Shell to run my CLI commands.

Azure Shell and Azure CLI

The simplest way to start using Azure CLI is to use the web-based Azure Cloud Shell so you don’t need to install Azure CLI on your computer. If you prefer to install Azure CLI on your own PC, follow Microsoft’s documentation.

Log into Azure Cloud Shell by typing https://shell.azure.com into your browser navigation bar. Log in using your Microsoft account ID. In the upper left corner of the shell window, choose the Bash shell. This enables Azure CLI in the shell. It also supports other Linux-based applications like Ansible, Terraform, and Python so the Azure Cloud Shell is a powerful tool.

The Cloud Shell will disable itself after twenty minutes of inactivity but it saves your files in an attached storage account. So when you restart the Cloud Shell, your files and scripts will still be available.

Gather information

To create a virtual machine, you first need to understand a bit about the resources available. You need to tell Azure in which datacenter you want the VM created, so you need to get a list of all available datacenters. Then, you need to know which VM sizes are supported in the datacenter you chose, and which base images are available in the Azure Marketplace. Then you can use this information to build an Azure resource group and an Azure virtual machine.

To list all available datacenters, run the Azure CLI command:

$ az account list-locations

This produces a long list with many columns. To clean up the output, use some extra options with the command, as shown below:

$ az account list-locations \
  --query '[].{Location:displayName,Name:name}' \
  --output table

I use the --query option to choose specific columns of information displayed and use the --output option to format the information as a table. This will output a long list of locations. A sample of the output is shown below:

Location             Name
-------------------  ------------------
East Asia            eastasia
Southeast Asia       southeastasia
Central US           centralus
East US              eastus
East US 2            eastus2
West US              westus
North Central US     northcentralus 

Scan through the list until you find a datacenter you want to use. You may choose a datacenter that is closest to you, for example.

Then, list all the available VM sizes in the datacenter. Azure supports nested virtualization only on Dv3 and Ev3 virtual machine types. So we must check that they are supported in the datacenter we chose. If they are not, choose another datacenter.

To get the list of VM sizes supported in the datacenter, run the following command. In this example, I am using the eastus datacenter.

$ az vm list-sizes \
   --location eastus \
   --query '[].{Name:name,CPU:numberOfCores,Memory:memoryInMb}' \
   --output table | grep _v3

Above, I use the --query option to reduce the amount of information displayed and pipe the output through grep to filter only “_v3” sizes. A sample of the output is shown below.

Standard_D2s_v3             2      8192
Standard_D4s_v3             4     16384
Standard_D8s_v3             8     32768
Standard_D16s_v3           16     65536
Standard_D32s_v3           32    131072
Standard_D2_v3              2      8192
Standard_D4_v3              4     16384

Use either Ev3 or Dv3 machine types. In this example, I will use the Standard_D4s_v3 size, which offers 4 vCPUs and 16GB of memory.

Finally, list the images available from the Azure Marketplace at the datacenter you chose — in this case, eastus. As an example, look for a CentOS 7.5 image:

$ az vm image list \
  --location eastus \
  --offer centos \
  --sku 7.5 \
  --all \
  --query '[].urn' \
  --output table  

As before, I use the --query option to reduce the information displayed. We need the URN of the image. The output is:

Result
---------------------------------
OpenLogic:CentOS:7.5:7.5.20180522
OpenLogic:CentOS:7.5:7.5.20180529

The latest available CentOS 7.5 image in the East US datacenter is OpenLogic:CentOS:7.5:7.5.20180529. You may substitute the version number with the text “latest” when requesting an image, so use the image URN OpenLogic:CentOS:7.5:latest when creating the VM.
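
An image URN follows Azure’s documented Publisher:Offer:Sku:Version convention. As a quick illustration (the layout is the convention, not something shown in the output above), you can take a URN apart with ordinary string methods:

```python
# An Azure image URN has the form Publisher:Offer:Sku:Version
urn = "OpenLogic:CentOS:7.5:7.5.20180529"
publisher, offer, sku, version = urn.split(':')
print(publisher)  # prints OpenLogic
print(version)    # prints 7.5.20180529
```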

Create the virtual machine

To create resources in Azure you must first create a resource group, which is like a folder that will contain your resources. It organizes your infrastructure in Azure and sets the default data center location. In the Cloud Shell window, enter the Azure CLI command:

$ az group create \
  --name testRG \
  --location eastus

When creating the VM, enter the information you previously gathered and also choose the VM name and user name. Use the --storage-sku option to choose the lower-priced Standard_LRS disk type.

$ az vm create \
  --name Test1 \
  --resource-group testRG \
  --size Standard_D4s_v3 \
  --image OpenLogic:CentOS:7.5:latest \
  --generate-ssh-keys \
  --admin-username brian \
  --storage-sku Standard_LRS

Azure will use default values to create the virtual network and subnet, will create a security group, and will generate the SSH keys needed to log in to the VM.

The virtual machine takes a few minutes to start. When it starts, the Azure CLI will output some information about it, as shown below:

{
  "fqdns": "",
  "id": "/subscriptions/abcfake3-8a56-43e0-ae4c-a38faked8dd7c/resourceGroups/testRG/providers/Microsoft.Compute/virtualMachines/Test1",
  "location": "eastus",
  "macAddress": "00-0D-3A-1C-79-CC",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "23.96.23.54",
  "resourceGroup": "testRG",
  "zones": ""
}

Look through the output. You are most interested in the virtual machine’s public IP address. In this case, it is 23.96.23.54. Make a note of the public IP address.
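
If you plan to automate this step, the JSON that az vm create prints is easy to parse. The sketch below uses Python’s json module on a trimmed copy of the sample output above (field names come from that sample):

```python
import json

# Trimmed sample of the JSON printed by `az vm create` (shown above)
az_output = '''{
  "fqdns": "",
  "location": "eastus",
  "powerState": "VM running",
  "privateIpAddress": "10.0.0.4",
  "publicIpAddress": "23.96.23.54",
  "resourceGroup": "testRG"
}'''

vm_info = json.loads(az_output)
print(vm_info["publicIpAddress"])  # prints 23.96.23.54
```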

Log in to the Azure VM

You need the private SSH key to log in to the Azure VM. When we created the VM, Azure automatically created an SSH key pair for us and stored it on the Azure storage account connected to the Cloud Shell.

In the Cloud Shell, go to the .ssh directory and list the contents.

$ cd ~/.ssh
$ ls
id_rsa  id_rsa.pub

You see the private and public keys. You may copy the private key to your local PC if you want to run SSH locally — just list the contents of id_rsa and copy-and-paste it to a text file on your local PC. Or, you may connect to the Azure VM using the Cloud Shell. In this example, I am using the Cloud Shell.

Connect to the Azure VM using the SSH command. Use the SSH private key and the username you specified when you created the VM.

$ ssh -i ~/.ssh/id_rsa brian@23.96.23.54

Now we have a terminal window connected to the remote virtual machine and we are ready to configure it.

Azure Network Security Groups

When you created your virtual machine, Azure also automatically created the resources that support that VM, such as a storage disk, a network, and a network security group. The network security group is a set of software firewall rules applied to each VM associated with it. For Linux VMs, the default security group’s ingress rules block all incoming traffic except SSH traffic, and its outbound rules allow all outbound connections. This is reasonably good security, especially if we use SSH keys to authenticate user access to the VM.

If you need to run other applications on the virtual machine, such as a web server or remote desktop, you may need to create additional rules that allow inbound connections to them. So if you have trouble configuring an application on your virtual machine or nested virtual machines, check the network security group rules.

First, get the name of the network security group that Azure created by default:

$ az network nsg list \
  --resource-group testRG \
  --output table

See that there is only one network security group in the resource group, and it is named Test1NSG.

Location    Name      ProvisioningState    ResourceGroup    ResourceGuid
----------  --------  -------------------  ---------------  ------------------------------------
eastus      Test1NSG  Succeeded            testRG           1571dbf1-b3ac-48fc-a386-5ed621700da3

Then list the rules for the network security group, Test1NSG.

$ az network nsg rule list \
  --nsg-name Test1NSG \
  --resource-group testRG \
  --output table

The output shows the firewall rules, as seen in the listing below:

Name               ResourceGroup      Priority  SourcePortRanges    SourceAddressPrefixes    SourceASG    Access    Protocol    Direction      DestinationPortRanges  DestinationAddressPrefixes    DestinationASG
-----------------  ---------------  ----------  ------------------  -----------------------  -----------  --------  ----------  -----------  -----------------------  ----------------------------  ----------------
default-allow-ssh  testRG                 1000  *                   *                        None         Allow     Tcp         Inbound                           22  *                             None

We see that traffic is allowed between resources in the same virtual network in Azure and that inbound connections on port 22 are allowed. All other inbound ports are blocked. However, applications running in the Azure VM, or on a nested VM in the Azure VM, may initiate outbound connections.

Configure the Azure VM

Verify that the hardware support for virtualization is available in this virtual machine:

$ grep -cw vmx /proc/cpuinfo
4

We expect to see a value of “4” because we assigned four vCPUs to this VM.
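
If you prefer to script this check, a rough Python equivalent of the grep command above counts the processor entries in /proc/cpuinfo whose flags line includes the vmx word (the helper name below is mine, not part of the original post):

```python
# Count logical CPUs whose "flags" line advertises the vmx feature.
# Equivalent in spirit to: grep -cw vmx /proc/cpuinfo
def count_vmx_cpus(cpuinfo_text):
    return sum(
        1
        for line in cpuinfo_text.splitlines()
        if line.startswith("flags") and "vmx" in line.split()
    )

# Abbreviated sample of /proc/cpuinfo content for two CPUs
sample = (
    "processor : 0\nflags : fpu vmx sse\n"
    "processor : 1\nflags : fpu vmx sse\n"
)
print(count_vmx_cpus(sample))  # prints 2
```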

Install software

Check for and install any updates:

$ sudo yum update

Install tmux on the Azure VM. If you are using Azure Cloud Shell to manage the virtual machine, it will disconnect after 20 minutes of inactivity and, sometimes, it hangs up for no reason. The tmux utility prevents interruption of processes started from your terminal session if you are disconnected. Install and start tmux:

$ sudo yum install tmux
$ tmux

Install the kvm and libvirt packages needed to create the nested virtual machines.

$ sudo yum install qemu-kvm qemu-img virt-manager \
  libvirt libvirt-python libvirt-client virt-install \
  virt-viewer

Add your userid to the kvm and libvirt groups. In this case, the userid is brian.

$ sudo usermod -aG kvm,libvirt brian

Restart the libvirtd service:

$ sudo service libvirtd restart

Create nested virtual machine

Now, create a virtual machine that will run on the Azure virtual machine. In this example, create an Ubuntu 18.04 nested virtual machine.

Create the disk image that the nested virtual machine will use.

$ mkdir Images
$ qemu-img create -f qcow2 \
  /home/brian/Images/ubuntu1804.qcow2 4G

Load the VM install image from an Ubuntu mirror server. Get the list of mirrors from https://launchpad.net/ubuntu/+cdmirrors. I chose the Princeton mirror because it offers high bandwidth and is close to the US East datacenter.

$ virt-install \
  --name ubuntu1804 \
  --ram 1024 \
  --disk path=/home/brian/Images/ubuntu1804.qcow2,size=4 \
  --vcpus 1 \
  --os-type linux \
  --network bridge=virbr0 \
  --graphics none \
  --location 'http://mirror.math.princeton.edu/pub/ubuntu/dists/bionic/main/installer-amd64/' \
  --extra-args='console=ttyS0'

After the nested VM starts, you will see the text-based installer for Ubuntu 18.04. Follow the prompts, enter the requested information and choose the options you want.

In this example, I configured the nested VM with the server name nested, userid brian, and a password. When asked to select software, choose “OpenSSH Server” and “Basic Ubuntu Server” options.

NOTE: The nested VM Linux installation may take up to 10 minutes. If at some point during the install you stepped away from your computer for more than 20 minutes, the Cloud Shell will terminate due to lack of user activity. If this happens, restart the cloud shell, use SSH to log in to the VM and then run the tmux a command to reattach to the running tmux session. If you forgot the Azure VM’s public IP address, run the command az network public-ip list --resource-group testRG and look for the “ipAddress” field in the output.

Fix the serial interface

When the installation process ends, the nested virtual machine will reset itself. The Cloud Shell terminal will show a blank screen because it cannot access the nested VM’s serial interface.

To fix this, return to the Azure VM’s prompt by pressing the CTRL-] key combination. Then, find the nested VM’s IP address with the following command.

$ arp -an

This outputs the Azure VM’s ARP table. The table should be very small, and we are looking only for IP addresses associated with the bridge named virbr0. For example:

$ arp -an
? (10.0.0.1) at 12:34:56:78:9a:bc [ether] on eth0
? (192.168.122.37) at 52:54:00:83:0f:81 [ether] on virbr0

We see the nested VM’s IP address on the bridge virbr0.
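
Scanning the ARP table by eye works for a small lab, but the lookup is also easy to script. The sketch below (my own helper, not part of the original post) parses arp -an output in the format shown above and returns the IP addresses seen on a given bridge:

```python
import re

# Extract IP addresses that `arp -an` reports on a given interface.
# Each line looks like:
# ? (192.168.122.37) at 52:54:00:83:0f:81 [ether] on virbr0
def ips_on_interface(arp_output, interface):
    pattern = re.compile(r'\((\d+\.\d+\.\d+\.\d+)\) at \S+ \[ether\] on (\S+)')
    return [ip for ip, dev in pattern.findall(arp_output) if dev == interface]

sample = (
    "? (10.0.0.1) at 12:34:56:78:9a:bc [ether] on eth0\n"
    "? (192.168.122.37) at 52:54:00:83:0f:81 [ether] on virbr0\n"
)
print(ips_on_interface(sample, "virbr0"))  # prints ['192.168.122.37']
```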

Log in to the nested VM using the SSH command (in this example, I configured the userid brian on the nested VM):

$ ssh brian@192.168.122.37

Now, in the nested VM, edit the file /etc/default/grub.

nested:$ sudo nano /etc/default/grub

Add the text console=ttyS0 to the GRUB_CMDLINE_LINUX_DEFAULT parameter. After updating it, the file should look like this:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash console=ttyS0"
GRUB_CMDLINE_LINUX=""

Then update GRUB:

nested:$ sudo update-grub

Now, exit the nested VM:

nested:$ exit

Restart the nested virtual machine:

$ virsh destroy ubuntu1804
$ virsh start ubuntu1804

Connect to nested VM

Now, when you connect to the virtual machine’s console, the serial interface will function correctly.

$ virsh console ubuntu1804

You are connected to a virtual machine running inside another virtual machine on Azure. You have successfully implemented nested virtualization on Microsoft Azure.

Shut down

When your testing is complete, shut down your Azure VM to avoid additional costs. First, shut down the nested VM. If you are already connected to the nested VM’s console, disconnect from it by pressing CTRL-]. Then shut down the VM with the command:

$ virsh destroy ubuntu1804

Next, exit the Azure VM:

$ exit

You are now back at the Azure CLI prompt on the Cloud Shell.

Shut down the Azure VM with the following command:

$ az vm deallocate \
  --resource-group testRG \
  --name Test1

Delete resource (optional)

If you are completely finished, you may delete all the resources you created simply by deleting the resource group. This removes everything Azure created to support the VM, including storage, the public IP address, and more. These resources cost very little, so you do not need to worry about them using up your free-trial account credits, but if you are completely done, you may delete them.

$ az group delete \
  --name testRG

Conclusion

In this post, I showed you how to set up your first virtual machine in Azure and discussed a little bit about the resources created by Azure to support that virtual machine. I encouraged you to view the free training available on the Microsoft Azure web site, which will give you a good overview of how to work in the Azure cloud environment.

I also showed you how you can configure your virtual machine after you get it started. I showed how you can run more virtual machines nested in this virtual machine because Azure supports nested virtualization.

One nested virtual machine by itself is not very interesting, except for performance analysis. Nested virtualization allows you to run multiple nested VMs on the same cloud instance and network them together using any Linux networking technology you wish to use, such as Linux bridging or Open vSwitch. I will discuss building complex network emulation scenarios in the cloud with nested virtualization in a future post.

Python: the seven simple things network engineers need to know

Are you like me? Are you a network engineer, or other professional, transitioning your skill set to include programming and automation? Does your programming experience come from a few programming courses you attended in college a long time ago? Then please read on, because I created this Python guide for people like you and me.

In this guide, I explain the absolute minimum you need to learn about Python to create useful programs. Follow this guide to get a very short, but functional, overview of Python programming in less than one hour.

When you begin using Python, there are a lot of topics you do not need to know so I omit them from this guide. However, I don’t want you to have to unlearn misconceptions later, when you become more experienced, so I include some Python concepts that other beginner guides might skip, such as the Python object model. This guide is “simple” but it is also “correct”.

Getting Started

In this guide, I will explore the seven fundamental topics you need to know to create useful programs almost immediately. These topics are:

  1. The Python object model simplified
  2. Defining objects
  3. Core types
  4. Statements
  5. Simple programs
  6. Modules
  7. User input

Of course, there is much more to learn. If you like to learn by doing, then this guide will get you started quickly and you can build your skills by writing Python programs that perform useful tasks.

There is no substitute for learning by doing. I recommend you also start a terminal window and run the Python interactive shell so you can type in commands as you follow this guide.

Install python

This guide is targeted at Windows users but is still applicable to any operating system. You can find instructions to install Python on any operating system in the Python documentation.

To install Python in Windows, download the 64-bit Windows installer for Python 3 from the Python install web page. Check the web page for the latest version.

Run the installer. Select “Add Python to Path” in the installer wizard.

Python Interactive Prompt

There are many ways to start and run Python programs in Windows. See the Windows Python FAQ for more information. While you are learning about Python’s basic building blocks, you will use the Python Interactive Prompt to run Python statements and explore the results. Later, you will run Python programs using the Python interpreter. In both cases, you will launch Python from the Windows Command Line, cmd.exe.

In Windows, start the Windows Command Line, cmd.exe. To start the interactive prompt, type python at the command prompt.

> python

You will see the interactive prompt, >>>.

>>>

To quit interactive mode, type exit() or press the CTRL-Z key combination followed by Enter.

>>> exit()

You will find that the Python interactive prompt is a great tool for experimenting with Python concepts. It is useful for learning the basics but it is also useful for trying out complicated ideas when you get more experienced. You will use the Python interactive prompt often in your programming career.

The Python object model simplified

Everything in Python is an object.

Python is an object-oriented programming language, but you do not need to use its object-oriented features to write useful programs. You may start using Python as a procedural programming language, which is familiar to most people who have a little programming knowledge. While I focus on procedural programming methodologies, I will still use some terminology related to objects so that you have a good base from which you may expand your Python skills.

In Python, an object is just a thing stored in your computer’s memory. Objects are created by Python statements. After objects are created, Python keeps track of them until they are deleted. An object can be something simple like an integer, a sequence of values such as a string or list, or even executable code. There are many types of Python objects.

Python creates some objects by default when it starts up, such as its built-in functions. Python keeps track of any objects created by the programmer.

Python objects

When you start Python, it creates a number of objects in memory that you may list using the Python dir() function. For example:

> python
>>> dir()

This will return a list of the Python objects currently available in memory, which are:

['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__']

Note that this is returned as a Python list, as indicated by the square brackets (more about lists later).

Create a new object. Define an integer object by writing a Python statement that creates an integer object, assigns the value of 10 to it, and points to it with the variable name a:

>>> a = 10

Call the integer object named by a. Python will return the result in the interactive prompt:

>>> a
10

List all objects available in memory, again. Look for the integer object a:

>>> dir()
['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'a']

See that the object a is added to the end of the list of Python objects. It will remain until you quit the Python interactive session.

If, for some reason, you want to remove an object you created from memory, use the Python del statement. For example:

>>> del a

Now, when you run the dir() function again, the object a will not be in the list of objects it returns.

Getting help

You may use the help() function to see the built-in Python documentation about each object type. Call the name of the object (or the type, if you know it) and the Python help function will print the documentation. For example:

>>> help(a)

You asked for help about object a. Python knows object a is an integer so it showed you the help information for a Python int, or integer, object type. You would get the same output if you had called the help function using the object type int.

>>> help(int)

As you work with Python in the interactive prompt, you can use the dir() and help() functions to better understand Python.

Defining objects

In Python, statements define an object simply by assigning it to a variable or using it in an expression.

One of the fundamental concepts in Python is that you do not need to declare the type of an object before you create it. Python infers the object type from the syntax you use to define it.

In the example below, a defines an integer object, b defines a floating-point object, c defines a string object, and d defines a list object; in this example, each element of the list is a string object.

>>> a = 10                  # An integer 
>>> b = 10.0                # A floating point 
>>> c = 'text'              # A string 
>>> d = ['t','e','x','t']   # A list (of strings)

See how the syntax defines the object type: different objects are created depending on whether a decimal point, quotes, or brackets are used, and on the type of brackets. I will explain each of the Python object types a little later in this guide.

Comments

Note also that the syntax for comments in Python is the hash character, #. Other ways to comment and document Python programs are available but, for the sake of simplicity, I omitted them from this guide.

Variables point to objects

In each of the four examples above, you created an object and then pointed a variable to that object. This is fundamentally different from more traditional programming languages. The variable does not contain the value, the object does. The variable is just a name pointing to the object, so you can use the object in your program.

A variable may be re-assigned to another object, even if that object is a different type. You are not changing the value or the type of the variable, because the variable has no value or type. Only the object has a value and an object type. The variable is just a name you use to point to any object. So, the following code will work in Python:

>>> a = 10
>>> a
10
>>> a = 'text'
>>> a
'text'

See that you can assign an integer object to variable a and, later, assign a string object to variable a. The original integer object that had a value of 10 is erased from memory when you reassign the variable a to a string object that has a value of ‘text’.

When you begin working with Python, I suggest you write your code to avoid mixing up object types with the same variable names, but you may see this behavior if you are working with code someone else has written.
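
One way to keep track of what a variable currently points to is Python’s built-in type() function (not otherwise covered in this guide), which reports the type of the object the variable names:

```python
a = 10
print(type(a))    # prints <class 'int'>
a = 'text'
print(type(a))    # prints <class 'str'>
```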

Object methods

Each instance of a Python object has a value, but it also inherits functionality from its core object type. Python’s creators built methods into each of the Python core object types, and this built-in functionality is accessed by you, the programmer, using object methods. Object methods may evaluate or manipulate the value stored in the object, and allow the object to interact with other objects or create new objects.

For example, number objects have mathematical methods built into them that support arithmetic and other numerical operations; string objects have methods to split, concatenate, or index items in the string.

The syntax for calling methods is object.method(arguments): add the name of the method after the object name, separated by a period, and end with parentheses containing any arguments.

For example, one (not recommended) way to add two integers together is to use the integer object’s __add__ method:

>>> a = 8
>>> a.__add__(2)
10

Above, you created an integer object with a value of 8 and pointed the variable a to it. Then you called the integer object pointed to by variable a and used its __add__ method to return a new object that has a value of 10. Note that you do not normally do addition this way in Python but the Python integer object’s __add__ method is the underlying code used by Python’s addition operator, +, and the sum() function when using them with integer objects.
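
To see that the operator, the method, and the function all perform the same addition, you can try:

```python
a = 8
print(a + 2)           # prints 10
print(a.__add__(2))    # prints 10
print(sum([a, 2]))     # prints 10
```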

Here is another example: create an integer object with a value of 100 and assign it to a variable named c.

>>> c = 100
>>> c
100

Then look at all the methods and objects associated with the integer object by using the dir() function:

>>> dir(c)

You get a long list of object methods. These were all defined by the creators of Python and are “built in” to the integer object. Other Python functions may use some of these methods to perform their tasks, but you don’t need to know all the details of how Python works “under the hood”. From this list, you see that one of the methods associated with the integer object c is bit_length. Use help() to get more information about what this method does:

>>> help(c.bit_length)

See that it returns the minimum number of bits required to represent the number in binary. For example, the number 100 is binary 1100100, which is seven bits. Verify this using the bit_length method that is built into the integer object c:

>>> c.bit_length()
7

In summary: every Python object also comes with built-in methods that are available when the object is created. You can see the methods and learn more about them using the dir() and help() functions.

Core object types

As I mentioned previously, everything in Python is an object. You need to learn about a few basic object types to get started with Python. There are more object types than those listed below but we’ll start with this list of the object types that network engineers will use most often.

  • Integer objects
  • Floating point objects
  • String objects
  • File objects
  • List objects
  • Program Unit objects

Number object types

Number objects are usually integers or floating-point numbers. Most programming languages, including Python, represent different types of numbers differently in memory. Python also supports complex numbers and special types that allow users to define fractions with numerators and denominators, and fixed-precision decimal numbers. The following code creates two integers and adds them together:

>>> a = 10
>>> b = 20
>>> a + b
30
>>> c = a + b
>>> c
30
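
The fraction and fixed-precision decimal types mentioned above live in Python’s built-in fractions and decimal modules (modules are covered later in this guide). A quick sketch:

```python
from decimal import Decimal
from fractions import Fraction

f = 3.14                  # floating-point object
z = 2 + 3j                # complex number object
half = Fraction(1, 2)     # fraction with a numerator and denominator
price = Decimal('19.99')  # fixed-precision decimal number

print(half + Fraction(1, 3))    # 5/6, with no floating-point rounding
print(price + Decimal('0.01'))  # 20.00, exact to two decimal places
```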

String object types

String objects may be text strings or byte strings. The main difference is that text strings are automatically encoded and decoded into readable text by Python, while byte strings are left in their raw, machine-readable form. Byte strings are usually used to store media such as pictures or sounds.

Readable text strings are created with quotes as follows:

>>> z = 'text'
>>> z
'text'
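
A short sketch showing the difference between the two string types, plus a couple of common string methods:

```python
t = 'text'      # a text string (type str)
b = b'\x89PNG'  # a byte string (type bytes): raw, machine-readable data

print(t.upper())       # TEXT: a string method that returns a new string
print('net' + 'work')  # network: concatenation creates a new string object
print(t[0])            # t: strings may be indexed like sequences
print(type(b))         # <class 'bytes'>
```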

File object types

Files are objects created by Python’s built-in open() function. Type help(open) at the interactive prompt for more information. Whether opening an existing file, or creating a new one, the open() function returns a file object which is assigned to a variable name so you can reference it later in your program. For example:

>>> myfile = open('myfile.txt', 'w')
>>> myfile
<_io.TextIOWrapper name='myfile.txt' mode='w' encoding='cp1252'>

You may close a file using the file object’s close method. Remember, you can see all the methods available for the file object you created by typing dir(myfile).

>>> myfile.close()
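
A minimal sketch that writes a file and then reads it back; the filename example.txt is just a placeholder:

```python
# Write to a new file, then read it back
myfile = open('example.txt', 'w')  # mode 'w' creates or overwrites the file
myfile.write('hello file\n')
myfile.close()

myfile = open('example.txt', 'r')  # mode 'r' opens an existing file for reading
contents = myfile.read()
myfile.close()
print(contents)  # hello file
```

Later on, you will probably prefer Python’s with statement, which closes the file automatically, but explicit close() calls are fine while you are learning.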

List object types

When you were in school, you may have taken a course about data structures. Or, if you have experience working with computer languages like C or C++, you had to create your own data structures to manage data in your programs. You probably implemented a data structure called a list, which contained a series of elements in computer memory linked by pointers. You probably wrote code to create functions that allowed you to insert items in the list, remove items, find items by index, and more.

Well, forget all that because Python has done it for you. Python has built-in data structure objects like lists, dictionaries, tuples, and sets. The list is the most commonly used data structure so I will cover it in this guide. You can read about the other data structures in the Python help() function or in the Python documentation.

You create a list object in Python using square brackets around a list of objects separated by commas. For example:

>>> k = [1,3,5,7,9]

Above, you created a list of five integer objects.

Python lists are very flexible and may contain a mixture of object types. For example:

>>> k = [1, "fun", 3.14]

Above, the list object contains three objects: an integer object, a string object, and a floating-point object. Lists can also contain other list objects, which is known as nesting lists. For example:

>>> k = [[1,2,3],['a','b','c'],[7.15,8.26,9.33]]

Above, you created a list of three objects, each of which is a list of three other objects.

Individual items in a list can be evaluated using index numbers. For example:

>>> k
[[1,2,3],['a','b','c'],[7.15,8.26,9.33]]
>>> k[0]
[1, 2, 3]
>>> k[1]
['a', 'b', 'c']
>>> k[1][0]
'a'

Lists can be concatenated, split, and manipulated in other ways using the list object’s built-in methods or Python’s functions and operators. Lists are a useful “general purpose” data structure and, in most programs, you will use lists to gather, organize, and manipulate sequential data.
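
A sketch of a few of the most common list methods and operations:

```python
k = [1, 3, 5]
k.append(7)        # add one item to the end of the list
k.extend([9, 11])  # concatenate another list onto the end
last = k.pop()     # remove and return the last item
k[0] = 2           # lists are mutable: replace an item in place

print(k)       # [2, 3, 5, 7, 9]
print(last)    # 11
print(k[1:3])  # [3, 5]: a slice returns a new list
```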

Program Unit Types

Like any programming language, Python has programming statements and syntax used to build programs. In addition to that, Python defines some object types used as building blocks to create Python programs. These program unit object types are:

  • Operations
  • Functions
  • Modules
  • Classes

Operations

Operations are symbols used to modify other objects according to the methods supported by each object. Python contains operators to assign values, do arithmetic, make comparisons, and do logic. There are also operators that perform bitwise operations (for binary values), identity operations, and membership operations.

Below is a list of common operation types. Many more exist; check the Python documentation for more information.

  • assignment operators include = and “+=”
  • arithmetic operators include +, -, *, and /
  • comparison operators include >, >=, ==, and !=
  • logic operators include “and”, “or”, and “not”
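
A quick sketch exercising each of the operator categories listed above:

```python
a = 7
b = 3
print(a + b, a - b, a * b)  # arithmetic: 10 4 21
print(a / b)                # division always returns a float
a += 1                      # augmented assignment: same as a = a + 1
print(a > b, a == b, a != b)       # comparison: True False True
print(a > 0 and b > 0, not a > b)  # logic: True False
```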

Functions

Functions are containers for blocks of code, referenced by a name, commonly used in procedural programming. They are a universal programming concept used by most programming languages and may also be called subroutines or procedures. Use functions in your programs to reduce redundancy and to organize your program code so it is easier for others to maintain.

Some functions are already built into Python, like the sum(), dir() and help() functions. Other functions may be created by programmers like you and included in programs.

The Python def statement defines function objects. The def statement syntax is: def function_name(argument1, argument2, etc): followed by statements that make up the function.

Here, I get ahead of myself a little bit because, to define a function, you need to show Python statements and syntax. For now, just know that Python uses leading spaces to group code into statements. Define a simple function in the Python interactive prompt:

>>> def fun(input):
...     print(input + ' is fun!')
...

Note that the interactive prompt changes from >>> to ... when Python understands that you will enter multi-line statements. This behavior is activated by the syntax of the statement and the indentation you use after that (see Python statements and syntax, below). Press return on an empty line to finish defining the function.

You can see that the object fun has been added to the list of objects Python is tracking:

>>> dir()

Call the function and input a string as an argument.

>>> fun('skiing')
skiing is fun!  

You will see the object addressed by the variable name fun in the list of objects returned by the dir() function. If you pass the function object into the dir() function, you will see all the methods associated with function objects, in general.

>>> dir(fun)

You can do a lot with functions and, until you get to advanced topics like object-oriented programming, functions will be the primary way you organize code in Python.
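
Functions may also return a value to their caller using the return statement, which the example above did not show. A small sketch:

```python
# A function may return a value to its caller with the return statement
def to_bits(number):
    """Return the minimum number of bits needed to represent an integer."""
    return number.bit_length()

result = to_bits(100)
print(result)  # 7
```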

Modules

Modules are a way to organize code in Python and a way to extend Python’s features. I will cover modules when I discuss running our Python scripts from saved files. For now, know that a Python module is a file containing Python code that you can import into another script when you run it. Modules allow you to organize large projects into multiple files and also allow you to re-use modules created by other programmers. For example: Python’s built-in modules.

Classes

Classes are objects used in object-oriented programming. Use classes to create new objects or to customize existing objects. Since Python supports both functions and classes, you can use it as either a procedural programming language (with a slight object-oriented bias), or as an object-oriented programming language, or both at the same time. I focus on the basics of using Python as a procedural programming tool to keep things clear and simple so I ignore Python classes in this guide.

Mutability of objects

When programming in Python you will find that some objects are mutable and may be modified directly using object methods or functions, while other objects are immutable and cannot be modified directly. The sub-set of object types I am starting with will not force you to worry about whether they are mutable or immutable. As you get more experienced with Python and work on larger projects, you will work with more object types and become concerned with the way objects are handled when they are passed into functions as arguments. At that point, you need to know if they are mutable or immutable object types.

I think it’s obvious that object types like integers, floating points, and strings are objects whose value “is what it is” and should not change as a result of an operation performed on that object. Also, the list object type is obviously a “data structure” with methods like “append” and “pop” which, if you are familiar with data structures, you expect will add or remove items from the list object, resulting in it being directly modified.

Python statements

A Python program is composed of statements. Each statement contains expressions that create or modify objects. Python organizes the syntax of statements, especially control statements like if statements or for statements, by indenting lines using blanks or tabs.

Python statements are grouped into the following categories:

  • Assignment statements such as a = 100
  • Call statements that call objects and object methods. For example: fun('skiing') or c.bit_length()
  • Selecting statements such as if, else, and elif
  • Iteration statements such as for
  • Loop statements such as while, break, and continue
  • Function statements such as def

The list above is a good starting point for building Python programs.

Statement syntax

Python uses whitespace to group statements together and define the hierarchy of statements. Other languages might use brackets or semicolons to separate statements, but Python uses only blanks or tabs (Pick a side! Fight!) and newlines.

For example, a Python if statement would look like this in the interactive prompt:

>>> a = 10
>>> b = 20
>>> if a > b:
...     print('A is bigger')
... else:
...     print('A is NOT bigger')
...
A is NOT bigger 

Whitespace is used to define which expressions are associated with which statement. For example, if you nest statements, you see how the indenting helps you identify the groups of expressions in each statement:

>>> a = 10
>>> b = 20
>>> c = 3
>>> if a > b:
...     print(a)
...     for i in range(c):
...         a = a + 1
...     print(a)
... else:
...     print(b)
...     for i in range(c):
...         b = b + 2
...     print(b)
...
20
26

See how the if and else statements contain for statements and print statements, which add up different numbers depending on the values of a, b, and c. The whitespace makes it easy for you to read the code, but inconsistent indentation causes errors that are hard to find, so be careful to indent your code consistently.

Also note that, in the code above, you used Python’s built-in range() function, which returns a sequence of integers from 0 (the default start value) up to, but not including, the input integer. The sequence of integers is used by the for iteration statement to iterate the required number of times.
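
Note that, in Python 3, range() returns a lazy range object rather than a list; wrap it in the list() function when you want to see all the values at once. A quick sketch:

```python
r = range(5)                  # a lazy sequence of integers 0 through 4
print(list(r))                # [0, 1, 2, 3, 4]
print(list(range(2, 10, 2)))  # [2, 4, 6, 8]: start, stop, and step values

total = 0
for i in range(1, 5):  # iterates over 1, 2, 3, 4
    total = total + i
print(total)  # 10
```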

Assignment statement syntax

Assignment statements create objects and name variables that point to the objects. They consist of the variable name, the = operator, and the value of the object to be created, written in syntax that identifies the type of object. For example:

a = 100
b = 3.14
c = 'stretch'
d = [3, 4, 'pine']

Call statement syntax

Call functions or object methods using call statements. The syntax consists of the function or object method name, followed by parenthesis that enclose the arguments to be passed to the function. For example:

fun('skiing')
c.bit_length()

Selecting statements syntax

Selecting statements allow the programmer to define operations that occur depending on the value of specific objects. The syntax involves colons, spaces, and newlines. Start with the if statement and the expression to be tested, followed by a colon. On the next line, indent the text (I use 4 spaces) and add in the statement to run if the condition was true. Back out one indent (or 4 spaces) if you will add in elif statements or an else statement. The else and elif statements are followed by a colon, and the code to run in each of these statements is indented. For example:

if a == b:
    print('A is equal to B')
elif a > b: 
    print('A is greater than B')
elif a < b:
    print('A is less than B')
else:
    print('all other cases')

Iteration statement syntax

Iteration statements such as for require an iterable object, such as a list, through which to iterate. The for statement ends with a colon, and the code that will execute on each iteration is indented below it. For example:

fruit = ['berries','apples','bananas', 'oranges']
for i in fruit:
    print(i)

The above statements would print out the following:

berries
apples
bananas
oranges

You can use the for statement to create loops by incorporating the range() function, as follows:

a = 0   
for i in range(100):
    a = a + 1
    print(a)

The above code prints a series of numbers from 1 to 100.

Technically, the for statement is not a loop; it is an iterator. In the last example above, it iterates through a sequence of 100 integers with values from 0 to 99, which was created by the range(100) function. Each iteration updates the value pointed to by the i variable until the statement reaches the end of the sequence.

Loop statement syntax

Loop statements control how many times a section of code will run in a loop. The while statement ends with a colon and the next lines are indented. For example:

kk = 1
while kk <= 100:
    print(kk)
    kk += 1

The above code prints a series of numbers from 1 to 100.

Additional control statements work with the while statement: break exits a loop when a condition is met, and continue skips ahead to the next pass through the loop.
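
A small sketch showing break and continue together; the condition values are arbitrary:

```python
# break leaves the loop immediately; continue skips to the next pass
n = 0
seen = []
while True:
    n += 1
    if n % 2 == 0:
        continue   # skip even numbers and test the condition again
    if n > 7:
        break      # stop the loop entirely
    seen.append(n)
print(seen)  # [1, 3, 5, 7]
```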

Function statement syntax

The def statement creates a function object in Python. The function object may be called using a call statement. The syntax of the def statement is: def followed by the name of the function, followed by the arguments expected by the function in parenthesis, followed by a colon. The code in the function is indented starting on the next line. You already saw examples of creating and calling a function above, but here is another example:

def test_func(number, string):
    print(number*string)

The above function should output a string that is the input string repeated the input number of times. You can test the function by calling it with input arguments, as shown below:

>>> test_func(3,'go')
gogogo

Simple Python Programs

Now you can stop the interactive prompt and start writing programs. A Python program is just a text file that contains Python statements. The program file name ends with the .py extension.

For example, use your favorite text editor to create a file called program.py on your Windows PC. The contents of the file should be:

a = 'Hello World'
print(a)

The simplest way to run a Python program is to run it using Python. For example, open the Windows Command Line Prompt, cmd.exe, and type the following:

> python program.py
Hello World

The above command runs the file program.py in the Python interpreter.

In Windows, you may also just call the program file at the command prompt. In Windows, the filename extension “.py” tells the operating system to use Python to run the program.

For example:

> program.py
Hello World

If you plan to also run your Python program on a Linux computer, start the program file with the following text:

#!/usr/bin/python3  

This shebang line is used at the start of most interpreted program files. Linux uses it to determine which programming language interpreter it needs to start to run the program. Windows ignores the line so you should just make a habit of including it, regardless which operating system you use.

Python Modules

You can create simple, or very complex, Python programs all in one file. But, as you get more experience using Python in network engineering, you will start breaking your programs up into separate files that can be maintained and tested separately.

To bring code from another file into your Python program at run time, use the import statement. Everything you import to your program is called a module, even though, on its own, it just looks like any other Python program. You will usually have one main program file with the basic logic of your program, and you may create other files, now called modules, that contain definitions for functions and other objects that your main program will call.

Python also comes with many built-in modules you can import into your program to access more functionality. Look at the Python socket and urllib.request modules, for example. Also, many third-party developers create modules that you can install in Python and then import into your own programs. Some of these are especially useful to network engineers. For example, look at the requests, napalm, and paramiko modules.

Let’s experiment with creating a module. This module will simply define five objects using an example you used previously.

Open a text editor and create a Python program called mod01.py. Add the following text:

#!/usr/bin/python3
a = 10
b = 10.0
c = 'text'
d = ['t','e','x','t']
def fun(input):
    print(input + ' is fun!')

Save the file mod01.py.

Now, open the Python interactive prompt:

> python

Check the objects Python tracks in memory:

>>> dir()
['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__']

Now import the module you created:

>>> import mod01

Now check the objects tracked by Python again:

>>> dir()
['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', 'mod01']

See that a new object has been created, called mod01. This object contains, as attributes, the objects you created in the mod01.py program. View them by running the dir() function:

>>> dir(mod01)
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'a', 'b', 'c', 'd', 'fun']

See that the module mod01 contains the usual Python objects, plus the five objects you created. These objects were created because, when Python read the import statement, it ran the file mod01.py and the statements in that file created the objects.

To access these specific objects in the main program, reference each object by name using dot notation, the same syntax used for calling object methods. For example:

>>> mod01.a
10
>>> mod01.d
['t', 'e', 'x', 't']
>>> mod01.fun('wrestling')
wrestling is fun!
>>>

Of course, you need to know what each of the module’s methods is so you can use it properly. If you are using a Python module or a third-party module, consult the module’s documentation to learn how to use all its methods.

Importing large modules can use up a lot of memory and you may only use a few specific methods from a module. There are ways to be more efficient but, for now, just import modules and don’t worry about memory usage. I am keeping this guide simple so I will not discuss importing specific objects from modules, or the concepts and issues related to Python namespaces. Just remember those are things you will want to learn later in your learning journey.

Get user input

Typically, your Python program will require some input from a user. This input can be passed as arguments on the command line when the Python program starts, gathered at run time by asking the user for input, or even read in from a file.

To input arguments at the command line, you would need to explore some topics like the Python sys and argparse modules, how to parse arguments, how to test arguments before using them, and more. I’m choosing not to discuss that in this simple guide, but you can find some good information about parsing Python program command line arguments in the Python documentation.

You will have to learn to read input from a file and write output to a file in the near future, but I skip that topic in this guide. Information about using Python to read input from a file is in the Python documentation.

I suggest that, while you are still learning the basics, use Python’s input() function to request and receive user input. This lets you prompt the user for input and then reads the first line the user types in response. It reads the input as a string, so you may need to convert it to another object type if that is what you require. For example, try the following at the Python interactive prompt:

>>> age = input('How old are you? ')
How old are you? 51
>>> age
'51'
>>> x = int(age)
>>> newage = x + 10
>>> print('you will be ' + str(newage) + ' in ten years')
you will be 61 in ten years
>>>

Conclusion

I have covered the elements of Python programming that represent the minimum knowledge a network engineer should have to get started writing useful scripts in Python. There is still more to know, such as learning about Python’s built-in networking modules and modules created by third parties, learning about the application programming interfaces supported by networking equipment from various vendors, and learning about network automation best practices.

After reading this guide, I hope you feel you are ready to start learning about these other topics, while using the Python programming language to interact with those technologies. You will learn more about Python as you experiment with it to develop network automation programs.
