
OpenStack Networking

OpenStack is a collection of software tools for building and managing cloud computing platforms that provide storage, compute, and networking resources for private and public clouds. It can be accessed using the OpenStack dashboard or the OpenStack APIs. OpenStack Networking offers APIs for networking resources such as switches, routers, ports, and interfaces.

OpenStack Networking was initially developed as part of OpenStack Nova, under the name Nova Networking. As networking became an area of interest in its own right, it was taken up as a separate project called OpenStack Networking, or Neutron.

OpenStack Networking Tutorial

In this OpenStack Networking tutorial, you will learn the following:
  1. How is OpenStack Networking Managed?
  2. OpenStack Neutron Network
  3. Installing OpenStack Networking
  4. OpenStack Networking Architecture
  5. Security Groups of OpenStack
  6. Open vSwitch
  7. Network Back Ends in OpenStack
  8. Tenant and Provider Networks

How is OpenStack Networking Managed?

For smooth integration with OpenStack Networking, the networking infrastructure must accommodate plugins for OpenStack Neutron. It is up to the operator to decide which network constructs to deploy, such as routers, switches, interfaces, or ports. A construct can be physical or virtual, and it can be part of a traditional network environment or a Software-Defined Networking (SDN) environment. The only requirement is that the vendor's infrastructure supports the Neutron API. The creation and management of network constructs (routers, switches, and so on) can then be done through the OpenStack Dashboard.

An OpenStack deployment uses four networks. The Management network handles internal communication between OpenStack components. The Data network handles virtual machine communication inside the cloud deployment. The External network connects the virtual machines to the Internet, giving the cloud deployment Internet access. The API network exposes all APIs, including the OpenStack Networking API, to tenants.


OpenStack Neutron Network

OpenStack Neutron is an SDN networking project that offers Networking-as-a-Service (NaaS) in virtual environments. Neutron was developed to overcome the shortcomings of the previous API, Quantum, such as poor control over tenants in multi-tenant environments and address-management deficiencies. Neutron is designed to provide an easy plugin mechanism that lets operators access different networking technologies through its APIs.

In an OpenStack Neutron network, tenants can create multiple private networks and control their IP addresses. With this, organizations gain more control over security policies, monitoring, troubleshooting, Quality of Service, firewalls, and so on. Neutron includes IP address management, support for layer 2 networking, and an extension for a layer 3 router construct. The OpenStack Networking team actively worked on improvements and enhancements to Neutron through the Havana and Icehouse releases.


Installing OpenStack Networking 

Installing OpenStack Networking has a few prerequisites: creating a database, API endpoints, and service credentials.

Steps to Create Database:

  • Connect to the database server as the root user using the database access client:
# mysql
  • Create the neutron database using the following command:
MariaDB [(none)]> CREATE DATABASE neutron;
  • Grant proper access privileges to the neutron database, replacing NEUTRON_DBPASS with a suitable password:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
  • After completing, exit the database access client.
  • To access admin-only CLI commands, source the admin credentials:
$ . admin-openrc
  • Follow these steps to create service credentials,
  • First, create a new neutron user using the following command,
$ openstack user create --domain default --password-prompt neutron

 


     

  • Add the admin role to the new neutron user:
$ openstack role add --project service --user neutron admin
  • Add the neutron service entity using the following command:
$ openstack service create --name neutron --description "OpenStack Networking" network
  • Create the Networking service API endpoints for public, internal, and admin access using the following commands:
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696
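To confirm that the endpoints were registered, a quick check (not part of the official steps; it assumes the admin credentials are sourced) is:

$ openstack endpoint list --service network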

     

Configure Networking Options:

The Networking service can be deployed using one of two architecture options:

  • Option 1 is a simple deployment that supports only attaching instances to external (provider) networks. It does not support self-service (private) network constructs such as routers, and only administrators and other privileged users can manage the networks.
  • Option 2 adds layer 3 services on top of option 1, supporting self-service (private) networks. Any user, including unprivileged ones, can manage networks. A minimal configuration sketch follows this list.
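As a rough sketch of option 2 (option values taken from the upstream installation guides and may vary by release), self-service networking is typically enabled in /etc/neutron/neutron.conf by loading the ML2 core plugin together with the layer 3 service plugin:

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true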

Configure Metadata Agent:

The metadata agent provides configuration information, such as credentials, to attached instances. To configure it, follow the step below:

Open /etc/neutron/metadata_agent.ini in edit mode and, in the [DEFAULT] section, configure nova_metadata_host and metadata_proxy_shared_secret:

[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

     


Configure Compute Service

To use the Networking service, the Compute service has to be configured. Follow these steps to configure the Compute service:

  • Open the /etc/nova/nova.conf file in edit mode and, in the [neutron] section, configure the access parameters, enable service_metadata_proxy, and set metadata_proxy_shared_secret.
  • In place of the default password NEUTRON_PASS you can give your own password.
  • In place of METADATA_SECRET, use the metadata secret you chose when configuring the metadata agent. A sketch of the resulting section follows.
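A minimal sketch of the resulting [neutron] section in /etc/nova/nova.conf, following the upstream installation guides (the controller host name, port, and placeholder values are assumptions; adjust them to your deployment):

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET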


Finalize Installation 

There should be a symbolic link /etc/neutron/plugin.ini, used by the Networking service initialization scripts, pointing to the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If you can't find it, create it using the command:

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
  • Populate the database using the following command:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  • To reflect the changes, restart the Compute API service using the command:
# systemctl restart openstack-nova-api.service
  • Configure the Networking services to start when the system boots, then start them:
# systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
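Once the services are running, a quick sanity check (assuming the admin credentials are sourced) is to list the Networking agents and confirm each one is alive:

$ openstack network agent list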

OpenStack Networking Architecture

OpenStack Networking is a standalone service that deploys several processes across a number of nodes, and these processes interact with each other. The architecture includes a dedicated network node that performs DHCP and L3 routing. Two compute nodes run Open vSwitch, each with two physical network cards: one for managing tenant traffic and another for management connectivity.

The architecture thus comprises two compute nodes and one network node connected to a physical router.

Security Groups of OpenStack

A security group is a container object for a set of security rules. It acts as a virtual firewall for servers and other resources on the same network. The rules in a security group filter the type and direction of traffic sent to and received from a Neutron port, providing an extra layer of security.

A single security group can manage traffic to multiple compute instances. A security group can also be linked to ports created for LBaaS, floating IP addresses, and other resources. If no security group is named, a port is associated with the default security group. New security groups can be created with custom properties, or the default security group can be modified to change its behavior. A new security group can be created using the following command:

$ openstack security group create
[--description <description>]
[--project <project> [--project-domain <project-domain>]]
<name>
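As a worked example (the group name, rule, and server name are illustrative), the following creates a group permitting inbound SSH and attaches it to an instance:

$ openstack security group create --description "Allow inbound SSH" ssh-access
$ openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 ssh-access
$ openstack server add security group myserver ssh-access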

Open vSwitch

Open vSwitch is a virtual switch that helps virtualize the networking layer. It allows large numbers of virtual machines to run on one or more physical nodes. The virtual machines connect to virtual ports on virtual bridges. A virtual bridge allows the virtual machines to communicate with each other and also connects them to physical machines outside the node. Open vSwitch integrates with physical switches through layer 2 features such as LACP, STP, and 802.1Q VLAN tagging.

  • To view all the OVS bridges, use the command:
ovs-vsctl list-br
  • To view all ports on a particular bridge, use the command:
ovs-vsctl list-ports br-int
  • To view all ports along with their OpenFlow port numbers on a particular bridge, use the command:
ovs-ofctl show br-int
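For a consolidated view of all bridges together with their ports and interfaces, ovs-vsctl show is also useful:

ovs-vsctl show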

Modular Layer 2 (ML2)

The Modular Layer 2 (ML2) plugin framework was designed to let OpenStack Networking utilize the variety of layer 2 networking technologies found in real-world data centers. At present it works with technologies such as Open vSwitch, Linux bridge, and the Hyper-V L2 agent. ML2 replaced the earlier monolithic L2 plugins, and it also simplifies support for new L2 technologies, reducing the effort that adding a new monolithic plugin used to require.

  • The reasoning behind ML2: In earlier OpenStack Networking deployments, only the plugin selected at implementation time could be used. For example, if Open vSwitch was selected and running, another plugin such as Linux bridge could not be used at the same time; running plugins simultaneously was not possible. This mostly affected heterogeneous environments, which is why ML2 was designed.
  • ML2 network types: In the ML2 framework, multiple network segments can be operated in parallel. Moreover, these segments can interconnect with one another thanks to ML2's multi-segment network support. Ports need not be bound to segments manually; they are automatically bound to a segment with connectivity. Depending on the mechanism driver, ML2 supports local, flat, VLAN, GRE, and VXLAN network segments. To enable the various type drivers, list them in the [ml2] section of the ML2 configuration file:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
  • ML2 mechanism drivers: The old plugins have been redeployed as mechanism drivers on a common code base, which eliminates complexity around code testing and maintenance and enables code reuse. To enable the various mechanism drivers, list them in the same section:
[ml2]
mechanism_drivers = openvswitch,linuxbridge,l2population

 


Network Back Ends in OpenStack 

The OpenStack platform offers two different networking back ends: Nova Networking and OpenStack Networking. Nova Networking became obsolete after the arrival of OpenStack Networking, but it is still encountered in the field. Presently there is no automated migration path from Nova Networking to OpenStack Networking; any migration between the two technologies must be performed manually and involves considerable outages.

Choose Nova Networking,

  • If there is a requirement for flat or VLAN networking: this covers the associated scalability, provisioning, and management requirements. Specific configuration is necessary in the physical network to trunk the VLANs between nodes.
  • If there is no need for overlapping IP addresses among tenants: this is suitable only for small, private deployments.
  • If there is no requirement for interaction between the physical network and a Software-Defined Networking (SDN) solution.
  • If there is no requirement for firewall, self-service VPN, or load-balancing services.
Choose OpenStack Networking,
  • If there is a requirement for an overlay network solution: OpenStack Networking supports GRE and VXLAN tunneling to isolate virtual machine traffic, whereas Nova Networking does not support traffic isolation between virtual machines.
  • If there is a requirement for overlapping IP addresses among tenants: OpenStack Networking uses the network namespace capability of the Linux kernel, which allows tenants to deploy the same subnet range on the same compute node without overlap issues. This is especially helpful in multi-tenant situations. Such namespace capability does not exist in Nova Networking.
  • If there is a requirement for a certified third-party OpenStack Networking plug-in: ML2 with Open vSwitch (OVS) is the default plug-in in OpenStack Networking. Since OpenStack Networking has a pluggable architecture, third-party plug-ins can be deployed in place of ML2 based on other networking requirements.
  • If there is a requirement for Firewall-as-a-Service (FWaaS) or Load-Balancing-as-a-Service (LBaaS): The OpenStack Dashboard allows tenants to manage FWaaS and LBaaS services without administrator intervention; this is available only in OpenStack Networking, not in Nova Networking.

OpenStack Networking services: In the initial phases, it is important to have proper expertise in the area to help design the physical networking infrastructure and to put appropriate security controls and auditing mechanisms in place. The components involved in OpenStack Networking services are as follows:

  • L3 agent: The L3 agent ships with the openstack-neutron package. Each project gets network namespaces containing dedicated layer 3 routers, which direct network traffic and provide gateway services to its layer 2 networks; the L3 agent manages these routers. The node hosting the L3 agent must have a range of IP addresses available on an external interface, rather than a single manually configured IP address, and the range must be large enough to assign a unique IP address to every router.
  • DHCP agent: Each project subnet gets a network namespace that acts as a DHCP server, and the DHCP agent manages these namespaces. Each namespace runs a process called dnsmasq, which assigns IP addresses to the virtual machines running on the network. By default, DHCP is enabled on a subnet at creation time.
  • Open vSwitch agent: This agent uses the OVS virtual switch to connect instances to the bridges. The OVS Neutron plugin has its own agent on each node to manage the instances and the OVS bridges. A short namespace inspection sketch follows this list.
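Both the L3 and DHCP agents rely on Linux network namespaces. As a quick illustration (assuming Neutron's default qrouter-/qdhcp- namespace naming; <network-id> is a placeholder), the namespaces can be inspected on the network node:

# ip netns list
# ip netns exec qdhcp-<network-id> ip addr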

Tenant and Provider Networks

Tenant and provider networks both connect compute instances in OpenStack Networking; tenant networks are created by users, while provider networks are created by OpenStack administrators. This section briefly explains tenant and provider networks:

  • Tenant networks: Users create tenant networks for connectivity within their projects. They are meant to be used only within a project and cannot be shared with other projects; by default, they are completely isolated. The following tenant network types are supported by OpenStack Networking:
    • Flat: In a flat tenant network, all created instances reside on the same network, which can also be shared with the hosts if needed. There is no VLAN tagging or other network segregation.
    • VLAN: Using VLAN IDs, users can create multiple tenant or provider networks corresponding to VLANs present in the same physical network. This makes communication between instances across the environment easy and effective, and also lets instances communicate with firewalls, external networks, and load balancers on the same layer 2 VLAN.
    • VXLAN and GRE tunnels: These tenant networks use network overlays for private communication among instances. A router is required for traffic to travel outside the VXLAN or GRE tunnel, and also to connect instances to external networks using floating IPs.
  • Provider networks: OpenStack administrators create provider networks and map them to an existing physical network in the data center. The network types that can be used as provider networks are flat and VLAN. As part of network creation, a provider network can be shared with tenants. The following steps explain how to create a flat provider network and the configuration needed to connect it to the external network.
    • Flat provider networks: These networks are used to connect instances directly to the external network. They suit deployments with more than one physical network and individual physical interfaces, connecting every compute and network node to the external networks.
    • Configure the controller nodes: To configure the controller nodes, follow these steps:
      • Open /etc/neutron/plugin.ini in edit mode, add flat to the existing list of type_drivers, and set flat_networks to *:
type_drivers = vxlan,flat
flat_networks = *
      • Create an external network as a flat network and map it to the configured physical_network. If it is created as a shared network, other users can also create instances directly connected to it:
# neutron net-create public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True --shared
      • Create a subnet using the neutron subnet-create command or from the dashboard:
# neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway=192.168.100.1 public01 192.168.100.0/24
      • Restart neutron-server for the changes to take effect:
# systemctl restart neutron-server
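Note that the neutron CLI used above has since been deprecated in favor of the unified openstack client. As a hedged sketch, a rough equivalent of the two commands above (same names and addresses) would be:

$ openstack network create --external --share --provider-network-type flat --provider-physical-network physnet1 public01
$ openstack subnet create --network public01 --no-dhcp --gateway 192.168.100.1 --allocation-pool start=192.168.100.20,end=192.168.100.100 --subnet-range 192.168.100.0/24 public_subnet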

 

Configure the Compute and Network Nodes

  • Create an external network bridge in /etc/sysconfig/network-scripts/ifcfg-br-ex and associate a port (eth1) with it. After completing both files, restart the network service to apply the changes.
  • To create the external bridge (ifcfg-br-ex):
DEVICE=br-ex
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
NM_CONTROLLED=no
    • To configure the eth1 port (ifcfg-eth1; the OVS_BRIDGE line attaches the port to the bridge):
DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no
  • After configuring the bridges, map them to physical networks in the [ovs] section of /etc/neutron/plugins/ml2/openvswitch_agent.ini:
[ovs]
bridge_mappings = physnet1:br-ex
  • Restart neutron-openvswitch-agent on both the compute and network nodes:
# systemctl restart neutron-openvswitch-agent

Configure Network Node

  • In /etc/neutron/l3_agent.ini, set external_network_bridge to an empty value so that multiple external network bridges are allowed:
external_network_bridge = 
  • To reflect the changes, restart neutron-l3-agent:
# systemctl restart neutron-l3-agent

Layer 2 and Layer 3 Networking

When designing virtual networks, one should anticipate in advance where the bulk of network traffic will occur. Traffic within the same logical network is faster than traffic between different logical networks, because traffic passing between logical networks must traverse a router, which adds latency.

Use switching where possible

Switching usually happens at layer 2, lower in the network stack, so it can function more quickly than layer 3, where routing happens. Keep the number of hops between systems that communicate frequently as low as possible. An encapsulation tunnel such as GRE or VXLAN can be used to let instances on different nodes communicate with each other. The MTU size should be adjusted to accommodate the extra bytes of the tunnel header; otherwise performance suffers from fragmentation. An illustrative MTU configuration follows.
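As an illustration (overhead figures assume IPv4; global_physnet_mtu and path_mtu are options from newer Neutron releases): a VXLAN header adds roughly 50 bytes, so on a physical network with a 1500-byte MTU, instances on VXLAN networks should use an MTU of about 1450. Declaring the physical MTU lets Neutron subtract the tunnel overhead automatically when advertising the MTU to instances:

[DEFAULT]
global_physnet_mtu = 1500

[ml2]
path_mtu = 1500

The first option lives in /etc/neutron/neutron.conf and the second in the ML2 configuration file.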

Conclusion

OpenStack Networking enables users to build consistent and effective network topologies programmatically. Neutron, the networking component of OpenStack, has a pluggable open-source architecture that allows users to develop their own plugins and drivers, which can interact with physical and virtual network devices to bring added functionality to the cloud.
