
Installing and configuring OVS and API server for Neutron - OpenStack

Installing and configuring OVS for Neutron

To create a Software Defined Network layer in OpenStack, we first need to install the software on our Network node. This node will run Open vSwitch, the switch we control when defining networks in OpenStack. Open vSwitch, or OVS, is a production-quality, multilayer switch. The following diagram shows the required nodes in our environment: a Controller node, a Compute node, and a Network node. For this section, we are configuring the Network node.

[Diagram: Controller, Compute, and Network nodes]


Getting ready

Ensure that you are logged on to the Network node and that it has Internet access, so that we can install the packages required to run OVS and Neutron. If you created this node with Vagrant, you can execute the following:

vagrant ssh network

How to do it…

To configure our OpenStack Network node, carry out the following steps:

- When we started our Network node using Vagrant, we had to assign an IP address to the fourth interface (eth3). We no longer want an IP assigned to it, but we do require the interface to be up and listening for use with OVS and Neutron. Instead, we will assign this IP address to the bridge interface after we have created it, later in this section. Perform the following steps to remove this IP from the interface:

sudo ifconfig eth3 down
sudo ifconfig eth3 0.0.0.0 up 
sudo ip link set eth3 promisc on
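To verify that the interface is in the state we expect (up, in promiscuous mode, and without an IP address), you can inspect it with iproute2:

# Should report state UP with the PROMISC flag and no inet address
ip addr show eth3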

Tip

On a physical server running Ubuntu, we would configure this in our /etc/network/interfaces file as follows:

auto eth3
iface eth3 inet manual
up ip link set $IFACE up
down ip link set $IFACE down


- We then update the packages installed on the node.

sudo apt-get update 
sudo apt-get -y upgrade

Next, we install the kernel headers package as the installation will compile some new kernel modules.

sudo apt-get -y install linux-headers-$(uname -r)

Now we need to install some supporting applications and utilities:

sudo apt-get -y install vlan bridge-utils dnsmasq-base dnsmasq-utils

- We are now ready to install Open vSwitch:

sudo apt-get -y install openvswitch-switch openvswitch-datapath-dkms

- After this has installed and configured some kernel modules, we can simply start our OVS service:

sudo service openvswitch-switch start
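To confirm that the switch daemon is up and responding, you can query its database; at this point it prints little more than an empty configuration:

# Prints the current OVS configuration (no bridges yet)
sudo ovs-vsctl show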

Now we will proceed to install the Neutron components that run on this node: the Quantum DHCP Agent, Quantum L3 Agent, the Quantum OVS Plugin, and the Quantum OVS Plugin Agent.

sudo apt-get -y install quantum-dhcp-agent quantum-l3-agent quantum-plugin-openvswitch quantum-plugin-openvswitch-agent
- With the installation of the required packages complete, we can now configure our environment. To do this, we first configure our OVS switch service. We need to configure a bridge that we will call br-int. This is the integration bridge that glues our bridges together within our SDN environment.

sudo ovs-vsctl add-br br-int

Next, add an external bridge that is used on our external network. This will be used to route traffic to/from the outside of our environment onto our SDN network:

sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth3

We now assign the IP address that was previously assigned to our eth3 interface to this bridge:

sudo ifconfig br-ex 192.168.100.202 netmask 255.255.255.0
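A quick check that the bridge plumbing looks right:

# eth3 should be listed as a port on br-ex
sudo ovs-vsctl list-ports br-ex
# br-ex should now carry the 192.168.100.202 address
ip addr show br-ex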


Tip

This address is on the network that we will use for accessing instances within OpenStack. We assigned this range as 192.168.100.0/24, as described in the Vagrant file:

network_config.vm.network :hostonly, "192.168.100.202", :netmask => "255.255.255.0"

We need to ensure that IP forwarding is enabled on our Network node.

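A minimal sketch for doing this on Ubuntu, assuming the stock /etc/sysctl.conf where the entry ships commented out:

# Enable IPv4 forwarding for the running kernel
sudo sysctl -w net.ipv4.ip_forward=1
# Persist the setting across reboots
sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf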

- Next, we will edit the Neutron configuration files. As with other OpenStack services, the Neutron services have a configuration file and a paste ini file. The first file to edit is /etc/quantum/api-paste.ini, to configure Keystone authentication.

We add the auth and admin lines to the [filter:authtoken] section:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = quantum

- After this, we edit two sections of the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file.

The first is to configure the database credentials to point to our MySQL installation:

[DATABASE]
sql_connection = mysql://quantum:openstack@172.16.0.200/quantum

Further down the file there is a section called [OVS]. We need to edit this section to include the following values:

[OVS]
tenant_network_type = gre 
tunnel_id_ranges = 1:1000 
integration_bridge = br-int 
tunnel_bridge = br-tun 
local_ip = 172.16.0.202 
enable_tunneling = True

Save this file and then edit the /etc/quantum/metadata_agent.ini file as follows:

[DEFAULT]
auth_url = https://172.16.0.200:35357/v2.0 
auth_region = RegionOne
admin_tenant_name = service 
admin_user = quantum 
admin_password = quantum 
metadata_proxy_shared_secret = foo 
nova_metadata_ip = 172.16.0.200 
nova_metadata_port = 8775

- Next, we must ensure that our Neutron server configuration is pointing at the right RabbitMQ server in our environment. Edit /etc/quantum/quantum.conf, locate the following line, and edit it to suit our environment:

rabbit_host = 172.16.0.200

- We need to edit the familiar [keystone_authtoken] section located at the bottom of the file to match our Keystone environment:

[keystone_authtoken] 
auth_host = 172.16.0.200 
auth_port = 35357 
auth_protocol = http 
admin_tenant_name = service 
admin_user = quantum 
admin_password = quantum
signing_dir = /var/lib/quantum/keystone-signing

- The DHCP agent file, /etc/quantum/dhcp_agent.ini, needs a value change to tell Neutron that we are using namespaces to separate our networking. Locate and change this value (or insert the new line). This gives every network in our SDN environment a unique namespace to operate in, which allows us to have overlapping IP ranges within our OpenStack Networking environment:

use_namespaces = True
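Nothing is visible immediately, but once networks and routers have been created, each DHCP server and router runs in its own namespace (named qdhcp-<UUID> and qrouter-<UUID> respectively), which you can list with iproute2:

# Lists one namespace per Neutron network/router once they exist
ip netns list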

- With this done, we can proceed to edit the /etc/quantum/l3_agent.ini file to include the following additional values:

auth_url = https://172.16.0.200:35357/v2.0 
auth_region = RegionOne
admin_tenant_name = service 
admin_user = quantum 
admin_password = quantum 
metadata_ip = 172.16.0.200 
metadata_port = 8775 
use_namespaces = True

- With our environment and switch configured we can restart the relevant services to pick up the changes:

sudo service quantum-plugin-openvswitch-agent restart 
sudo service quantum-dhcp-agent restart
sudo service quantum-l3-agent restart
sudo service quantum-metadata-agent restart
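If you want to confirm that each agent came back up cleanly, the Upstart status commands should all report start/running:

sudo service quantum-plugin-openvswitch-agent status
sudo service quantum-dhcp-agent status
sudo service quantum-l3-agent status
sudo service quantum-metadata-agent status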


How it works…

We have built and configured a new node in our environment that runs the software networking components of our SDN environment. This includes the OVS switch service and the various Neutron components that interact with it, and with OpenStack, through the notion of plugins. While we have used Open vSwitch in our example, there are also many vendor plugins, including Nicira and Cisco UCS/Nexus, among others.

The first thing we did was configure an interface on this Network node that will serve an external network. In OpenStack Networking terms, this is called the Provider Network. Outside of a VirtualBox environment, this would be a publicly routable network allowing access to the instances created within our SDN environment. This interface is created without an IP address so that our OpenStack environment can control it by bridging new networks to it.

A number of packages were installed on this Network node. The list of packages that we specify for installation (excluding dependencies) is as follows:

Operating System:

linux-headers-$(uname -r)

Generic Networking Components:

vlan bridge-utils dnsmasq-base dnsmasq-utils

Open vSwitch:

openvswitch-switch openvswitch-datapath-dkms

Neutron:

quantum-dhcp-agent quantum-l3-agent quantum-plugin-openvswitch quantum-plugin-openvswitch-agent

Once we had installed our application and service dependencies and started the services, we configured our environment by creating a bridge that acts as the integration bridge, connecting our instances to the rest of the network, as well as a bridge to our last interface on the Provider Network, where traffic flows from the outside into our instances.

A number of files were configured to connect to our OpenStack cloud using the Identity (Keystone) services.

An important configuration of how Neutron works with our OVS environment is achieved by editing the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file. Here we describe our SDN environment:

[DATABASE]
sql_connection=mysql://quantum:openstack@172.16.0.200/quantum

Here we configure the Neutron services to use the database we have created in MySQL.
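If you want to sanity-check these credentials from the Network node, a quick test (assuming the mysql command-line client is installed) is:

# Should list Neutron's tables if the credentials and grants are correct
mysql -h 172.16.0.200 -u quantum -popenstack quantum -e 'SHOW TABLES;'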

[OVS]
tenant_network_type=gre

We're configuring our networking type to be GRE (Generic Routing Encapsulation) tunnels. This allows our SDN environment to encapsulate a wide range of protocols over the tunnels we create:

tunnel_id_ranges=1:1000

This defines the range of tunnel IDs that can exist in our environment, where each tunnel is assigned an ID from 1 to 1000:

network_vlan_ranges =

As we are using tunnel ranges, we explicitly unset the VLAN ranges within our environment:

integration_bridge=br-int

This is the name of the integration bridge:

tunnel_bridge=br-tun

This is the tunnel bridge name that will be present in our environment:

local_ip=172.16.0.202

This is the IP address of our Network node:

enable_tunneling=True

This informs Neutron that we will be using tunneling to provide our software defined networking.
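Note that br-tun itself is created automatically by the OVS plugin agent once tunneling is enabled; you can check for it alongside the bridges we created by hand:

# Expect to see br-int, br-ex, and (once the agent is running) br-tun
sudo ovs-vsctl list-br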

The service that proxies metadata requests from instances within Neutron to our nova-api metadata service is the Metadata Agent. This service is configured with the /etc/quantum/metadata_agent.ini file, which describes how the service connects to Keystone and provides a key for the service in the metadata_proxy_shared_secret = foo line. This must match the same random key that we will eventually configure in /etc/nova/nova.conf on our Controller node, as follows:

quantum_metadata_proxy_shared_secret=foo

The step that defines the network plumbing of our Provider Network (the external network) is achieved by creating another bridge on our node, and this time we assign it the physical interface that connects our Network node to the rest of our network or the Internet. In this case, we assign this external bridge, br-ex, to the interface eth3. This will allow us to create a floating IP Neutron network range that will be accessible from our host machine running VirtualBox. On a physical server in a datacenter, this interface would be connected to the network that routes to the rest of our physical servers. The assignment of this network is described in the Creating an external Neutron network recipe.

Installing and configuring the Neutron API server

The Neutron Service provides an API for our services to access and define our software-defined networking. In our environment, we install the Neutron service on our Controller node. The following diagram describes the environment we are creating and the nodes that are involved. In this section, we are configuring the services that operate on our Controller node.

[Diagram: Controller node running the Neutron API server]

Getting ready

Ensure that you are logged on to the Controller node. If you created this node with Vagrant, you can access this with the following command:

vagrant ssh controller

How to do it…

To configure our OpenStack Controller node, carry out the following steps:

- First update the packages installed on the node.

sudo apt-get update
sudo apt-get -y upgrade

We are now ready to install the Neutron service and the relevant OVS plugin:

sudo apt-get -y install quantum-server quantum-plugin-openvswitch

- We can now configure the relevant configuration files for Neutron. The first step is to configure Neutron to use Keystone. To do this, we edit the /etc/quantum/api-paste.ini file:

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 172.16.0.200
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = quantum
admin_password = quantum

- We then edit the /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file. First, we configure the database credentials to point to our MySQL installation:

[DATABASE]
sql_connection = mysql://quantum:openstack@172.16.0.200/quantum

- Next, find the section called [OVS]. We need to edit this section to include the following values:

[OVS]
tenant_network_type = gre 
tunnel_id_ranges = 1:1000 
integration_bridge = br-int 
tunnel_bridge = br-tun
# local_ip = # We don't set this on the Controller 
enable_tunneling = True

Then, finally, we ensure that there is a section called [SECURITYGROUP], which we use to tell Neutron which security group firewall driver to utilize. This allows us to define security groups in Neutron as well as with Nova commands:

[SECURITYGROUP]
# Firewall driver for realizing quantum security group function
firewall_driver = quantum.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

We must ensure that our Neutron server configuration is pointing at the right RabbitMQ server in our environment. Edit /etc/quantum/quantum.conf, locate the following line, and edit it to suit our environment:

rabbit_host = 172.16.0.200

We need to edit the familiar [keystone_authtoken] section, located at the bottom of the file, to match our Keystone environment:

[keystone_authtoken] 
auth_host = 172.16.0.200 
auth_port = 35357 
auth_protocol = http 
admin_tenant_name = service 
admin_user = quantum
admin_password = quantum
signing_dir = /var/lib/quantum/keystone-signing

We can now configure the /etc/nova/nova.conf file to tell the OpenStack Compute components to utilize Neutron. Add the following lines under [DEFAULT] in our /etc/nova/nova.conf file:

# Network settings
network_api_class=nova.network.quantumv2.api.API 
quantum_url=https://172.16.0.200:9696/ 
quantum_auth_strategy=keystone 
quantum_admin_tenant_name=service 
quantum_admin_username=quantum 
quantum_admin_password=quantum 
quantum_admin_auth_url=https://172.16.0.200:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver 
service_quantum_metadata_proxy=true 
quantum_metadata_proxy_shared_secret=foo

Restart our Neutron services running on this node to pick up the changes:

sudo service quantum-server restart

Restart our Nova services running on this node to pick up the changes in the /etc/nova/nova.conf file.

ls /etc/init/nova-* | cut -d '/' -f4 | cut -d '.' -f1 | while read S; do sudo stop $S; sudo start $S; done
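That one-liner stops and starts every Nova Upstart job on the node; a more readable equivalent of the same loop is:

# Restart each nova-* service defined under /etc/init
for S in $(ls /etc/init/nova-* | cut -d '/' -f4 | cut -d '.' -f1); do
    sudo stop $S
    sudo start $S
done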

 

How it works…

Configuring our Neutron service on the Controller node is very straightforward. We install a couple of extra packages:

Neutron:

quantum-server quantum-plugin-openvswitch

Once installed, we utilize the same /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini file with only one difference: the local_ip setting is omitted on the server, as it is only used on the agent nodes (Compute and Network).

Lastly, we configure /etc/nova/nova.conf, the all-important configuration file for our OpenStack Compute services.

network_api_class=nova.network.quantumv2.api.API

This tells our OpenStack Compute service to use Neutron networking.

quantum_url=https://172.16.0.200:9696/

This is the address of our Neutron server API (running on our Controller node).

quantum_auth_strategy=keystone

This tells Neutron to utilize the OpenStack Identity and Authentication service, Keystone.

quantum_admin_tenant_name=service

The name of the service tenant in Keystone.

quantum_admin_username=quantum

The username that Neutron uses to authenticate with Keystone.

quantum_admin_password=quantum

The password that Neutron uses to authenticate with Keystone.

quantum_admin_auth_url=https://172.16.0.200:35357/v2.0

The address of our Keystone service.

libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver

This tells Libvirt to use the OVS Bridge driver.

linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver

This is the driver used to create Ethernet devices on our Linux hosts.

firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

This is the driver to use when managing the firewalls.

service_quantum_metadata_proxy=true

This allows us to utilize the metadata proxy service that passes requests from Neutron to the nova-api service.

quantum_metadata_proxy_shared_secret=foo

In order to utilize the proxy service, we set a random key (foo, in this case) that must match on all nodes running this service, to ensure a level of security when passing proxy requests.
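Once the environment is up, you can exercise the full proxy path from inside a running instance; the well-known metadata address below is intercepted by Neutron and forwarded to the nova-api metadata service:

# Run from within a booted instance
curl http://169.254.169.254/latest/meta-data/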

 
