OpenStack is designed for highly scalable environments in which single points of failure (SPOFs) can be avoided, but you must build this resilience into your own environment. For example, Keystone is a central service underpinning your entire OpenStack environment, so you would run multiple instances of it. Glance, the image service, is another service that is key to the running of your OpenStack environment.
Corosync and Pacemaker are popular high-availability utilities that allow you to configure OpenStack services to fail over automatically. By setting up multiple instances running the Keystone and Glance services, controlled by Pacemaker and Corosync, we gain resilience to the failure of any of the nodes running these services.
We must first create two servers configured appropriately for use with OpenStack. As these two servers will run only Keystone and Glance, each requires just a single network interface, with an address on the network over which our OpenStack services communicate. This interface can be bonded for added resilience.
The first, controller1, will have a host management address of 172.16.0.111.
The second, controller2, will have a host management address of 172.16.0.112.
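If you do choose to bond the interface, a rough sketch of what this could look like in /etc/network/interfaces on Ubuntu follows. This is an illustration only, not part of the original steps: the ifenslave package, the eth0/eth1 slave interface names, and the /16 netmask are assumptions to adapt to your own hosts.

```
# Illustrative active-backup bond carrying the host management address.
auto bond0
iface bond0 inet static
    # Management address of controller1 (use 172.16.0.112 on controller2)
    address 172.16.0.111
    netmask 255.255.0.0
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth0 eth1
```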
To install Pacemaker and Corosync on these two servers that will be running OpenStack services such as Keystone and Glance, carry out the following:
sudo apt-get update
sudo apt-get -y install pacemaker corosync
Ensure that each host can resolve the other by name, for example by adding the following entries to /etc/hosts:

172.16.0.111 controller1.book controller1
172.16.0.112 controller2.book controller2
Next, edit the interface section of /etc/corosync/corosync.conf so that it reflects our network:

interface {
# The following values need to be set based on your environment
ringnumber: 0
bindnetaddr: 172.16.0.0
mcastaddr: 226.94.1.1
mcastport: 5405
}
Corosync uses multicast. Ensure that the values chosen don't conflict with any other multicast-enabled services on your network.
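As an illustrative aside (not from the original text), a valid IPv4 multicast address must fall in the range 224.0.0.0 to 239.255.255.255, which we can sanity-check for our chosen mcastaddr with a little shell:

```shell
#!/bin/sh
# Illustrative check: IPv4 multicast addresses have a first octet
# between 224 and 239 inclusive.
MCASTADDR=226.94.1.1

# Strip everything after the first dot to get the first octet.
FIRST_OCTET=${MCASTADDR%%.*}

if [ "$FIRST_OCTET" -ge 224 ] && [ "$FIRST_OCTET" -le 239 ]; then
    echo "$MCASTADDR is a valid multicast address"
else
    echo "$MCASTADDR is NOT a multicast address"
fi
```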
Corosync is not started by default on Ubuntu, so enable it in /etc/default/corosync:

sudo sed -i 's/^START=no/START=yes/g' /etc/default/corosync
Now generate the cluster authentication key; this requires entropy, so it can take some time:

sudo corosync-keygen
If you are using an SSH session, rather than a console connection, you won’t be able to generate the entropy using a keyboard. To do this remotely, launch a new SSH session, and in that new session, while the corosync-keygen command is waiting for entropy, run the following:
while /bin/true; do dd if=/dev/urandom of=/tmp/100 bs=1024 count=100000; for i in {1..10}; do cp /tmp/100 /tmp/tmp_${i}_$RANDOM; done; rm -f /tmp/tmp_* /tmp/100; done
On the second node, controller2, install the packages in the same way:

sudo apt-get update
sudo apt-get install pacemaker corosync
Again, ensure that the hosts can resolve each other by adding the following entries to /etc/hosts:

172.16.0.111 controller1.book controller1
172.16.0.112 controller2.book controller2
sudo sed -i 's/^START=no/START=yes/g' /etc/default/corosync
With the /etc/corosync/corosync.conf file modified and the /etc/corosync/authkey file generated on the first node, we copy them to the other node (or nodes) in our cluster, as follows:
scp /etc/corosync/corosync.conf /etc/corosync/authkey controller2:
We can now put the same corosync.conf file as used by our first node and the generated authkey file into /etc/corosync:
sudo mv corosync.conf authkey /etc/corosync
We can now start the Corosync and Pacemaker services on both nodes:

sudo service corosync start
sudo service pacemaker start
To check that the cluster has formed correctly, use the crm_mon utility:

sudo crm_mon -1
============
Last updated: Sat Aug 24 21:07:05 2013
Last change: Sat Aug 24 21:06:10 2013 via crmd on controller1
Stack: openais
Current DC: controller1 - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
2 Nodes configured, 2 expected votes
0 Resources configured.
============

Online: [ controller1 controller2 ]

First node (controller1)
On the first node, verify the cluster configuration; this initially reports an error, because STONITH (Shoot The Other Node In The Head) fencing is enabled but not configured:

sudo crm_verify -L
As we are not configuring STONITH fencing devices in this setup, disable it:

sudo crm configure property stonith-enabled=false
Verify the configuration again; it should now report no errors:

sudo crm_verify -L
Because a two-node cluster cannot retain a majority when one node fails, tell Pacemaker to ignore the loss of quorum:

sudo crm configure property no-quorum-policy=ignore
We now configure a floating IP address, 172.16.0.253, which the cluster will assign to whichever node is active:

sudo crm configure primitive FloatingIP ocf:heartbeat:IPaddr2 params ip=172.16.0.253 cidr_netmask=32 op monitor interval=5s
sudo crm_mon -1
This outputs something similar to the following example, which now shows one resource configured (our FloatingIP):
============
Last updated: Sat Aug 24 21:23:07 2013
Last change: Sat Aug 24 21:06:10 2013 via crmd on controller1
Stack: openais
Current DC: controller1 - partition with quorum
Version: 1.1.6-9971ebba4494012a93c03b40a2c58ec0eb60f50c
2 Nodes configured, 2 expected votes
1 Resources configured.
============

Online: [ controller1 controller2 ]

FloatingIP (ocf::heartbeat:IPaddr2): Started controller1
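As a brief illustrative aside (these commands are not part of the original steps and must be run on the cluster nodes themselves), you can confirm where the address is live:

```
# Ask Pacemaker which node currently runs the FloatingIP resource
sudo crm resource status FloatingIP

# On the active node, the floating address appears on the interface
ip addr show | grep 172.16.0.253
```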
Making OpenStack services highly available is a complex subject, and there are a number of ways to achieve it. Using Pacemaker and Corosync is a very good solution to this problem. It allows us to configure a floating IP address for the cluster that attaches itself to the appropriate node, and to control services through resource agents, so that the cluster manager can start and stop services as required to provide a highly available experience to the end user.
By installing both Keystone and Glance onto the two nodes (each configured with a remote database backend such as MySQL with Galera, and with the images held on a shared filesystem or cloud storage solution), we can configure these services under Pacemaker so that Pacemaker monitors them. If they become unavailable on the active node, Pacemaker can start them on the passive node.
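As a rough sketch of that last step (an assumption, not from the original text: the resource names and the lsb: init-script names keystone and glance-api vary by distribution and release), the services could be placed under Pacemaker control and grouped with the floating IP like this:

```
# Hypothetical resource definitions; adjust script names to your distro.
sudo crm configure primitive p_keystone lsb:keystone op monitor interval=30s
sudo crm configure primitive p_glance_api lsb:glance-api op monitor interval=30s

# Keep the services and the floating IP together on the active node
sudo crm configure group g_openstack FloatingIP p_keystone p_glance_api
```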