Taking the time to do something properly the first time will save you a lot of work in the future. This is especially true for managing machines. While it doesn’t seem like a big deal, powering up 4 machines individually is just a pain in the ass. The whole process of logging into 4 individual IPMIs and manually turning them on/off is painful after the second time.
Remember how I spoke about automation and orchestration? Well, here's our first task: building the first tool in our arsenal, one that will let us control power for all four nodes.
I’m going to write two scripts to do this. (Well, I’m not even going to call them scripts, but rather strings of commands).
DISCLAIMER: I know I can just write one script to do this and use functions as well as put in some sanity / status checks as well as exception handling, but if I did that, it would take significantly more time (ok.. more like 15 min.) to make it look nice and pretty. I will be doing that later, but for now, these are quick and dirty placeholder scripts (command strings) that will do the job.
Background on My Environment
I use Windows. I'm a Linux architect, but I'm also all over the place when it comes to my personal choice of technologies. I have a 17″ MacBook Pro running Boot Camp (Windows 7). Even the MacBook Pro itself has been bastardized / customized to suit my personal needs: 16 GB RAM and a 1 TB SSD. I choose the best technology for the job, and when it comes to dealing with clients, I run Windows 7, because the Microsoft Office suite is a necessity in my business.
I installed Cygwin and Console2 to get a bash prompt from Windows that is much more suited to my tastes. That gives me a native ssh client that isn't PuTTY, plus some basic tools for the local operating environment. It works well for me.
Now that I’ve said my piece so that I won’t get flamed and criticized into never-never land by the Linux diehards and Apple fanboys, I’ll continue.
Automated Cluster Power Control
So like I said, I have 4 nodes in my OpenStack cluster. Each one has an IPMI built in that allows for, among other things, remote power control. That lets me not only automate, but also orchestrate things down the road when we introduce orchestration (which will be part of this series).
The first step is to set up two scripts that will power the entire cluster on and off programmatically. In this case, the scripts simply reach out to each IPMI and turn the power on or off with a single command.
The manual command to do this is simply:
smcipmitool <ip address of ipmi> <username> <password> ipmi power up to turn it on.
smcipmitool <ip address of ipmi> <username> <password> ipmi power softshutdown to turn it off.
I simply placed these commands into two scripts called: clusteron and clusteroff
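Here's roughly what those two command strings look like as files. This is a minimal sketch, assuming SMCIPMITool is on your PATH; the IPMI addresses and the ADMIN/ADMIN credentials below are placeholders, so substitute your own:

```shell
#!/bin/bash
# clusteron / clusteroff - cluster-wide IPMI power control.
# Placeholder IPMI addresses and credentials; substitute your own.
IPMI_USER="ADMIN"
IPMI_PASS="ADMIN"
IPMI_HOSTS="192.168.150.160 192.168.150.161 192.168.150.162 192.168.150.163"

cluster_power() {
    # $1 is the SMCIPMITool power action: "up" or "softshutdown"
    for host in $IPMI_HOSTS; do
        smcipmitool "$host" "$IPMI_USER" "$IPMI_PASS" ipmi power "$1"
    done
}

# In clusteron, the last line is:  cluster_power up
# In clusteroff, it is:            cluster_power softshutdown
```

In practice I just drop that loop straight into each of the two files with the appropriate action hard-coded.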
There’s another little thing you’ll need to do, and that is download the SMCIPMITool utility from Supermicro’s website. I simply extracted the zip file and dropped everything into the /bin folder in c:\cygwin, effectively making all those commands available to me in bash without having to specify the complete path to the command.
If you’re running Linux, OS X, or anything else, there are Java versions of these tools as well, so compatibility on your platform shouldn’t be an issue.
Once you’ve created these commands, test them. Turn the servers on and off. Make sure it works.
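You can also check the result without walking over to the rack. A hedged sketch, using the same `<ip address of ipmi> <username> <password>` placeholders as the manual commands above; double-check that your SMCIPMITool build supports the `status` subcommand:

```shell
#!/bin/bash
# clusterstatus - report chassis power state for every node.
# Placeholder IPMI addresses and credentials; substitute your own.
IPMI_USER="ADMIN"
IPMI_PASS="ADMIN"

cluster_status() {
    for host in 192.168.150.160 192.168.150.161 192.168.150.162 192.168.150.163; do
        echo -n "$host: "
        smcipmitool "$host" "$IPMI_USER" "$IPMI_PASS" ipmi power status
    done
}
```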
While it sounds stupid, there’s a feeling of accomplishment that comes with watching your servers physically turn on and off remotely 🙂
Chocolate or Vanilla? Flavors of Linux; I choose Swirl
We all have our favorite flavors of Linux. Mostly, though, they fall into two major camps: CentOS/RHEL and Ubuntu. For the purposes of this project / lab, I’m going to install two separate clusters: one CentOS and one Ubuntu. The initial cluster will be on Ubuntu, with the secondary on CentOS. So let’s start with Ubuntu. I’m going to use the most recent 14.04 LTS version, because many of you will be deploying that in your production enterprises.
I’m also not going to go into very much detail about installing a base Ubuntu OS on a node, because, frankly, if you’re unable to do that, then this blog series will be way over your head and you’re just going to frustrate yourself into oblivion. There’s a difference between being dropped into the frying pan and pouring gasoline over yourself while lighting a match. Being unable to boot off a CD / USB / ISO and install the base OS is pretty much akin to the latter.
If you really want to learn OpenStack, I would recommend taking some online courses on Linux (many are free), and once you feel comfortable enough to get around a machine, copy a file, and configure a network interface, come back and review these posts again; they will make more sense at that point.
For the Microsoft professional who wants to learn and install OpenStack, I will be listing the commands and basic procedures in this post. The only thing you will need to know is how to use vim (or vi).
So I downloaded the ISO from ubuntu.com and created a bootable USB for installing 14.04 LTS. I set the BIOS on each of the 4 nodes to boot from USB as the first boot device and installed Ubuntu on each node. The only options I selected were Ubuntu Server and ssh server. Make sure you install the GRUB bootloader and write it to the boot block (this is all handled by the Ubuntu installer).
At this point, you should have four working installs of Ubuntu. For the most part, you’ve got what you need, but there’s a little *gotcha* we need to take care of up front.
GRUB’s default behavior after an unclean shutdown is to sit in a recovery / repair menu with no countdown. In other words, it won’t autoboot into the OS. This will prove to be very annoying the first time it happens. If you’re like me and your servers are somewhat inaccessible and you don’t want to log in to the IPMI manually just to “push enter”, we’re going to modify this behavior.
Log in to each node and, AS ROOT, perform the following changes (to become root, type “sudo -i” and enter your password).
Edit the file /etc/grub.d/00_header and find the recordfail branch, where GRUB sets an infinite timeout (set timeout=-1); change that line to a short countdown such as set timeout=10. Now save your work and run update-grub from your shell to regenerate the boot configuration.
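For reference, here’s the edit as a one-shot command. This is a hedged sketch: it assumes your 00_header still sets the infinite timeout via `set timeout=-1` in its recordfail branch, which is what mine had; eyeball the file first, since the grub scripts vary between releases:

```shell
#!/bin/bash
# Swap grub's infinite recordfail timeout for a 10-second countdown.
fix_recordfail() {
    local header="$1"            # normally /etc/grub.d/00_header
    cp "$header" "$header.bak"   # keep a backup of the original
    sed -i 's/set timeout=-1/set timeout=10/' "$header"
}

# As root:  fix_recordfail /etc/grub.d/00_header && update-grub
```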
That’s it! Now do this for all four nodes.
Interfaces and IPs and Nomenclature in general
Naming machines: we all have our preferences. I’m a vanilla kind of guy. Names should be descriptive, but short. I’ve aptly named my machines node0 through node3.
There are two interfaces (one public and one administrative) for each machine. On my internal network, I’ll use 192.168.150.0/24 for the public subnet and 192.168.160.0/24 for the administrative one.
In Ubuntu, setting interfaces is as easy as editing one file. The file is /etc/network/interfaces
Here’s what mine looks like:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary (public) network interface
# (example addresses shown for node0 - substitute your own)
auto eth0
iface eth0 inet static
        address 192.168.150.150
        netmask 255.255.255.0
        gateway 192.168.150.1

# The administrative network interface
auto eth1
iface eth1 inet static
        address 192.168.160.150
        netmask 255.255.255.0
Obviously, customize your file to reflect what you would like. I also have my IPMIs at 192.168.150.160 through .163. It helps to keep things common sense.
Now, you should be taking out a notepad and writing down all of your configurations, passwords, etc. I personally just use Google Docs; it keeps things readily available and in one place. Do what you will with yours. Just a suggestion (and if you go that route, make sure your Google account password is impossibly secure).
Check to make sure your four nodes are all working and configured properly.
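One quick way to run that check from your workstation is a ping sweep over both subnets. A rough sketch: I’m assuming here that the nodes sit at host addresses .150 through .153 on each subnet, so adjust the math to match your own addressing plan:

```shell
#!/bin/bash
# Ping node0-node3 once each on the public and administrative subnets.
# The .150-.153 host addresses are assumptions; edit to match your plan.
check_nodes() {
    local failed=0
    for i in 0 1 2 3; do
        for net in 192.168.150 192.168.160; do
            local addr="$net.$((150 + i))"
            if ping -c 1 -W 2 "$addr" >/dev/null 2>&1; then
                echo "node$i reachable at $addr"
            else
                echo "no reply from node$i at $addr"
                failed=1
            fi
        done
    done
    return $failed
}
```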
I’m a bit burnt out for this lesson now, since I have to perform each individual step and document it so the post is complete. Next post coming soon! Hope you’re all enjoying the series so far.