So we’re now at Part 4 of the OpenStack Lab series. In the prior posts, I discussed selecting hardware, the thought process behind those choices, and how I started prepping the environment. I’m writing this post in conjunction with each step I take, so as to not miss anything, and I’ve listed every little gotcha experienced along the way, because well.. I’m not perfect. I also wanted this post to be as realistic as can be, and accuracy is best had by documenting in parallel with “doing”.
Where I am at this point
A quick rundown:

- All 4 nodes are configured, running, and installed with Ubuntu 14.04 LTS. The configurations are pretty solid up to this point, including having all the interfaces configured.
- I replaced the 2 x 8-port Gigabit switches I originally purchased with a single 24-port Netgear Gigabit switch I picked up on Craigslist for $40.00 (not a bad deal, huh?).
- I selected line/power conditioners instead of using a cheap surge protector or a more expensive UPS. That gave me a nice balance: good power protection without spending an exorbitant amount on a “good” dual-conversion UPS.
- I extended my home powerline network with another TP-Link module, giving me internet access and connectivity to/from the rest of the network while keeping a relatively unrestricted “Gigabit” backplane across the 4 nodes.
- I added another 2 TB of hybrid drives to the controller node, in RAID-0 for better IOPS and performance, for Cinder volume usage.
- I wrote a quick command sequence/script to remotely power on all 4 nodes so they’re not sitting around eating power while I’m not working on the lab, plus the ability to power them all off with a single command. This has already saved me a lot of headache.
Here’s a video of the setup as it is right now in my garage.
Once I get the initial setup configured, I will be installing the InfiniBand switch and cards to make it much zippier / peppier.
In the meantime, let’s move on to the next step(s).
Ubuntu and little changes / things to deal with
While configuring the network interfaces manually, we also need to configure DNS servers. On a traditional non-Ubuntu install, we would just add our DNS servers to /etc/resolv.conf and be done with the whole thing. Guess what? Not going to happen here.
The resolv.conf file is automatically generated at boot time using the resolvconf utility and any changes you would make by manually editing the file will be lost on reboot. Fret not fellow administrator! It’s still relatively easy to do. The difference is you won’t be modifying /etc/resolv.conf directly anymore.
Why was this changed? Honestly, I have no idea, but I suspect it’s an attempt to make things easier to manage. For a command-line guy like me, though, it’s just an extra step and extra fluff that isn’t needed.
Rather than going through the distro to remove / disable resolv.conf auto-generation, let’s just stick to the rules of the game and configure resolvconf to do its job.
Just edit /etc/resolvconf/resolv.conf.d/base as you would the original resolv.conf file.
Mine simply contains a couple of nameserver lines.
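The base file just holds ordinary resolv.conf directives. A minimal example, with Google’s public resolvers standing in as placeholders for whatever your network actually uses:

```
nameserver 8.8.8.8
nameserver 8.8.4.4
```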
That’s really it. Just modify that file instead. Now, you can just as easily type "resolvconf -u" and it will update and regenerate the resolv.conf file. OR you can do what I did, which is to simply restart the nodes and see if all of it works.
To restart a node, log into it and type "shutdown -r now". Or.. remember those two little scripts I created? Simply type, via the Cygwin console, "bash clusteroff && bash clusteron" and all four nodes will be shut down and restarted in parallel.
Like I said, I’m all about efficiency (meh.. call it what it is.. Lazy)
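For the curious, here’s roughly what those scripts look like, sketched as shell functions. This is a hedged sketch, not my exact scripts: the MAC addresses and hostnames are placeholders, and it assumes the wakeonlan package is installed and passwordless root SSH works to each node.

```shell
# clusteron -- wake all four nodes with Wake-on-LAN magic packets.
# MAC addresses below are placeholders; substitute your NICs' real ones.
clusteron() {
    for mac in 00:11:22:33:44:01 00:11:22:33:44:02 \
               00:11:22:33:44:03 00:11:22:33:44:04; do
        wakeonlan "$mac"
    done
}

# clusteroff -- shut all four nodes down in parallel over SSH.
# Hostnames are placeholders for my four lab machines.
clusteroff() {
    for host in controller compute1 compute2 compute3; do
        ssh root@"$host" 'shutdown -h now' &
    done
    wait
}
```

Drop each function body into its own file (clusteron / clusteroff) and you get the one-liner behavior above.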
At this point, all four machines should have internet access and be configured properly.
Setting up the First Node – The Controller
The controller is exactly what it says it is: the central nexus of your OpenStack install. It keeps track of which VMs connect to which storage, which network routes and IPs they have, and all the wonderful metadata that goes with it. In this install, the controller will also run the Keystone service. What is Keystone? The Identity Management service. Simply put, it’s the security service. It keeps track of which users have what keys and, when combined with the rest of the controller functions, it orchestrates the instantiation and de-instantiation (is that even a word??) of everything “aaS” via the RabbitMQ message queue.
So let’s finally dig in and set up our very first Openstack node.
Everything in OpenStack is dependent on timing. If your clocks are off, all holy hell will break loose. We’re going to make sure time is correct and synchronized across all of the nodes, but it’s absolutely critical to do this on the controller.
Execute the following command:
apt-get install ntp -y
This installs the ntp service.
Let’s just implement one little change before starting it up. We’ll want to set the iburst option in /etc/ntp.conf because it speeds up initial synchronization: ntpd sends a burst of packets at startup instead of waiting out the normal polling interval.
sed -i "s/ubuntu.pool.ntp.org/ubuntu.pool.ntp.org iburst/g" /etc/ntp.conf
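If you want to see exactly what that sed does before touching the real config, run it against a scratch copy first. The server lines below mirror Ubuntu’s default pool entries; /tmp/ntp.conf.test is just a throwaway file for the demonstration.

```shell
# Scratch copy with Ubuntu's default NTP pool entries.
cat > /tmp/ntp.conf.test <<'EOF'
server 0.ubuntu.pool.ntp.org
server 1.ubuntu.pool.ntp.org
EOF

# Same substitution as above: append "iburst" to each pool server line.
sed -i "s/ubuntu.pool.ntp.org/ubuntu.pool.ntp.org iburst/g" /tmp/ntp.conf.test

cat /tmp/ntp.conf.test
# server 0.ubuntu.pool.ntp.org iburst
# server 1.ubuntu.pool.ntp.org iburst
```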
Now, just start it up with the command:
service ntp start
Install MySQL and the Python MySQL bindings with the following command:
apt-get install python-mysqldb mysql-server
REMEMBER TO Document your Passwords!
Edit /etc/mysql/my.cnf. In the [mysqld] stanza, set the bind address so MySQL listens on all interfaces:

bind-address = 0.0.0.0

Then, still in the [mysqld] stanza, add the following:

# OpenStack-specific changes
default-storage-engine = innodb
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Now save the file and restart mysql with the following command:
service mysql restart
Now run mysql_secure_installation to set up MySQL properly. Here is what I did.. just follow along:
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MySQL
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MySQL to secure it, we'll need the current
password for the root user. If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n]
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n

By default, MySQL comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
- Dropping test database...
ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
... Failed! Not critical, keep moving...
- Removing privileges on test database...

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y

All done! If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!
This basically removes the anonymous users and the test database, if there was one. There’s no need to keep the test database around; it only exists for, well, testing.
Moving on, let’s upgrade Ubuntu just to make sure it’s up to date. Type the following commands:
apt-get install python-software-properties -y
apt-get install software-properties-common -y
apt-get update && apt-get dist-upgrade -y && shutdown -r now
This will effectively update your server and reboot it 🙂
OK. After the reboot, let’s install RabbitMQ, because we need a message broker.
apt-get install rabbitmq-server
It comes configured with a default account whose credentials are guest / guest.
Just in case you want to change it, you would use the following command:
rabbitmqctl change_password guest rabbitmq_passwd
where "rabbitmq_passwd" is the actual password you want it set to.
OK.. I’m burnt from all this writing and transcribing. It’s tiring trying to be 100% accurate. I could easily just install all this in 20 minutes and go back to document it all in a post, but it honestly wouldn’t be feature complete. I know I would forget something, because it would be seemingly obvious to me, but as many of you know, when you’re following a blog post and it’s not complete / perfect, it’s incredibly annoying. So bear with me and the posts will continue. I have no idea how many different parts / postings this will be, but I’m sure it will be more than 10, but less than 100.. lol.
Yes, I could easily just use a pre-rolled script and call it a day, but would you learn anything? Not really. You’d learn how to set it up, but you’d have no idea how to troubleshoot / administer OpenStack, which effectively makes you useless when things break. This lab series is about doing it manually and learning what everything does and how it all fits together. So please bear with me and be patient.
I have a feeling many of you have been considering doing something like this with old hardware you have lying around. You’ve been searching for a good “walk-through” that will actually explain a few concepts and is complete. That is what this blog post series is about. It’s not about cutting corners just to get the lab up and running. For those of you that are following along but haven’t committed to doing this yet, you could take the time between my posts to actually set up your hardware and follow along 🙂