This post is the first in a series documenting my progress through installing and configuring an OpenStack lab, focusing primarily on the configuration of the network.
The series will roughly look like the following:
- Set up VirtualBox and virtual machines using Vagrant.
- Install and configure a ZTP server and boot the switches.
- Configure the network underlay (eBGP) using Ansible.
- Configure the network overlay (VXLAN with EVPN) using Ansible.
- Initial deployment of the OpenStack servers using OpenStack-Ansible.
- Configuration of OpenStack.
- Integration of OpenStack Neutron with our Cumulus switches.
If you’re reading this sentence, the above is subject to change as I haven’t yet written all of the posts.
I’ve recently taken an interest in a few different topics.
These are, in no particular order:
- OpenStack: A cloud computing software platform. Neutron, the networking project, is especially interesting.
- Cumulus Linux: A network operating system that you can load onto ‘whitebox’ switches.
- Vagrant: Reproducible Virtual Machine deployment as code.
- ZTP (Zero Touch Provisioning): Used to provision a network device in the same way that a server can be, using DHCP.
- VXLAN (Virtual Extensible LAN): A layer 2 overlay/tunnelling network protocol used primarily in data centres.
- EVPN (Ethernet VPN): An MP-BGP address family used to route layer 2 addresses, commonly used in conjunction with VXLAN to build routable layer 2 data centre fabrics.
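To make the ZTP idea concrete: a DHCP server can hand a booting switch an IP address along with the URL of a provisioning script, which Cumulus Linux reads from DHCP option 239 (`cumulus-provision-url`). A hypothetical ISC dhcpd fragment (addresses and URL are placeholders) might look like:

```
# Declare the Cumulus ZTP option (code 239, a text string)
option cumulus-provision-url code 239 = text;

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.100 192.168.0.150;
  # Where the booting switch should fetch its provisioning script
  option cumulus-provision-url "http://192.168.0.2/ztp.sh";
}
```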
Being a masochist, or rather a person who likes to learn as much as possible when I’m not procrastinating, I thought it best to build a lab network that would use all of the above technologies and document my progress.
I’ve drawn some quick diagrams that I’ll be using when setting up the lab.
The first network interface on each of the switches (adapter 1, which Cumulus uses as its management interface, “eth0”) will be connected to a bridged adapter on my host machine (which also has access out to the internet).
I’ll also be creating three Ubuntu Server 16.04 VMs – one ZTP server and two to build the OpenStack ‘cloud’. Each of the server VMs will have its first network interface connected to the bridged adapter too.
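As a rough sketch of the Vagrantfile this layout implies (box name, VM names, and the host bridge interface are assumptions; the actual file is the subject of the next post):

```ruby
# Hypothetical sketch: three Ubuntu 16.04 server VMs, each with its
# first interface bridged to the host machine.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  ["ztp", "openstack1", "openstack2"].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      # "public_network" is Vagrant's bridged networking mode
      node.vm.network "public_network", bridge: "eth0"
    end
  end
end
```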
And the production network diagram:
After some initial research into both OpenStack and Cumulus (and using them together) I’ve decided to go with a leaf/spine network design. I’ve mainly chosen this because it is a common data centre network design and so is well documented.
The two OpenStack servers will each have two interfaces connected to two ‘top of rack’ switches, and will speak BGP with their respective leaf switches to advertise their routes.
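As a rough idea of what that host-side peering might look like, here is a hypothetical FRR/Quagga-style snippet; the AS numbers, addresses, and prefixes are all placeholders, and the real configuration will be covered later in the series:

```
router bgp 65101
  ! one eBGP session per uplink, towards each top-of-rack leaf
  neighbor 10.0.1.1 remote-as 65201
  neighbor 10.0.2.1 remote-as 65202
  ! advertise this host's local prefix to the leaves
  network 172.16.1.0/24
```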
I’ll go over the more specific choices when we get around to configuring the switches.
This was just a quick post, meant as a foundation for the rest of the series and for me to document before I start configuring anything.
The next post in the series covers using Vagrant to set up the environment in VirtualBox.