OpenStack Lab Network - ZTP Server (Part 1)

Configuring a DHCP server with Ansible

Posted by Michael Wadman on June 8, 2018


This post is the third in a series documenting my progress through installing and configuring a small OpenStack lab.

For other posts in this series, see the overview section of the introduction post.

In the last post, we covered how to provision the switches and servers using Vagrant. This post covers installing and configuring DHCP on the ZTP server using Ansible.


The first thing we need to do before we can configure the ZTP server is give it an IP address. This would usually be handled by Vagrant but, as I covered in the last post, that isn’t possible when we set the first interface to be bridged.

To do this we need to boot the machine, using the following command:

$ vagrant up openstack_ztp

Setting a Static IP address

Give it a minute or two and then connect to the machine through the VirtualBox console. To log in, use the username/password of “vagrant”/“vagrant” (the default for all Vagrant boxes).

There is just one place where we need to change the configuration: under “/etc/network/interfaces”, change the section below so that the interface uses a static address:

# The primary network interface

auto enp0s3
iface enp0s3 inet static

Once completed, we can reset the interface so that it picks up the new address:

$ sudo ifdown enp0s3 && sudo ifup enp0s3

Let’s verify that the new address has been picked up before we proceed.

$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:80:ff:83 brd ff:ff:ff:ff:ff:ff
    inet brd scope global enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe80:ff83/64 scope link
       valid_lft forever preferred_lft forever

Connecting Vagrant

Meanwhile, back in the terminal session in which you launched the “vagrant up” command, you should see a message like the one below:

Timed out while waiting for the machine to boot. This means that
Vagrant was unable to communicate with the guest machine within
the configured ("config.vm.boot_timeout" value) time period.

This simply means that we took too long configuring the IP address of the machine, and Vagrant timed out trying to connect to it. To remedy this, shut the machine down and then run the same “vagrant up” command again.

This should result in a happier looking output and a successful exit from the command.

==> openstack_ztp: Machine booted and ready!

For one last test, we’ll ssh into this from the host machine:

$ ssh -l vagrant
password:
Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-29-generic x86_64)

 * Documentation:
 * Management:
 * Support:
vagrant@vagrant:~$ exit

Installing Ansible

We’re going to use Ansible to configure the ZTP server (and all other guests).

To install Ansible on Ubuntu, we need to add the official PPA to our apt sources and then install the package itself. (There is an “ansible” package in the default repositories, but it isn’t kept up to date.)

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

Once installed, we can run the command below to ensure that everything installed correctly and to confirm that the version is up to date:

$ ansible --version | head -n 1
ansible 2.6.2

A fresh install of Ansible will create the configuration directory “/etc/ansible”, which includes the files “ansible.cfg” and “hosts”. I won’t go into detail as to what these files do in this post.

If you are curious to learn more about Ansible, you can always check out the slides that I created for a presentation here.

We need to own these files ourselves if we want to do anything with them:

$ sudo chown -R $USER:$USER /etc/ansible/

Ansible Setup


We’re going to make just a few changes to the defaults in “ansible.cfg”. I’ve simply uncommented/changed the following lines to look like the below:

forks = 20                  # Increases the number of simultaneous connections Ansible can make
nocows = 1                  # Turns off cowsay during Ansible runs
retry_files_enabled = False # Disables the creation of retry files
pipelining = True           # Reuses the same SSH session for multiple tasks on the same host

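Note that these settings don’t all live in the same section of “ansible.cfg”: “forks”, “nocows”, and “retry_files_enabled” sit under “[defaults]”, while “pipelining” belongs under “[ssh_connection]”. A sketch of the relevant sections:

```
[defaults]
forks = 20
nocows = 1
retry_files_enabled = False

[ssh_connection]
pipelining = True
```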


In the hosts file, we’re going to wipe everything already present and create the entries for all of our guests in the OpenStack lab now, so that we don’t need to come back and touch this file again:

# OpenStack Lab #
[openstack_lab:children]
openstack_cumulus
openstack_ztp
openstack_server

[openstack_lab:vars]
ansible_ssh_common_args="-o StrictHostKeyChecking=no"
ansible_ssh_extra_args="-o StrictHostKeyChecking=no"

# Cumulus Switches
[openstack_cumulus]
openstack-cumulus-spine01 ansible_host=
openstack-cumulus-spine02 ansible_host=
openstack-cumulus-leaf01 ansible_host=
openstack-cumulus-leaf02 ansible_host=
openstack-cumulus-leaf03 ansible_host=
openstack-cumulus-leaf04 ansible_host=

# ZTP Server
[openstack_ztp]
cumulus-ztp ansible_host=

# OpenStack Hosts
[openstack_server]
openstack-control ansible_host=
openstack-compute ansible_host=
In the above we define a parent group, [openstack_lab], and assign some variables to it so that Ansible knows how to log in.

Next, we define child groups so that we can run playbooks against certain subsets of hosts.
For example, we don’t want the configuration we apply to the switches to be applied to the ZTP server, or vice versa.

With the above defined, we should be able to test connectivity using the ansible ping module:

$ ansible -m ping openstack_ztp
cumulus-ztp | SUCCESS => {
    "changed": false,
    "ping": "pong"
}


Ansible Playbook

We’ll start off by writing the playbook that we’ll call to apply the configuration to the host.

Firstly, because I like to organise all of my playbooks into a directory structure, I’m going to create the “/etc/ansible/playbooks” directory. Under this directory I’m creating a new file named “openstack_ztp.yml”, with the following content:

- name: Installing and Configuring ISC’s DHCP Server
  hosts: openstack_ztp
  become: true
  gather_facts: true
  roles:
    - role: isc-dhcp

I’ve named the role “isc-dhcp”, as we’re going to be implementing ZTP (and DHCP) using the isc-dhcp-server package on Ubuntu.

Ansible Role

There are already a ton of good roles out there for installing isc-dhcp-server. The DebOps dhcpd role is a great example, so I’ll clone this as a submodule into my local repository:

/etc/ansible$ git submodule add roles/isc-dhcp

Most likely your “/etc/ansible” directory isn’t a git repository, in which case just copy the files from the repository instead:

$ git clone --depth=1 /etc/ansible/roles/isc-dhcp && rm -rf !$/.git

Before we proceed, we need to remove the dependency of this role on the “debops.secret” role, as we’re not going to be using functionality that requires it. In the file “/etc/ansible/roles/isc-dhcp/meta/main.yml”, change the “dependencies” section to look like the below:

dependencies: []
#  - role: debops.secret

Now we can run the playbook and have Ansible configure our DHCP server:

$ ansible-playbook /etc/ansible/playbooks/openstack_ztp.yml
cumulus-ztp                : ok=6    changed=4    unreachable=0    failed=0

This results in the following configuration on our ZTP server:

$ cat /etc/dhcp/dhcpd.conf
# Ansible managed

not authoritative;

default-lease-time 64800;
max-lease-time 86400;

log-facility local7;

option domain-name "vm";

option domain-search "vm";
option dhcp6.domain-search "vm";

option domain-name-servers,;

# Generated automatically by Ansible

subnet netmask {
        option routers;
}

This is great because a subnet has already been created, but our hosts still won’t get an address (nor will the switches be able to ZTP themselves). For that, we need to set some variables.

Ansible Variables

As per the documentation on the DebOps dhcpd role, we can either configure the subnet with a pool or set host entries.
Since I prefer the hosts route in this scenario, I’m going to create an Ansible variable named “dhcpd_hosts” and define each of our hosts.
I’ll put this variable into the openstack_lab group variables file “/etc/ansible/group_vars/openstack_lab/vars.yml”.

An example host looks like the following:

dhcpd_hosts:
  - hostname: cumulus-spine01
    address: ''
    ethernet: '08:00:27:00:00:01'

Note that I’m setting the MAC address as per the static entries that were set in the Vagrantfile in my previous post.
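Since the leaf entries differ only in their final digit, a quick shell loop can stamp out the YAML for them. This is purely a convenience sketch; the 08:00:27:00:00:1x MAC suffixes match the static entries from the Vagrantfile.

```shell
# Print a dhcpd_hosts entry for each of the four leaf switches.
# MAC suffixes 11-14 match the static entries set in the Vagrantfile.
for i in 1 2 3 4; do
  printf -- "  - hostname: cumulus-leaf0%s\n" "$i"
  printf -- "    ethernet: '08:00:27:00:00:1%s'\n" "$i"
done
```

The two printf lines per iteration emit the eight YAML lines for cumulus-leaf01 through cumulus-leaf04, ready to paste under “dhcpd_hosts”.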


With this set for all of our hosts, each of them will now get an IP address on booting. That isn’t good enough for our switches if we want to ZTP them.

Cumulus has good documentation on how to configure ZTP.
According to the page linked, we simply need to define DHCP option 239 (what the switches request when they ZTP boot), give this option a value and then associate it with the appropriate hosts.
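In raw dhcpd.conf terms, that boils down to two lines: one global option definition, and one option value inside each switch’s host block (the URL below is a placeholder):

```
# Define option 239 once, globally:
option cumulus-provision-url code 239 = text;

# Then hand out a value inside each switch's host block:
option cumulus-provision-url "http://192.0.2.10/ztp.sh";
```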

In terms of variables in Ansible, this is pretty simple.
Again, we’ll use the variable names that the dhcpd role is expecting, which in this case are “dhcpd_options” and the “options” key under each host entry in the “dhcpd_hosts” dictionary.

With these defined (and a little bit of variable-ization), our file ends up looking like the following:

dhcpd_options: "{{ ztp_option_name }} code 239 = text;"
dhcpd_hosts:
  - hostname: cumulus-spine01
    address: ''
    ethernet: '08:00:27:00:00:01'
    options: '{{ ztp_option_name }} "{{ ztp_url }}";'
  - hostname: cumulus-spine02
    address: ''
    ethernet: '08:00:27:00:00:02'
    options: '{{ ztp_option_name }} "{{ ztp_url }}";'
  - hostname: cumulus-leaf01
    address: ''
    ethernet: '08:00:27:00:00:11'
    options: '{{ ztp_option_name }} "{{ ztp_url }}";'
  - hostname: cumulus-leaf02
    address: ''
    ethernet: '08:00:27:00:00:12'
    options: '{{ ztp_option_name }} "{{ ztp_url }}";'
  - hostname: cumulus-leaf03
    address: ''
    ethernet: '08:00:27:00:00:13'
    options: '{{ ztp_option_name }} "{{ ztp_url }}";'
  - hostname: cumulus-leaf04
    address: ''
    ethernet: '08:00:27:00:00:14'
    options: '{{ ztp_option_name }} "{{ ztp_url }}";'
  - hostname: openstack-control
    address: ''
    ethernet: '08:00:27:00:00:31'
  - hostname: openstack-compute
    address: ''
    ethernet: '08:00:27:00:00:32'

ztp_option_name: "option cumulus-provision-url"
ztp_url: "http://{{ ansible_default_ipv4.address }}/{{ ztp_filename }}"
ztp_filename: ""

Which results in the following configuration on our ZTP host:

$ cat /etc/dhcp/dhcpd.conf
# Ansible managed

not authoritative;

default-lease-time 64800;
max-lease-time 86400;

log-facility local7;

option domain-name "vm";

option domain-search "vm";
option dhcp6.domain-search "vm";

option domain-name-servers;

# Configuration options
option cumulus-provision-url code 239 = text;

# Generated automatically by Ansible
subnet netmask {
        option routers;
}

host cumulus-spine01 {
        option cumulus-provision-url "";
        hardware ethernet 08:00:27:00:00:01;
}
host cumulus-spine02 {
        option cumulus-provision-url "";
        hardware ethernet 08:00:27:00:00:02;
}
host cumulus-leaf01 {
        option cumulus-provision-url "";
        hardware ethernet 08:00:27:00:00:11;
}
host cumulus-leaf02 {
        option cumulus-provision-url "";
        hardware ethernet 08:00:27:00:00:12;
}
host cumulus-leaf03 {
        option cumulus-provision-url "";
        hardware ethernet 08:00:27:00:00:13;
}
host cumulus-leaf04 {
        option cumulus-provision-url "";
        hardware ethernet 08:00:27:00:00:14;
}
host openstack-control {
        hardware ethernet 08:00:27:00:00:31;
}
host openstack-compute {
        hardware ethernet 08:00:27:00:00:32;
}

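If you want to spot-check a stanza without re-running the playbook, a throwaway shell helper (hypothetical, with a placeholder URL) shows the shape dhcpd expects for each switch:

```shell
# Render an ISC dhcpd host stanza for a switch.
# $1 = hostname, $2 = MAC address, $3 = provisioning URL (placeholder below)
render_host() {
  printf 'host %s {\n' "$1"
  printf '        option cumulus-provision-url "%s";\n' "$3"
  printf '        hardware ethernet %s;\n' "$2"
  printf '}\n'
}

render_host cumulus-spine01 08:00:27:00:00:01 "http://192.0.2.10/ztp.sh"
```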

We’ve now booted, set an IP, and configured DHCP on the ZTP server.

We could boot our OpenStack servers now and they would be ready to configure themselves using Ansible. If we booted our Cumulus switches, they would get an IP address and know where to look for their ZTP script - but they wouldn’t find any script file at that location yet.

In the next post, I’ll cover the configuration of NGINX to serve this file to the switches so that we can finally boot them fully.



References

DebOps dhcpd Ansible Role
Chris Jean’s post about Git Submodules
Cumulus Technical Documentation - Zero Touch Provisioning

Versions used

Desktop Machine: kubuntu-18.04
VirtualBox: virtualbox-5.2.10
Vagrant: 2.1.2
Cumulus VX Vagrant Box: CumulusCommunity/cumulus-vx (virtualbox, 3.6.2)
Ubuntu Server Vagrant Box: geerlingguy/ubuntu1804 (virtualbox, 1.0.6)
Ansible: 2.6.2