Intro
Until now, Charmed OpenStack required at least two physical network interfaces (or bonds) on any OVS nodes (neutron-gateway, neutron-openvswitch (in DVR) or ovn-chassis). It is now possible to deploy Charmed OpenStack with converged networking on a single physical network interface (or bond) using OpenVSwitch for bridges and VLANs.
This document will use Ussuri OpenStack on Ubuntu 20.04 (Focal Fossa) with OVN as the SDN.
Requirements
- MAAS >= 2.9 (snap version from channel: 2.9/candidate)
- Juju >= 3.0 (snap version from channel: latest/edge)
- Infrastructure networking configured for 3 VLANs
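If you are starting from scratch, the snaps can be installed from the channels listed above; a quick sketch (adjust the channels to your needs, and note that older Juju snaps require --classic):
sudo snap install maas --channel=2.9/candidate
sudo snap install juju --channel=latest/edge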
tl;dr
The vast majority of the setup for this solution is the infrastructure: MAAS and VLAN networking, some of which is beyond the scope of this document. Below, in the Infrastructure Hints section, I’ll point to documentation on doing a fully virtual test run: a virtual deployment of MAAS with the networking built on OpenVSwitch. I’ll also provide links to further documentation.
However, the complexity of the infrastructure setup hides the simplicity of actually using the solution. So we will begin with the end result. The configuration can be summarized in two steps:
- Configure machine networking in MAAS to bond physical interfaces, add an OpenVSwitch bridge, and then add any VLAN interfaces required.
- Configure ovn-bridge-mappings and bridge-interface-mappings in the ovn-chassis configuration in the bundle to match the bond name and bridge name setup in step 1.
That’s it really.
Quick Start
Continuing with the end result, here is the tl;dr documentation. The following assumes a working MAAS with three nodes commissioned and ready, Juju bootstrapped, and infrastructure networking with three VLANs.
Step 1: Configuring MAAS Networking
Connect to the existing MAAS
In these examples I use the hostname “superbond-maas.” So the URL will look like this: http://superbond-maas:5240/MAAS/r/machines
Configure maas-node-1.maas

Create a Bond
On the Network tab:
- Select enp1s0 and enp7s0.
- Click the Create Bond button.

Configure the bond
- Leave the subnet unconfigured.
- Remember the bond name, in this case: bond0.
- Save the bond.

Create an OVS bridge
- Select the newly created bond0.
- Click the Create Bridge button.

Configure the OVS Bridge
- Name the bridge. Remember the bridge name, in this case: br-ex.
- Select Open vSwitch (ovs) for the bridge type.
- Configure the untagged (native) VLAN.
- Select the subnet associated with the untagged VLAN.
- Select autoassign IP mode.
- Save the bridge.

Add VLANs to the Bridge
- On the newly created bridge, open the Actions drop-down menu and click Add alias or VLAN.

Configure the VLAN
- Set the type to VLAN.
- Set the VLAN.
- Select the subnet associated with the VLAN.
- Set autoassign IP mode.
- Add the VLAN.

Repeat adding VLANs as necessary. This example node has VLAN 100 and VLAN 200.

Three nodes ready
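For larger deployments, the same network layout can be scripted with the MAAS CLI instead of clicking through the UI for every node. The following is only a rough sketch: it assumes a CLI profile named admin, and the system ID, parent interface IDs, and VLAN IDs are hypothetical placeholders you would look up first (for example with maas admin machines read). Check maas admin interfaces create-bond --help and friends for the exact parameters.
# All IDs below are hypothetical placeholders
SYSTEM_ID=abc123
maas admin interfaces create-bond $SYSTEM_ID name=bond0 parents=5 parents=6 bond_mode=active-backup
maas admin interfaces create-bridge $SYSTEM_ID name=br-ex parent=7 bridge_type=OVS
# vlan= expects the MAAS VLAN resource ID, not necessarily the 802.1Q tag
maas admin interfaces create-vlan $SYSTEM_ID vlan=100 parent=8
maas admin interfaces create-vlan $SYSTEM_ID vlan=200 parent=8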

Step 2: Configure ovn-chassis settings
Get an example bundle from openstack-bundles
- git clone https://github.com/openstack-charmers/openstack-bundles
- cd openstack-bundles/development/openstack-converged-networking-focal-ussuri/
Set ovn-bridge-mappings and bridge-interface-mappings
- ovn-bridge-mappings maps physnet1 to the OpenVSwitch bridge configured in step 1: br-ex
- bridge-interface-mappings maps the OpenVSwitch bridge to the bond configured in step 1: br-ex:bond0
bundle.yaml snippet
ovn-chassis:
  charm: cs:ovn-chassis
  # *** Please update the `bridge-interface-mappings` to values suitable ***
  # *** for the hardware used in your deployment. See the referenced     ***
  # *** documentation at the top of this file.                           ***
  options:
    ovn-bridge-mappings: physnet1:br-ex
    bridge-interface-mappings: br-ex:bond0
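If the model is already deployed, the same mappings can also be set (or corrected) at run time rather than editing the bundle; for example:
juju config ovn-chassis ovn-bridge-mappings=physnet1:br-ex bridge-interface-mappings=br-ex:bond0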
Configure Spaces
Update the variables section in openstack-converged-networking-spaces-overlay.yaml to match your space definitions in MAAS. In good OpenStack form, we use the following spaces here:
- admin
- internal
- public
openstack-converged-networking-spaces-overlay.yaml snippet
variables:
  public-space: &public-space public
  internal-space: &internal-space internal
  admin-space: &admin-space admin
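You can confirm that Juju sees the MAAS spaces referenced by the overlay before deploying. If spaces were created in MAAS after the model was added, refresh them first:
juju reload-spaces
juju spaces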

Step 3: Deploy OpenStack
OK, I cheated: there are three steps. The first two configure the setup; now we actually deploy the solution.
Add a juju model
If you have not already, create a model for the OpenStack deployment on the juju controller.
juju add-model converged-networking
Deploy the OpenStack Bundle with the spaces overlay
juju deploy ./bundle.yaml --overlay openstack-converged-networking-spaces-overlay.yaml
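The deployment takes a while; you can keep an eye on it with something like:
watch --color juju status --color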
An unconfigured cloud
When the model settles, run juju status to find your unconfigured cloud.
Model Controller Cloud/Region Version SLA Timestamp
converged-networking virtual-maas virtual-maas 3.0-beta1 unsupported 18:21:55Z
App Version Status Scale Charm Store Channel Rev OS Message
ceph-mon 15.2.11 active 3 ceph-mon charmstore 50 ubuntu Unit is ready and clustered
ceph-osd 15.2.11 active 3 ceph-osd charmstore 306 ubuntu Unit is ready (1 OSD)
ceph-radosgw 15.2.11 active 1 ceph-radosgw charmstore 291 ubuntu Unit is ready
cinder 16.3.0 active 1 cinder charmstore 306 ubuntu Unit is ready
cinder-ceph 16.3.0 active 1 cinder-ceph charmstore 258 ubuntu Unit is ready
cinder-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
dashboard-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
glance 20.0.1 active 1 glance charmstore 301 ubuntu Unit is ready
glance-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
keystone 17.0.0 active 1 keystone charmstore 319 ubuntu Application Ready
keystone-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
mysql-innodb-cluster 8.0.23 active 3 mysql-innodb-cluster charmstore 74 ubuntu Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api 16.3.1 active 1 neutron-api charmstore 290 ubuntu Unit is ready
neutron-api-plugin-ovn 16.3.1 waiting 1 neutron-api-plugin-ovn charmstore 2 ubuntu 'certificates' awaiting server certificate data, 'ovsdb-cms' incomplete
neutron-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
nova-cloud-controller 21.2.0 active 1 nova-cloud-controller charmstore 348 ubuntu Unit is ready
nova-compute 21.2.0 active 3 nova-compute charmstore 322 ubuntu Unit is ready
nova-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
ntp 3.5 active 3 ntp charmstore 41 ubuntu chrony: Ready
openstack-dashboard 18.3.3 active 1 openstack-dashboard charmstore 308 ubuntu Unit is ready
ovn-central 20.03.2 waiting 3 ovn-central charmstore 2 ubuntu 'ovsdb-peer' incomplete, 'certificates' awaiting server certificate data
ovn-chassis 20.03.2 waiting 3 ovn-chassis charmstore 6 ubuntu 'certificates' awaiting server certificate data, 'ovsdb' incomplete
placement 3.0.0 active 1 placement charmstore 15 ubuntu Unit is ready
placement-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
rabbitmq-server 3.8.2 active 1 rabbitmq-server charmstore 106 ubuntu Unit is ready
vault 1.5.4 blocked 1 vault charmstore 141 ubuntu Vault needs to be initialized
vault-mysql-router 8.0.23 active 1 mysql-router charmstore 48 ubuntu Unit is ready
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 active idle 0/lxd/0 192.168.151.94 Unit is ready and clustered
ceph-mon/1* active idle 1/lxd/0 192.168.151.15 Unit is ready and clustered
ceph-mon/2 active idle 2/lxd/0 192.168.151.20 Unit is ready and clustered
ceph-osd/0 active idle 0 172.16.100.11 Unit is ready (1 OSD)
ceph-osd/1* active idle 1 172.16.100.12 Unit is ready (1 OSD)
ceph-osd/2 active idle 2 172.16.100.13 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 0/lxd/1 192.168.151.93 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 172.16.100.15 8776/tcp Unit is ready
cinder-ceph/0* active idle 172.16.100.15 Unit is ready
cinder-mysql-router/0* active idle 172.16.100.15 Unit is ready
glance/0* active idle 2/lxd/1 172.16.100.17 9292/tcp Unit is ready
glance-mysql-router/0* active idle 172.16.100.17 Unit is ready
keystone/0* active idle 0/lxd/2 172.16.100.20 5000/tcp Unit is ready
keystone-mysql-router/0* active idle 172.16.100.20 Unit is ready
mysql-innodb-cluster/0 active idle 0/lxd/3 172.16.200.21 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/1* active idle 1/lxd/2 172.16.200.17 Unit is ready: Mode: R/W, Cluster is ONLINE and can tolerate up to ONE failure.
mysql-innodb-cluster/2 active idle 2/lxd/2 172.16.200.19 Unit is ready: Mode: R/O, Cluster is ONLINE and can tolerate up to ONE failure.
neutron-api/0* active idle 1/lxd/3 172.16.100.16 9696/tcp Unit is ready
neutron-api-plugin-ovn/0* waiting idle 172.16.100.16 'certificates' awaiting server certificate data, 'ovsdb-cms' incomplete
neutron-mysql-router/0* active idle 172.16.100.16 Unit is ready
nova-cloud-controller/0* active idle 0/lxd/4 172.16.100.19 8774/tcp,8775/tcp Unit is ready
nova-mysql-router/0* active idle 172.16.100.19 Unit is ready
nova-compute/0 active idle 0 172.16.100.11 Unit is ready
ntp/2 active idle 172.16.100.11 123/udp chrony: Ready
ovn-chassis/2 waiting idle 172.16.100.11 'certificates' awaiting server certificate data, 'ovsdb' incomplete
nova-compute/1* active idle 1 172.16.100.12 Unit is ready
ntp/0* active idle 172.16.100.12 123/udp chrony: Ready
ovn-chassis/0* waiting idle 172.16.100.12 'certificates' awaiting server certificate data, 'ovsdb' incomplete
nova-compute/2 active idle 2 172.16.100.13 Unit is ready
ntp/1 active idle 172.16.100.13 123/udp chrony: Ready
ovn-chassis/1 waiting idle 172.16.100.13 'certificates' awaiting server certificate data, 'ovsdb' incomplete
openstack-dashboard/0* active idle 1/lxd/4 172.16.200.14 80/tcp,443/tcp Unit is ready
dashboard-mysql-router/0* active idle 172.16.200.14 Unit is ready
ovn-central/0 waiting idle 0/lxd/5 172.16.100.22 6641/tcp,6642/tcp 'ovsdb-peer' incomplete, 'certificates' awaiting server certificate data
ovn-central/1* waiting idle 1/lxd/5 172.16.100.14 6641/tcp,6642/tcp 'ovsdb-peer' incomplete, 'certificates' awaiting server certificate data
ovn-central/2 waiting idle 2/lxd/3 172.16.100.18 6641/tcp,6642/tcp 'ovsdb-peer' incomplete, 'certificates' awaiting server certificate data
placement/0* active idle 2/lxd/4 172.16.100.21 8778/tcp Unit is ready
placement-mysql-router/0* active idle 172.16.100.21 Unit is ready
rabbitmq-server/0* active idle 2/lxd/5 192.168.151.96 5672/tcp Unit is ready
vault/0* blocked idle 0/lxd/6 172.16.200.20 8200/tcp Vault needs to be initialized
vault-mysql-router/0* active idle 172.16.200.20 Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 172.16.100.11 maas-node-3 focal default Deployed
0/lxd/0 started 192.168.151.94 juju-c7a75f-0-lxd-0 focal default Container started
0/lxd/1 started 192.168.151.93 juju-c7a75f-0-lxd-1 focal default Container started
0/lxd/2 started 172.16.100.20 juju-c7a75f-0-lxd-2 focal default Container started
0/lxd/3 started 172.16.200.21 juju-c7a75f-0-lxd-3 focal default Container started
0/lxd/4 started 172.16.100.19 juju-c7a75f-0-lxd-4 focal default Container started
0/lxd/5 started 172.16.100.22 juju-c7a75f-0-lxd-5 focal default Container started
0/lxd/6 started 172.16.200.20 juju-c7a75f-0-lxd-6 focal default Container started
1 started 172.16.100.12 maas-node-1 focal default Deployed
1/lxd/0 started 192.168.151.15 juju-c7a75f-1-lxd-0 focal default Container started
1/lxd/1 started 172.16.100.15 juju-c7a75f-1-lxd-1 focal default Container started
1/lxd/2 started 172.16.200.17 juju-c7a75f-1-lxd-2 focal default Container started
1/lxd/3 started 172.16.100.16 juju-c7a75f-1-lxd-3 focal default Container started
1/lxd/4 started 172.16.200.14 juju-c7a75f-1-lxd-4 focal default Container started
1/lxd/5 started 172.16.100.14 juju-c7a75f-1-lxd-5 focal default Container started
2 started 172.16.100.13 maas-node-2 focal default Deployed
2/lxd/0 started 192.168.151.20 juju-c7a75f-2-lxd-0 focal default Container started
2/lxd/1 started 172.16.100.17 juju-c7a75f-2-lxd-1 focal default Container started
2/lxd/2 started 172.16.200.19 juju-c7a75f-2-lxd-2 focal default Container started
2/lxd/3 started 172.16.100.18 juju-c7a75f-2-lxd-3 focal default Container started
2/lxd/4 started 172.16.100.21 juju-c7a75f-2-lxd-4 focal default Container started
2/lxd/5 started 192.168.151.96 juju-c7a75f-2-lxd-5 focal default Container started
Confirm OVS on the ovn-chassis units
On one of the ovn-chassis units, you can confirm that OVS is running the show with the ovs-vsctl show command. The output shows our bond (bond0), our bridge (br-ex), the VLAN interfaces we created (br-ex.100 and br-ex.200), and all of the bridging interfaces that Juju needs for the LXD containers.
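For example, using the first unit from the status output above:
juju ssh ovn-chassis/0 sudo ovs-vsctl show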
9be88604-75bc-43f8-b667-d382607e6eaa
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        fail_mode: standalone
        datapath_type: system
        Port "1lxd5-1"
            tag: 100
            Interface "1lxd5-1"
        Port "1lxd3-1"
            tag: 100
            Interface "1lxd3-1"
        Port br-ex.200
            tag: 200
            Interface br-ex.200
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "1lxd1-1"
            tag: 100
            Interface "1lxd1-1"
        Port "1lxd4-1"
            tag: 200
            Interface "1lxd4-1"
        Port bond0
            Interface bond0
                type: system
        Port "1lxd2-0"
            Interface "1lxd2-0"
        Port "1lxd4-0"
            Interface "1lxd4-0"
        Port "1lxd5-0"
            Interface "1lxd5-0"
        Port "1lxd1-2"
            tag: 200
            Interface "1lxd1-2"
        Port "1lxd5-2"
            tag: 200
            Interface "1lxd5-2"
        Port "1lxd1-0"
            Interface "1lxd1-0"
        Port "1lxd3-2"
            tag: 200
            Interface "1lxd3-2"
        Port br-ex.100
            tag: 100
            Interface br-ex.100
                type: internal
        Port "1lxd0-0"
            Interface "1lxd0-0"
        Port "1lxd3-0"
            Interface "1lxd3-0"
        Port "1lxd2-1"
            tag: 200
            Interface "1lxd2-1"
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port br-int
            Interface br-int
                type: internal
        Port ovn-maas-n-0
            Interface ovn-maas-n-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.200.13"}
        Port ovn-maas-n-1
            Interface ovn-maas-n-1
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.200.11"}
    ovs_version: "2.13.3"
The ip address output looks like this:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 52:54:00:28:fd:fd brd ff:ff:ff:ff:ff:ff
3: enp7s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP group default qlen 1000
link/ether 52:54:00:28:fd:fd brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 52:54:00:28:fd:fd brd ff:ff:ff:ff:ff:ff
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 4e:be:67:9f:44:0c brd ff:ff:ff:ff:ff:ff
6: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 52:0d:d4:47:9d:46 brd ff:ff:ff:ff:ff:ff
inet 192.168.151.95/24 brd 192.168.151.255 scope global br-ex
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe28:fdfd/64 scope link
valid_lft forever preferred_lft forever
7: br-ex.200: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 52:0d:d4:47:9d:46 brd ff:ff:ff:ff:ff:ff
inet 172.16.200.12/24 brd 172.16.200.255 scope global br-ex.200
valid_lft forever preferred_lft forever
inet6 fe80::500d:d4ff:fe47:9d46/64 scope link
valid_lft forever preferred_lft forever
8: br-ex.100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether 52:0d:d4:47:9d:46 brd ff:ff:ff:ff:ff:ff
inet 172.16.100.12/24 brd 172.16.100.255 scope global br-ex.100
valid_lft forever preferred_lft forever
inet6 fe80::500d:d4ff:fe47:9d46/64 scope link
valid_lft forever preferred_lft forever
9: lxdbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 00:16:3e:3b:0b:62 brd ff:ff:ff:ff:ff:ff
inet 10.88.31.1/24 scope global lxdbr0
valid_lft forever preferred_lft forever
13: 1lxd3-0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether da:7a:3f:db:b0:94 brd ff:ff:ff:ff:ff:ff link-netnsid 2
15: 1lxd1-0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 52:60:ca:65:95:36 brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: 1lxd5-0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether da:cf:25:7c:4d:15 brd ff:ff:ff:ff:ff:ff link-netnsid 5
19: 1lxd5-1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 7e:de:f2:d1:e2:38 brd ff:ff:ff:ff:ff:ff link-netnsid 5
21: 1lxd3-1@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether da:e5:47:90:a0:1b brd ff:ff:ff:ff:ff:ff link-netnsid 2
23: 1lxd4-0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether c6:55:ae:01:b3:9f brd ff:ff:ff:ff:ff:ff link-netnsid 3
25: 1lxd2-0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 1e:e5:9c:cd:10:87 brd ff:ff:ff:ff:ff:ff link-netnsid 0
27: 1lxd1-1@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether f2:a5:02:75:b9:86 brd ff:ff:ff:ff:ff:ff link-netnsid 4
29: 1lxd3-2@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 22:7d:9a:63:09:65 brd ff:ff:ff:ff:ff:ff link-netnsid 2
31: 1lxd2-1@if30: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 76:96:24:03:ac:69 brd ff:ff:ff:ff:ff:ff link-netnsid 0
33: 1lxd0-0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 1e:11:82:cd:0b:93 brd ff:ff:ff:ff:ff:ff link-netnsid 1
35: 1lxd1-2@if34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether e2:74:cf:22:b8:c5 brd ff:ff:ff:ff:ff:ff link-netnsid 4
37: 1lxd5-2@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 0e:15:f1:b5:92:bd brd ff:ff:ff:ff:ff:ff link-netnsid 5
39: 1lxd4-1@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ovs-system state UP group default qlen 1000
link/ether 6e:83:4c:41:20:3b brd ff:ff:ff:ff:ff:ff link-netnsid 3
40: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 8e:9e:1d:65:e1:42 brd ff:ff:ff:ff:ff:ff
41: genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
link/ether 02:17:48:4a:37:8c brd ff:ff:ff:ff:ff:ff
inet6 fe80::17:48ff:fe4a:378c/64 scope link
valid_lft forever preferred_lft forever
The Netplan configuration looks like this:
network:
    bonds:
        bond0:
            interfaces:
            - enp1s0
            - enp7s0
            macaddress: 52:54:00:28:fd:fd
            mtu: 1500
            parameters:
                down-delay: 0
                gratuitious-arp: 1
                mii-monitor-interval: 0
                mode: active-backup
                transmit-hash-policy: layer2
                up-delay: 0
    bridges:
        br-ex:
            addresses:
            - 192.168.151.95/24
            gateway4: 192.168.151.1
            interfaces:
            - bond0
            macaddress: 52:54:00:28:fd:fd
            mtu: 1500
            nameservers:
                addresses:
                - 192.168.151.5
                search:
                - maas
            openvswitch: {}
            parameters:
                forward-delay: 15
                stp: false
    ethernets:
        enp1s0:
            match:
                macaddress: 52:54:00:28:fd:fd
            mtu: 1500
            set-name: enp1s0
        enp7s0:
            match:
                macaddress: 52:54:00:7c:4e:85
            mtu: 1500
            set-name: enp7s0
    version: 2
    vlans:
        br-ex.100:
            addresses:
            - 172.16.100.12/24
            id: 100
            link: br-ex
            mtu: 1500
            nameservers:
                addresses:
                - 172.16.100.5
                search:
                - maas
        br-ex.200:
            addresses:
            - 172.16.200.12/24
            id: 200
            link: br-ex
            mtu: 1500
            nameservers:
                addresses:
                - 172.16.200.5
                search:
                - maas
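MAAS writes this configuration via cloud-init at deploy time. To review it on a unit, the usual cloud-init location (an assumption; the file name can differ) is:
juju ssh ovn-chassis/0 cat /etc/netplan/50-cloud-init.yaml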
Configure the cloud
Follow the charm guide for configuring a Charmed OpenStack Cloud:
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/configure-openstack.html
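Because ovn-bridge-mappings maps physnet1 to br-ex, the external provider network you create during cloud configuration should reference that physnet. A minimal sketch using the untagged 192.168.151.0/24 subnet from this example (the gateway, range, and allocation pool are illustrative; adapt them to your environment):
openstack network create --external --share \
    --provider-network-type flat --provider-physical-network physnet1 ext_net
openstack subnet create --network ext_net --no-dhcp \
    --gateway 192.168.151.1 --subnet-range 192.168.151.0/24 \
    --allocation-pool start=192.168.151.100,end=192.168.151.200 ext_subnet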
Infrastructure Hints
Charm Deployment Guide
The Charmed OpenStack deployment guide has documentation on:
- Installing MAAS
- Installing Juju
- Installing OpenStack
- Configuring OpenStack
https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/
Virtual MAAS
If you don’t happen to have a pool of physical machines lying about, a network infrastructure with multiple VLANs, and access to the switching fabric, the MAAS documentation has a walkthrough on how to set up a virtual MAAS, including creating VMs for the MAAS controller and the nodes to commission:
https://maas.io/docs/snap/2.9/ui/give-me-an-example-of-maas
KVM Installation documentation:
https://help.ubuntu.com/community/KVM/Installation
Network Hints
Deployment machine
The deployment machine in this example runs KVM, which hosts the virtual MAAS controller and the VM nodes to be commissioned. It uses OpenVSwitch to build the network virtually, including the VLANs.
#!/bin/bash -e
# Create the OVS bridge that emulates the switching fabric
BR=ovsbr0
ovs-vsctl add-br ${BR}
# Untagged (native) network used by MAAS: 192.168.151.0/24
VLAN=default0
ovs-vsctl add-port ${BR} ${VLAN} -- set Interface ${VLAN} type=internal
ip addr add 192.168.151.3/24 dev ${VLAN}
ip link set ${VLAN} up
# VLAN 100: 172.16.100.0/24
VLAN=vlan100
ovs-vsctl add-port ${BR} ${VLAN} tag=100 -- set Interface ${VLAN} type=internal
ip addr add 172.16.100.1/24 dev ${VLAN}
ip link set ${VLAN} up
# VLAN 200: 172.16.200.0/24
VLAN=vlan200
ovs-vsctl add-port ${BR} ${VLAN} tag=200 -- set Interface ${VLAN} type=internal
ip addr add 172.16.200.1/24 dev ${VLAN}
ip link set ${VLAN} up
# Add the libvirt network to the OVS bridge as a trunk port
ovs-vsctl add-port ${BR} vnet0
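Note that the ovs-vsctl show output below shows vnet0 carrying trunks [1, 100, 200]. If your libvirt interface definition does not already configure the trunk, the tag and trunks can also be set directly on the OVS port; a sketch, assuming the same port name:
ovs-vsctl set port vnet0 tag=1 trunks=1,100,200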
The command ovs-vsctl show displays something like the following:
d2d3d0f4-55ef-40cc-9014-efbd8f43522d
    Bridge ovsbr0
        Port default0
            tag: 1
            Interface default0
                type: internal
        Port vlan100
            tag: 100
            Interface vlan100
                type: internal
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vnet0
            tag: 1
            trunks: [1, 100, 200]
            Interface vnet0
        Port vlan200
            tag: 200
            Interface vlan200
                type: internal
    ovs_version: "2.13.1"
MAAS Controller
The MAAS controller is a KVM VM and is NOT running OpenVSwitch, so we use standard Linux tools to set up the networking.
#!/bin/bash
# Untagged (native) network used by MAAS
ip addr add 192.168.151.5/24 dev enp6s0
ip link set enp6s0 up
ip route add default via 192.168.151.1
# VLAN 100
ip link add link enp6s0 name enp6s0.100 type vlan id 100
ip addr add 172.16.100.5/24 dev enp6s0.100
ip link set enp6s0.100 up
# VLAN 200
ip link add link enp6s0 name enp6s0.200 type vlan id 200
ip addr add 172.16.200.5/24 dev enp6s0.200
ip link set enp6s0.200 up
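A quick sanity check that the controller has an address on all three networks (the interface names match this example):
ip -br addr | grep enp6s0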
Conclusion
As stated at the start of this document, the hard part is the infrastructure. The actual configuration and deployment of a converged-networking Charmed OpenStack on MAAS is rather straightforward. The Infrastructure Hints section attempts to point you in the right direction for documentation on a virtual MAAS deployment using libvirt/KVM and an OpenVSwitch network configuration that mimics a switching fabric, VLANs and all. Each of the hint sections could be a document of its own, but those details are beyond the scope of this one.
One no longer needs multiple physical interfaces (or multiple bonds) in each network node for a Charmed OpenStack deployment. OpenVSwitch, MAAS, and Juju can handle the network stack to provide a highly efficient converged network.