Finale: OpenStack Deployed on MAAS with Juju!

This is the final part 6 of my “Deploying OpenStack on MAAS 1.9+ with Juju” series – finally complete, after almost 1 year! \o/ In the last post – Advanced Networking: Deploying OpenStack on MAAS 1.9+ with Juju – I described the main concepts in Juju, and especially the recent networking enhancements you can use today with the most recent stable release of Juju: 2.0.1.

Here the pieces finally come together, and at the end you should have your own working private cloud, running the latest OpenStack (Newton) on 4 bare metal machines deployed by Juju via MAAS!

First, I’d like to thank all of my readers for their support and patience! I know it took a long time, but I hope it will all be worth the wait 🙂

I know from experience a lot can go wrong when installing a complicated software platform like OpenStack on such a non-trivial networking setup. And when it does, it’s easy to start from scratch with Juju, but that still wastes time and breeds frustration. I’d like to spare you that last bit by first going through a “pre-flight check” deployment, using a simple Pinger charm I wrote for that purpose (although it’s generally useful as well). The charm does not follow the best practices for writing Juju charms, but that’s intentional – to keep it simple. Pinger (source) contains the mandatory metadata, less than 150 lines of bash script, and a single config option. What it does is simple: it runs ping(8) against a number of targets – a configurable list (the extra-targets setting), along with the private address and any explicitly bound extra-bindings (ep0..ep9) specified at deployment time.

Those targets are checked on each unit, and over a peer relation units of the same pinger charm exchange and check each other’s targets. The result of the check is conveniently indicated with the “active / OK” or “blocked / FAIL” Juju status values.
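
As a rough illustration (a simplified sketch, not the actual charm source), Pinger’s core check boils down to something like:

```shell
# Hypothetical sketch of Pinger's core loop: ping each target once and
# print a Juju-style status message. The real charm also handles the
# peer relation and calls status-set; this only shows the idea.
check_targets() {
  local failed=0 total=0
  for target in "$@"; do
    total=$((total + 1))
    # one ping per target, 2 second timeout; count the unreachable ones
    ping -c 1 -W 2 "$target" > /dev/null 2>&1 || failed=$((failed + 1))
  done
  if [ "$failed" -eq 0 ]; then
    echo "active: OK (all $total targets reachable)"
  else
    echo "blocked: FAIL ($failed of $total targets unreachable)"
  fi
}

check_targets 127.0.0.1
```

The real charm builds the target list from extra-targets, the unit’s private address, and the bound endpoints before running a loop like this.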

Test Deployment with Pinger(s) 

We’ll use a couple of scripts and YAML config files and follow roughly the same steps for the test deployment and the real OpenStack deployment.

Setting up Juju 2.0.1

You’ll need to add ppa:juju/stable to get 2.0.1 on trusty (or xenial):

$ sudo add-apt-repository ppa:juju/stable
$ sudo apt update
$ sudo apt install juju-2.0

# verify you got the correct version:
$ apt-cache policy juju-2.0

juju-2.0:
  Installed: 1:2.0.1-0ubuntu1~16.04.4~juju1
...

Now we should add our MAAS as a cloud in Juju (more details here). Create a maas-hw-cloud.yaml (e.g. in $HOME) with the following content:

clouds:
  maas-hw:
    type: maas
    auth-types: [oauth1]
    endpoint: http://10.14.0.1/MAAS/

The endpoint must match the one you used for logging into the MAAS CLI, but without the /api/2.0/ suffix (Juju adds that automatically). Get it by running maas list:

hw-juju http://10.14.0.1/MAAS/api/2.0/ q5XN3EzGGCf2q4hnBP:gtpEpJwDy7pArKpR92:gdpUMYpWUgS2Njs8qV7f8dSbz7WwZh2e
hw-root http://10.14.0.1/MAAS/api/2.0/ 2WAF3wT9tHNEtTa9kV:A9CWR2ytFHwkN2mxN9:fTnk777tTFcV8xCUpTf85RfQLTeNcX7B
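
Rather than trimming the URL by hand, you can strip the suffix with shell parameter expansion (using the example endpoint from this setup):

```shell
# Strip the trailing /api/2.0/ suffix from the URL reported by
# `maas list` to get the endpoint Juju expects in the clouds YAML.
full_endpoint="http://10.14.0.1/MAAS/api/2.0/"
endpoint="${full_endpoint%api/2.0/}"
echo "$endpoint"   # http://10.14.0.1/MAAS/
```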

You should be using separate MAAS users for each Juju controller you bootstrap.

Now we need a maas-hw-creds.yaml as well. Pick the “juju” user’s MAAS key we previously created (you could use the “root” user, but that’s bad practice) and replace the highlighted maas-oauth value with it:

credentials:
    maas-hw:
        default-credential: hw-juju
        hw-juju:
            auth-type: oauth1
            maas-oauth: q5XN3EzGGCf2q4hnBP:gtpEpJwDy7pArKpR92:gdpUMYpWUgS2Njs8qV7f8dSbz7WwZh2e

Finally, run these two to configure Juju to use your MAAS:

# (to change it once added, pass --replace at the end below)
$ juju add-cloud maas-hw ~/maas-hw-cloud.yaml

$ juju add-credential maas-hw -f ~/maas-hw-creds.yaml
# (verify with `juju show-cloud maas-hw`)

We’re now ready to bootstrap and deploy, but first a few details about it. 

Bundle, Script and Configuration

In the beginning I mentioned we’ll be using a modified openstack-base bundle (I’ve tested its revision 48 in particular). However, we’ll also use a small bash script to drive the deployment. Because we have exactly 4 machines to work with, we cannot deploy the original bundle directly (we’d need 5 machines – one extra for the Juju controller). Therefore, the script runs juju bootstrap, deploys a scaled-down bundle with 3 machines only, and then adds more units to the Juju controller machine. The same approach will be used for the test deployment, and we’ll also need a small YAML config (which can be the same for the real deployment later).

Here are the script and config files I used:

#!/bin/bash

set -e
: ${JUJU_BIN:=/usr/bin/juju}

$JUJU_BIN version
$JUJU_BIN kill-controller pinger-maas-hw -y || true
$JUJU_BIN bootstrap maas-hw pinger-maas-hw --config=./maas-hw-config.yaml --to node-22
$JUJU_BIN switch pinger-maas-hw:controller

sleep 5

$JUJU_BIN deploy -m pinger-maas-hw:controller pinger-bundle-3-nodes.yaml
$JUJU_BIN add-unit -m pinger-maas-hw:controller pinger-compute-storage-node --to 0
$JUJU_BIN add-unit -m pinger-maas-hw:controller pinger-compute-storage-node --to lxd:0

watch "$JUJU_BIN status -m pinger-maas-hw:controller"

$JUJU_BIN kill-controller pinger-maas-hw

And the maas-hw-config.yaml referenced above:

default-series: xenial
logging-config: <root>=TRACE
enable-os-refresh-update: true
enable-os-upgrade: false

And the bundle itself is quite simple:

machines:
  '1':
    constraints: arch=amd64 spaces=compute-external
    series: xenial
  '2':
    constraints: arch=amd64 spaces=storage-cluster
    series: xenial
  '3':
    constraints: arch=amd64 spaces=storage-cluster
    series: xenial
series: xenial
services:
  pinger-compute-storage-node:
    bindings:
      ep0: admin-api
      ep1: internal-api
      ep2: public-api
      ep3: storage-data
      ep4: storage-cluster
      ep5: compute-data
    charm: cs:~dimitern/pinger
    num_units: 4
    options:
      extra-targets: "10.14.0.1 google.com 10.50.0.1 10.100.0.1 10.150.0.1 10.200.0.1 10.250.0.1 10.30.0.1"
    to:
    - '2'
    - '3'
    - lxd:2
    - lxd:3
  pinger-network-node:
    bindings:
      ep0: admin-api
      ep1: internal-api
      ep2: public-api
      ep3: storage-data
      ep4: compute-data
    charm: cs:~dimitern/pinger
    num_units: 2
    options:
      extra-targets: "10.14.0.1 google.com 10.50.0.1 10.100.0.1 10.150.0.1 10.200.0.1 10.250.0.1"
    to:
    - '1'
    - lxd:1

Since we have 3 storage/compute nodes and 1 network node, we can bootstrap to one of the storage/compute nodes (I’ve used --to node-22 in the script to do that), deploy the 3-node bundle, and add more units to the Juju controller node. We use 2 copies of the Pinger charm, because the network and storage/compute nodes have different endpoint binding requirements (the compute-external space vs the storage-cluster space, respectively). Also, some units are placed inside LXD containers to ensure they will meet the network connectivity requirements of the OpenStack bundle.

Notice how we’re using spaces constraints on the machines to pick the correct ones (only node-11 has access to the compute-external space, while all the storage/compute nodes have access to the storage-cluster space). Pinger’s endpoints (ep0..ep5) are bound to the spaces the OpenStack charms expect to be present. To verify end-to-end connectivity on the nodes once deployed by MAAS, we add the IPs configured on MAAS interfaces (ending in .1) to the extra-targets setting of each of the pingers. Finally, to ensure DNS resolution works and deployed nodes can reach the internet, google.com (or any similar public hostname) is also present in extra-targets.

In the script we could omit the -m pinger-maas-hw:controller arguments if we have no other Juju controller / model running (the switch command is sufficient to ensure we deploy the remaining 3 nodes on the controller model). The initial (and final) kill-controller commands are not strictly necessary; they just make it easier to run the script multiple times and clean up before/after each run.

Deploying (Successfully)

Now for the easy part: run $ ./deploy-4-nodes-pinger.sh, then sit back and wait for about 15 minutes. If all goes well, you’ll eventually get juju status output (which we’re watching) like this:

Model       Controller      Cloud/Region  Version
controller  pinger-maas-hw  maas-hw       2.0.1

App                          Version  Status  Scale  Charm   Store       Rev  OS      Notes
pinger-compute-storage-node           active      6  pinger  jujucharms    1  ubuntu
pinger-network-node                   active      2  pinger  jujucharms    1  ubuntu

Unit                            Workload  Agent  Machine  Public address  Ports  Message
pinger-compute-storage-node/0   active    idle   2        10.100.0.112           OK (all 45 targets reachable)
pinger-compute-storage-node/1   active    idle   3        10.100.1.121           OK (all 45 targets reachable)
pinger-compute-storage-node/2   active    idle   2/lxd/0  10.100.0.101           OK (all 45 targets reachable)
pinger-compute-storage-node/3   active    idle   3/lxd/0  10.100.0.103           OK (all 45 targets reachable)
pinger-compute-storage-node/4*  active    idle   0        10.14.1.122            OK (all 45 targets reachable)
pinger-compute-storage-node/5   active    idle   0/lxd/0  10.100.0.100           OK (all 45 targets reachable)
pinger-network-node/0*          active    idle   1        10.100.0.111           OK (all 17 targets reachable)
pinger-network-node/1           active    idle   1/lxd/0  10.100.0.102           OK (all 17 targets reachable)

Machine  State    DNS           Inst id              Series  AZ
0        started  10.14.1.122   4y3hge               xenial  zone2
0/lxd/0  started  10.100.0.100  juju-e438f1-0-lxd-0  xenial
1        started  10.100.0.111  4y3hdp               xenial  zone1
1/lxd/0  started  10.100.0.102  juju-e438f1-1-lxd-0  xenial
2        started  10.100.0.112  4y3hdq               xenial  zone1
2/lxd/0  started  10.100.0.101  juju-e438f1-2-lxd-0  xenial
3        started  10.100.1.121  4y3hdr               xenial  zone2
3/lxd/0  started  10.100.0.103  juju-e438f1-3-lxd-0  xenial

Relation  Provides                     Consumes                     Type
peer      pinger-compute-storage-node  pinger-compute-storage-node  peer
peer      pinger-network-node          pinger-network-node          peer

Congrats if you got this far – you’re ready to deploy OpenStack with a reasonable expectation of success 🙂 When satisfied, hit Ctrl+C to interrupt the watch command, and answer “y” to kill the Juju controller and prepare for the OpenStack deployment.

Troubleshooting (Failures)

If you see “FAIL (x of y targets unreachable)” on one or more units, something’s not configured correctly (i.e. YMMV – in any case OpenStack won’t work either until the issues are resolved).

As described on the Pinger charm page, check the unit, status, or ping logs of the failed unit(s):

# change unit-pinger-network-node-0 as needed for another unit
$ juju debug-log -i unit-pinger-network-node-0 --replay --no-tail -l ERROR

# likewise, change pinger-storage-compute-node/2 as needed below
$ juju show-status-log pinger-storage-compute-node/2

A few tips for probable causes:

  • If you cannot resolve google.com, the MAAS DNS is not working properly (check /var/log/maas/maas.log and/or /var/log/syslog)
  • If google.com is resolvable but unreachable, your nodes cannot reach the internet (check that you have SNAT/MASQUERADE enabled in iptables for traffic coming from, but not destined to, the 10.14.0.0/20 range)
  • If one or more endpoints of the units (or the MAAS .1 IPs) are unreachable, you likely have misconfigured VLANs/ports on either MAAS or the switches.
  • If nodes in the same zone are reachable but nodes across zones are not, definitely check the switches’ ports/VLANs.
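
For the SNAT/MASQUERADE tip above, a typical rule on the MAAS host might look like the following (a sketch only – eth0 as the external interface is an assumption, adjust to your setup):

```shell
# Masquerade traffic coming from the MAAS-managed 10.14.0.0/20 range
# that is NOT destined back to it (i.e. traffic heading out to the
# internet). eth0 is assumed to be the host's external interface.
sudo iptables -t nat -A POSTROUTING -s 10.14.0.0/20 ! -d 10.14.0.0/20 -o eth0 -j MASQUERADE
```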

Deploying OpenStack Newton

OK, assuming all went fine so far, we only need to change the script slightly (see below) and use the modified openstack-base bundle with 3 nodes only.

#!/bin/bash

set -e
: ${JUJU_BIN:=/usr/bin/juju}

$JUJU_BIN version
$JUJU_BIN kill-controller openstack-base-hw-x -y || true
$JUJU_BIN bootstrap maas-hw openstack-base-hw-x --config=./maas-hw-config.yaml --to node-22
$JUJU_BIN switch openstack-base-hw-x:controller

sleep 5

$JUJU_BIN deploy -m openstack-base-hw-x:controller bundle-3-nodes.yaml
$JUJU_BIN remove-unit -m openstack-base-hw-x:controller openstack-dashboard/0
$JUJU_BIN add-unit -m openstack-base-hw-x:controller openstack-dashboard --to lxd:0
$JUJU_BIN remove-unit -m openstack-base-hw-x:controller keystone/0
$JUJU_BIN add-unit -m openstack-base-hw-x:controller keystone --to lxd:0
$JUJU_BIN add-unit -m openstack-base-hw-x:controller ceph-osd --to 0
$JUJU_BIN add-unit -m openstack-base-hw-x:controller ceph-mon --to lxd:0
$JUJU_BIN add-unit -m openstack-base-hw-x:controller nova-compute --to 0

watch "$JUJU_BIN status -m openstack-base-hw-x:controller"

$JUJU_BIN kill-controller openstack-base-hw-x

The only notable change is the remove-unit / add-unit “dance” happening there. We want to use the bundle to declare all the relations (as there are a lot of them), but we still want to end up with the same distribution and placement of units as with the original 4-node openstack-base bundle. We need to host the dashboard and keystone on the network node (alongside the Juju controller), but they also need to be in the bundle to declare their relations, config, bindings, etc. I found it least painful to temporarily deploy openstack-dashboard/0 and keystone/0 on one of the storage/compute nodes, and then “move them” (remove each and add a new unit of the application) to machine 0. The other few add-unit commands ensure we have the expected number of units, matching the original bundle.
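
To make the pattern explicit, here is a hypothetical helper (not part of the actual deployment script) that prints, rather than runs, the two commands of such a “move” for a given application, so you can review them first:

```shell
# Hypothetical helper: emit the juju commands that relocate an
# application's unit 0 into an LXD container on machine 0. Juju has
# no "move unit" primitive, so a move is remove-unit + add-unit.
move_to_machine0() {
  local app="$1" model="$2"
  echo "juju remove-unit -m ${model} ${app}/0"
  echo "juju add-unit -m ${model} ${app} --to lxd:0"
}

move_to_machine0 keystone openstack-base-hw-x:controller
```

Piping the output to `sh` (or copy-pasting it) would perform the same dance as the script above does for keystone and the dashboard.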

The bundle itself is quite a lot of lines (fewer than the original 4-node one, but still too many to dump here, I think), so I’ve added a link to it instead: http://paste.ubuntu.com/23424163/ It should be straightforward to compare it to the original and see that the changes are mostly cosmetic, except the interesting ones describing endpoint bindings.

Show Time!

Like before, just run $ ./deploy-4-nodes.sh, sit back (a while longer this time – 30-45 minutes), and watch OpenStack being stood up!

Once done, you should see output similar to the one below. If you want to keep it, do not answer “y” to the kill-controller question after hitting Ctrl+C! 🙂

Model       Controller           Cloud/Region  Version
controller  openstack-base-hw-x  maas-hw       2.0.1

App                    Version      Status   Scale  Charm                  Store       Rev  OS      Notes
ceph-mon               10.2.2       active       3  ceph-mon               jujucharms    6  ubuntu
ceph-osd               10.2.2       active       3  ceph-osd               jujucharms  238  ubuntu
ceph-radosgw           10.2.2       active       1  ceph-radosgw           jujucharms  245  ubuntu
cinder                 9.0.0        active       1  cinder                 jujucharms  257  ubuntu
cinder-ceph            9.0.0        active       1  cinder-ceph            jujucharms  221  ubuntu
glance                 13.0.0       active       1  glance                 jujucharms  253  ubuntu
keystone               10.0.0       active       1  keystone               jujucharms  258  ubuntu
mysql                  5.6.21-25.8  active       1  percona-cluster        jujucharms  246  ubuntu
neutron-api            9.0.0        active       1  neutron-api            jujucharms  246  ubuntu
neutron-gateway        9.0.0        active       1  neutron-gateway        jujucharms  232  ubuntu
neutron-openvswitch    9.0.0        active       3  neutron-openvswitch    jujucharms  238  ubuntu
nova-cloud-controller  14.0.1       active       1  nova-cloud-controller  jujucharms  292  ubuntu
nova-compute           14.0.1       active       3  nova-compute           jujucharms  259  ubuntu
ntp                                 unknown      4  ntp                    jujucharms    0  ubuntu
openstack-dashboard    10.0.0       active       1  openstack-dashboard    jujucharms  243  ubuntu
rabbitmq-server                     active       1  rabbitmq-server        jujucharms    5  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports           Message
ceph-mon/0                active    idle   2/lxd/0  10.100.0.110                    Unit is ready and clustered
ceph-mon/1                active    idle   3/lxd/0  10.100.0.103                    Unit is ready and clustered
ceph-mon/2*               active    idle   0/lxd/2  10.100.0.102                    Unit is ready and clustered
ceph-osd/0                active    idle   2        10.100.1.121                    Unit is ready (1 OSD)
ceph-osd/1                active    idle   3        10.100.0.112                    Unit is ready (1 OSD)
ceph-osd/2*               active    idle   0        10.14.1.122                     Unit is ready (1 OSD)
ceph-radosgw/0*           active    idle   1/lxd/0  10.100.0.108    80/tcp          Unit is ready
cinder/0*                 active    idle   2/lxd/1  10.100.1.123    8776/tcp        Unit is ready
  cinder-ceph/0*          active    idle            10.100.1.123                    Unit is ready
glance/0*                 active    idle   3/lxd/1  10.100.0.107    9292/tcp        Unit is ready
keystone/1*               active    idle   0/lxd/1  10.100.0.101    5000/tcp        Unit is ready
mysql/0*                  active    idle   1/lxd/1  10.100.0.104                    Unit is ready
neutron-api/0*            active    idle   2/lxd/2  10.100.0.105    9696/tcp        Unit is ready
neutron-gateway/0*        active    idle   1        10.100.0.111                    Unit is ready
  ntp/3                   unknown   idle            10.100.0.111
nova-cloud-controller/0*  active    idle   2/lxd/3  10.100.0.109    8774/tcp        Unit is ready
nova-compute/0            active    idle   2        10.100.1.121                    Unit is ready
  neutron-openvswitch/2   active    idle            10.100.1.121                    Unit is ready
  ntp/2                   unknown   idle            10.100.1.121
nova-compute/1            active    idle   3        10.100.0.112                    Unit is ready
  neutron-openvswitch/1   active    idle            10.100.0.112                    Unit is ready
  ntp/1                   unknown   idle            10.100.0.112
nova-compute/2*           active    idle   0        10.14.1.122                     Unit is ready
  neutron-openvswitch/0*  active    idle            10.14.1.122                     Unit is ready
  ntp/0*                  unknown   idle            10.14.1.122
openstack-dashboard/1*    active    idle   0/lxd/0  10.100.0.100    80/tcp,443/tcp  Unit is ready
rabbitmq-server/0*        active    idle   1/lxd/2  10.100.0.106    5672/tcp        Unit is ready

Machine  State    DNS           Inst id              Series  AZ
0        started  10.14.1.122   4y3hge               xenial  zone2
0/lxd/0  started  10.100.0.100  juju-256b83-0-lxd-0  xenial
0/lxd/1  started  10.100.0.101  juju-256b83-0-lxd-1  xenial
0/lxd/2  started  10.100.0.102  juju-256b83-0-lxd-2  xenial
1        started  10.100.0.111  4y3hdp               xenial  zone1
1/lxd/0  started  10.100.0.108  juju-256b83-1-lxd-0  xenial
1/lxd/1  started  10.100.0.104  juju-256b83-1-lxd-1  xenial
1/lxd/2  started  10.100.0.106  juju-256b83-1-lxd-2  xenial
2        started  10.100.1.121  4y3hdr               xenial  zone2
2/lxd/0  started  10.100.0.110  juju-256b83-2-lxd-0  xenial
2/lxd/1  started  10.100.1.123  juju-256b83-2-lxd-1  xenial
2/lxd/2  started  10.100.0.105  juju-256b83-2-lxd-2  xenial
2/lxd/3  started  10.100.0.109  juju-256b83-2-lxd-3  xenial
3        started  10.100.0.112  4y3hdq               xenial  zone1
3/lxd/0  started  10.100.0.103  juju-256b83-3-lxd-0  xenial
3/lxd/1  started  10.100.0.107  juju-256b83-3-lxd-1  xenial

Relation                 Provides               Consumes               Type
mon                      ceph-mon               ceph-mon               peer
mon                      ceph-mon               ceph-osd               regular
mon                      ceph-mon               ceph-radosgw           regular
ceph                     ceph-mon               cinder-ceph            regular
ceph                     ceph-mon               glance                 regular
ceph                     ceph-mon               nova-compute           regular
cluster                  ceph-radosgw           ceph-radosgw           peer
identity-service         ceph-radosgw           keystone               regular
cluster                  cinder                 cinder                 peer
storage-backend          cinder                 cinder-ceph            subordinate
image-service            cinder                 glance                 regular
identity-service         cinder                 keystone               regular
shared-db                cinder                 mysql                  regular
cinder-volume-service    cinder                 nova-cloud-controller  regular
amqp                     cinder                 rabbitmq-server        regular
cluster                  glance                 glance                 peer
identity-service         glance                 keystone               regular
shared-db                glance                 mysql                  regular
image-service            glance                 nova-cloud-controller  regular
image-service            glance                 nova-compute           regular
amqp                     glance                 rabbitmq-server        regular
cluster                  keystone               keystone               peer
shared-db                keystone               mysql                  regular
identity-service         keystone               neutron-api            regular
identity-service         keystone               nova-cloud-controller  regular
identity-service         keystone               openstack-dashboard    regular
cluster                  mysql                  mysql                  peer
shared-db                mysql                  neutron-api            regular
shared-db                mysql                  nova-cloud-controller  regular
cluster                  neutron-api            neutron-api            peer
neutron-plugin-api       neutron-api            neutron-gateway        regular
neutron-plugin-api       neutron-api            neutron-openvswitch    regular
neutron-api              neutron-api            nova-cloud-controller  regular
amqp                     neutron-api            rabbitmq-server        regular
cluster                  neutron-gateway        neutron-gateway        peer
quantum-network-service  neutron-gateway        nova-cloud-controller  regular
juju-info                neutron-gateway        ntp                    subordinate
amqp                     neutron-gateway        rabbitmq-server        regular
neutron-plugin           neutron-openvswitch    nova-compute           regular
amqp                     neutron-openvswitch    rabbitmq-server        regular
cluster                  nova-cloud-controller  nova-cloud-controller  peer
cloud-compute            nova-cloud-controller  nova-compute           regular
amqp                     nova-cloud-controller  rabbitmq-server        regular
neutron-plugin           nova-compute           neutron-openvswitch    subordinate
compute-peer             nova-compute           nova-compute           peer
juju-info                nova-compute           ntp                    subordinate
amqp                     nova-compute           rabbitmq-server        regular
ntp-peers                ntp                    ntp                    peer
cluster                  openstack-dashboard    openstack-dashboard    peer
cluster                  rabbitmq-server        rabbitmq-server        peer

 

Awesome! Now What?

You can follow the instructions in the openstack-base bundle README to get to the dashboard (Horizon), scale the nodes out, and verify it all works. There are a ton of resources online about OpenStack itself, but then again… You can also use Juju on top of your newly installed OpenStack to deploy workloads (you’d need to import some images first)!

Last but not least: have fun! 🙂 Experiment with your own in-house cloud, tweak it, tear it down (fully or partially) and redeploy it (the process is the same). I might write some follow-ups about some of these steps in the near future.

In Conclusion

Whew 🙂 we made it to the end! \o/ I hope it was useful and worth waiting for this final part (it almost didn’t happen TBH).

I’ll appreciate your comments and experiences, and will try to help with any issues you might encounter. Juju is more of a free-time side project for me now though (not a job), so it might take a while to respond.

All the best, thanks again, and good luck! 

Here are some convenient links to all articles in the series:

  1. Introduction
  2. Hardware Setup
  3. MAAS Setup
  4. Nodes Networking
  5. Advanced Networking
  6. Finale (this one)

 

Comments

  1. Hi Dimiter,
    Thanks for the article, it helped me see how easy it is to deploy MAAS using the CLI. I was able to do my MAAS deployment using Zotac computers (which don’t have IPMI), and when MAAS turns them on, I just need to send the wakeonlan from the router (no power control via MAAS).
    So far I was able to deploy MAAS 2.0 with Juju 2.0; here are some observations if someone is trying to do this:
    Some services being deployed to LXD containers failed because they needed KVM, so what I did to work around this was deploy a few services to create the nodes and at least 1 container (to have LXD configured by Juju). First, I modified the bundle to only install services on the physical computers; here is the content of bundle_cust.yaml:

    machines:
      '0':
        constraints: arch=amd64 cpu-cores=4
        series: xenial
      '1':
        constraints: arch=amd64 cpu-cores=4
        series: xenial
      '2':
        constraints: arch=amd64 cpu-cores=8
        series: xenial
      '3':
        constraints: arch=amd64 cpu-cores=28
        series: xenial
    series: xenial
    services:
      ceph-osd:
        annotations:
          gui-x: '1000'
          gui-y: '500'
        charm: cs:~openstack-charmers-next/xenial/ceph-osd
        num_units: 3
        options:
          osd-devices: /srv/ceph-osd
          osd-reformat: 'yes'
        to:
        - '1'
        - '2'
        - '3'
      neutron-gateway:
        annotations:
          gui-x: '0'
          gui-y: '0'
        charm: cs:~openstack-charmers-next/xenial/neutron-gateway
        num_units: 1
        options:
          ext-port: enp3s0
          instance-mtu: 1456
        to:
        - '0'
      nova-compute:
        annotations:
          gui-x: '250'
          gui-y: '250'
        charm: cs:~openstack-charmers-next/xenial/nova-compute
        num_units: 3
        options:
          enable-live-migration: true
          enable-resize: true
          migration-auth-type: ssh
          virt-type: lxd
        to:
        - '1'
        - '2'
        - '3'
      ntp:
        annotations:
          gui-x: '1000'
          gui-y: '0'
        charm: cs:~openstack-charmers-next/xenial/ntp
        num_units: 0
      lxd:
        annotations:
          gui-x: '750'
          gui-y: '250'
        charm: cs:~openstack-charmers-next/xenial/lxd
        num_units: 0
        options:
          block-devices: /dev/sda
          storage-type: lvm
          overwrite: true
    

    Then I ran it (after bootstrapping the node)

    juju deploy bundle_cust.yaml

    Now I install 1 service that should be placed in a container inside each of the computers (to have LXD configured by Juju)

    juju deploy cs:~openstack-charmers-next/xenial/ceph-mon --to lxd:1
    juju add-unit ceph-mon --to lxd:2
    juju add-unit ceph-mon --to lxd:3

    juju deploy cs:~openstack-charmers-next/xenial/ceph-radosgw --to lxd:0

    These units will fail, but we run them to configure LXD. Let’s now delete them

    juju remove-unit ceph-mon/0
    juju remove-unit ceph-mon/1
    juju remove-unit ceph-mon/2
    juju remove-application ceph-mon
    juju remove-unit ceph-radosgw/0
    juju remove-application ceph-radosgw

    Now let’s install KVM, make it available in containers, and load the images that Juju needs to avoid errors while installing services in containers. To make my reply shorter, run these commands on all physical nodes (node 0 – node 3); here is node 0.


    juju ssh 0
    sudo apt install kvm
    sudo lxc
    sudo lxc config edit
    sudo lxc profile edit default

    Add the following at the end (on the devices section):

      kvm:
        path: /dev/kvm
        type: unix-char
    

    Save the file and now get the Linux image to avoid errors


    sudo lxc image copy ubuntu:16.04 local: --alias ubuntu-xenial
    exit

    Now repeat for nodes 1-3

    And now we can manually install the rest of the bundle


    juju deploy cs:~openstack-charmers-next/xenial/ceph-mon --to lxd:1
    juju add-unit ceph-mon --to lxd:2
    juju add-unit ceph-mon --to lxd:3

    Make sure that you use the config file moving forward to set the options
    Content:

     neutron-gateway:
        ext-port: enp3s0
        instance-mtu: 1456
     ceph-radosgw:
        use-embedded-webserver: true
     keystone:
        admin-password: openstack
     mysql:
        max-connections: 20000
     neutron-api:
        neutron-security-groups: true
     nova-cloud-controller:
        network-manager: Neutron
    


    juju deploy --to lxd:0 --config bun.yaml cs:~openstack-charmers-next/xenial/ceph-radosgw

    juju deploy --to lxd:2 cs:~openstack-charmers-next/xenial/glance
    juju deploy --to lxd:3 --config bun.yaml cs:~openstack-charmers-next/xenial/keystone
    juju deploy --to lxd:0 --config bun.yaml cs:~openstack-charmers-next/xenial/percona-cluster mysql
    juju deploy --to lxd:1 --config bun.yaml cs:~openstack-charmers-next/xenial/neutron-api
    juju deploy cs:~openstack-charmers-next/xenial/neutron-openvswitch
    juju deploy --to lxd:2 --config bun.yaml cs:~openstack-charmers-next/xenial/nova-cloud-controller
    juju deploy --to lxd:3 cs:~openstack-charmers-next/xenial/openstack-dashboard
    juju deploy --to lxd:0 cs:~openstack-charmers-next/xenial/rabbitmq-server

    relations (please note that mysql has been replaced by percona-cluster):


    juju relate nova-compute:amqp rabbitmq-server:amqp
    juju relate neutron-gateway:amqp rabbitmq-server:amqp
    juju relate keystone:shared-db mysql:shared-db
    juju relate nova-cloud-controller:identity-service keystone:identity-service
    juju relate glance:identity-service keystone:identity-service
    juju relate neutron-api:identity-service keystone:identity-service
    juju relate neutron-openvswitch:neutron-plugin-api neutron-api:neutron-plugin-api
    juju relate neutron-api:shared-db mysql:shared-db
    juju relate neutron-api:amqp rabbitmq-server:amqp
    juju relate neutron-gateway:neutron-plugin-api neutron-api:neutron-plugin-api
    juju relate glance:shared-db mysql:shared-db
    juju relate glance:amqp rabbitmq-server:amqp
    juju relate nova-cloud-controller:image-service glance:image-service
    juju relate nova-compute:image-service glance:image-service
    juju relate nova-cloud-controller:cloud-compute nova-compute:cloud-compute
    juju relate nova-cloud-controller:amqp rabbitmq-server:amqp
    juju relate nova-cloud-controller:quantum-network-service neutron-gateway:quantum-network-service
    juju relate nova-compute:neutron-plugin neutron-openvswitch:neutron-plugin
    juju relate neutron-openvswitch:amqp rabbitmq-server:amqp
    juju relate openstack-dashboard:identity-service keystone:identity-service
    juju relate nova-cloud-controller:shared-db mysql:shared-db
    juju relate nova-cloud-controller:neutron-api neutron-api:neutron-api
    juju relate ceph-mon:client nova-compute:ceph
    juju relate ceph-mon:client glance:ceph
    juju relate ceph-osd:mon ceph-mon:osd
    juju relate ntp:juju-info nova-compute:juju-info
    juju relate ntp:juju-info neutron-gateway:juju-info
    juju relate ceph-radosgw:mon ceph-mon:radosgw
    juju relate ceph-radosgw:identity-service keystone:identity-service
    juju relate nova-compute:lxd lxd:lxd
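    The long list above can also be driven from a single table; a hedged bash sketch (only a few pairs shown, and the commands are just printed here – pipe to `sh` after reviewing):

```shell
#!/bin/bash
# Sketch: generate the `juju relate` commands from a list of endpoint
# pairs. Extend the array with the remaining pairs from the list above.
relations=(
  "nova-compute:amqp rabbitmq-server:amqp"
  "keystone:shared-db mysql:shared-db"
  "glance:identity-service keystone:identity-service"
)

gen_relate_cmds() {
  local rel
  for rel in "${relations[@]}"; do
    printf 'juju relate %s\n' "$rel"
  done
}

gen_relate_cmds  # review the output first, then: gen_relate_cmds | sh
```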

    And OpenStack deployed properly. Here is the link to the bundle I used:
    https://jujucharms.com/u/openstack-charmers-next/openstack-lxd

    Regards,
    Elvis

    • Hey Elvis,

      Thanks for the awesome summary and I’m glad it worked for you! \o/ 🙂

      Note: I’ve reformatted your comment to make it easier to read – sorry my WP comments plugin seems to be messing things up sometimes :/

      • Guys,
        I finally found my problem. The guide created by Dimitern works perfectly, and the details on https://jujucharms.com/openstack-base/ are fine, but the problem was that, for some reason, installing openstack-base from scratch using Juju was defining two security groups.
        #While running the following command:
        neutron security-group-rule-create --protocol icmp --direction ingress default

        #I was getting the following msg:
        Multiple security_group matches found for name 'default', use an ID to be more specific.

        #Then I wanted to see all groups so I went to the help
        neutron -h

        #Found the command to list all groups and execute it
        neutron security-group-list

        #and BINGO, found why in my case the OpenStack installation was not communicating properly with VMs created inside of it. So I deleted the second group
        neutron security-group-delete f837ae83-1427-4838-ad0d-c0bb768306d7

        #Added my rules as described on the openstack base charm
        neutron security-group-rule-create --protocol icmp --direction ingress default
        #This time it took it! HURRAY!

        #Finally opened ssh
        neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default

        #Now I was able to access the instance

        #Please note that what I had to do to get it working was to define the interface for the br-ex, so neutron configuration looks like this:
        bridge-mappings: physnet1:br-ex
        data-port: br-ex:enp3s0.99
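        For reference, I believe the same two settings can also be applied after deployment with a single command; a hedged sketch (assuming they belong to the neutron-gateway charm; 'enp3s0.99' is specific to my machine, and the command is only printed here):

```shell
# Sketch: print the config command for the two settings above; run the
# printed command manually once the interface name matches your node.
cmd="juju config neutron-gateway bridge-mappings='physnet1:br-ex' data-port='br-ex:enp3s0.99'"
echo "$cmd"
```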

        Dimitern,
        If possible, please delete my last two replies, on December 1, 2016 – 03:42 and on December 10, 2016, to make sure that readers don’t get confused.

        Regards,
        Elvis

  2. Dear Dimiter,

    Your last post is also great. My setup is on virtual machines. I am using an Open vSwitch to connect the nodes to each other and to the MAAS node (also a virtual machine). I ran your script + bundle with no problem, and the final OpenStack installation was really quick.

    I have made some pastes in order not to have such a big output here. Here is what I have tested:

    Your bundle (with minor changes according my virtual machines hardware):
    http://paste.ubuntu.com/23490871/

    Your script (vm instead of hw):
    http://paste.ubuntu.com/23490907/

    The output of charms installation:
    http://paste.ubuntu.com/23490913/

    The set up and creation of Openstack instances:
    http://paste.ubuntu.com/23490918/

    I have also tried to bootstrap Juju in a separate virtual machine and then deploy the charms manually, but I had two problems (one solved, one not). I had to configure the ceph-osds by hand (successfully), but I didn’t manage to use the SSH key for accessing the instances created in OpenStack.
    Here are all the details:
    http://paste.ubuntu.com/23516084/

    One last observation: when I did the installation, my VM nodes (node-12, node-21 and node-22) used almost 9GB of my hard disk each. After 6 days this has grown to 24GB on node-22, 21GB on node-21 and 18GB on node-12.

    What I have noticed is that the zfs.img is growing day by day!! I don’t know why this is happening. The whole thing will get stuck after some time…

    Here is the output per node:
    http://paste.ubuntu.com/23516183/

    Once more thanks a lot.
    Best Regards,
    Miltos

    • Hey Miltos,

      Great to hear you managed to deploy OpenStack successfully following the guide!
      As for the ceph-osd issues, I think the `osd-devices` configured in the bundle is incorrect:

      options:
        osd-devices: /var/lib/ceph/osd/ceph-node-12 /var/lib/ceph/osd/ceph-node-21 /var/lib/ceph/osd/ceph-node-22
        osd-format: ext4
      

      You shouldn’t need to list all units’ paths like this. This config implies each of these paths is available on all units.
      I think you just need to set e.g. `osd-devices: /srv/ceph-osd` and make sure each of the storage nodes has this partition mounted (via the MAAS UI – create a LVG, and 2 LVs in it – mounted on / and /srv/ceph-osd, respectively). Then it should work out-of-the-box.
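      A minimal sketch of that change (the path is just an example; the command is only printed – run it yourself once /srv/ceph-osd is mounted on every storage node):

```shell
# Sketch: a single shared osd-devices path instead of per-node paths.
osd_path="/srv/ceph-osd"
cfg_cmd="juju config ceph-osd osd-devices=$osd_path"
echo "$cfg_cmd"
```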

      Not sure why you’re getting the SSH issue – I’d suggest asking someone from the OpenStack charmers team about it in #juju on FreeNode.

      As for the ZFS image growing, I’d expect it to grow a bit – depending on what’s running, especially in a span of one week. If it grows too quickly it can be a problem, and I’d suggest filing a bug against Juju in Launchpad.

      HTH,
      Dimiter

  3. Hi Dimiter,

    Just to clarify that I didn’t have the problem with the osd-devices when I used your bundle, although there I had the same configuration. Everything ran smoothly. This was the strange thing for me. Anyway, I understand your point and will try to follow your advice.
    My nodes use an external storage device. There I have two large partitions (1T and 2T) which I use to provide the virtual disks to my nodes. Each of the three storage nodes has its own two virtual disks: one mounted on / and one mounted on /var/lib/ceph/osd/ceph-node-## (e.g., for node-12 this is /var/lib/ceph/osd/ceph-node-12). If I understand correctly, what I have to do is mount the second disk at the same path on all storage nodes. Correct?
    Best Regards,
    Miltos

  4. Hi Dimiter,

    Thanks for the blog. I am wondering if MAAS 2.0 can be installed on Ubuntu Server 14.04? I have tried adding the ppa:maas/stable repository, but the one that got installed is MAAS version 1.9.4. Thanks… Lucio

    • Hey Lucio,

      No, I believe you can’t install MAAS later than 1.9 on 14.04 (none of the team’s PPAs have ‘ubuntu/trusty’ versions). You could try upgrading to xenial (16.04), or (beware: YMMV) building from source I guess.

      HTH,
      Dimiter

  5. Hi Dimiter,
    first thank you for this excellent guide. I got some used HP ProLiant servers and I’m running MAAS version 2.1.3+bzr5573-0ubuntu1 (16.04.1).
    I have all machines listed, and the network is configured to map our internal network configuration. I can use the machines with MAAS without problems, and ping and SSH work as well.
    However, when it comes to deploying pinger I experience huge problems. I don’t know what I did wrong, but I nailed it down to a possible Juju issue or wrong configuration.
    When I just bootstrap a juju controller and add one of the machines to the controller it will bootstrap the machine as expected. However when I run
    juju add-machine lxd:0
    it will run into the following error:
    machine-0: 03:15:51 WARNING juju.provisioner failed to start instance (unable to setup network: no obvious space for container "0/lxd/1", host machine has spaces: "admin-api", "compute-data", "default", "internal-api", "public-api", "storage-cluster", "storage-data"), retrying in 10s (3 more attempts)
    machine-0: 03:16:01 WARNING juju.provisioner failed to start instance (unable to setup network: no obvious space for container "0/lxd/1", host machine has spaces: "admin-api", "compute-data", "default", "internal-api", "public-api", "storage-cluster", "storage-data"), retrying in 10s (2 more attempts)
    machine-0: 03:16:11 WARNING juju.provisioner failed to start instance (unable to setup network: no obvious space for container "0/lxd/1", host machine has spaces: "admin-api", "compute-data", "default", "internal-api", "public-api", "storage-cluster", "storage-data"), retrying in 10s (1 more attempts)
    machine-0: 03:16:22 ERROR juju.provisioner cannot start instance for machine "0/lxd/1": unable to setup network: no obvious space for container "0/lxd/1", host machine has spaces: "admin-api", "compute-data", "default", "internal-api", "public-api", "storage-cluster", "storage-data"
    I googled a bit and it seems there was an issue with Juju, but it got fixed:
    https://bugs.launchpad.net/juju/+bug/1564395
    I tried to dig deeper, but I’m stuck now.
    When I run
    # lxc profile show default
    it prints out this:
    config: {}
    description: Default LXD profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: lxdbr0
        type: nic
    name: default

    I thought the problem might be that I don’t have eth0 but enp2s0f0 network devices; however, I changed it on one of the nodes in MAAS and it didn’t work either.

    Any hints or suggestions on how I can dig deeper?

    Thanks a lot,
    Janos

    • You must add a default binding ("": maas-space) to the bindings config,

      like:

      mysql:
          bindings:
            "": default
            shared-db: internal-api
          annotations:
            gui-x: '0'
            gui-y: '250'
          charm: cs:percona-cluster-249
          num_units: 2
          options:
            innodb-buffer-pool-size: 512M
            max-connections: 20000
          to:
          - lxd:2
          - lxd:3
      
  6. Dear Dimiter, dear all,

    first of all a huge thank you for your genius tutorial! Really brilliant!!! I’m now on part 6 and currently facing a first problem I couldn’t solve on my own. When I start deploying charms, whether your pinger or the OpenStack bundle, machines very often get stuck in pending state… If I run show-machine for the affected machines I get:

    model: controller
    machines:
      "0":
        juju-status:
          current: started
          since: 06 May 2017 21:17:21+02:00
          version: 2.1.2
        dns-name: 10.100.0.112
        ip-addresses:
        - 10.100.0.112
        - 10.50.0.112
        - 10.250.0.112
        - 10.30.0.112
        - 10.14.0.112
        - 10.150.0.112
        - 10.200.0.112
        - 10.14.3.112
        instance-id: r67qgw
        machine-status:
          current: running
          message: Deployed
          since: 06 May 2017 21:17:23+02:00
        series: xenial
        containers:
          0/lxd/0:
            juju-status:
              current: pending
              since: 06 May 2017 21:17:58+02:00
            instance-id: pending
            machine-status:
              current: pending
              since: 06 May 2017 21:17:58+02:00
            series: xenial
          0/lxd/1:
            juju-status:
              current: pending
              since: 06 May 2017 21:17:59+02:00
            instance-id: pending
            machine-status:
              current: allocating
              message: 'copying image for juju/xenial/amd64 from https://cloud-images.ubuntu.com/daily:
                39% (10.18MB/s)'
              since: 06 May 2017 21:22:39+02:00
            series: xenial
          0/lxd/2:
            juju-status:
              current: pending
              since: 06 May 2017 21:18:00+02:00
            instance-id: pending
            machine-status:
              current: pending
              since: 06 May 2017 21:18:00+02:00
            series: xenial
        constraints: mem=3584M
        hardware: arch=amd64 cores=8 mem=32000M tags=virtual availability-zone=Zone1_DC-Core
        controller-member-status: has-vote
    

    I started the whole deploy process several times, and it’s always different machines affected, but they always get stuck copying the image, at a different percentage?!?!

    Any ideas about that? I don’t think it’s a problem with the internet connection…

    Thanks alot! Mathias.

    • Hey Mathias!

      Thank you, and sorry you’re having issues :/
      It looks like the deployment is stuck at downloading the LXD images. From the status output ISTM that’s shortly after bootstrapping – right? If you leave it for a few minutes, does it reach 100% on one of the LXD machines? How about the hardware nodes – do they all come up OK without containers? If so, try deploying a pinger unit on each hardware node, and verify the physical network / VLANs / switches setup is working. Once that’s fine, fixing LXD could be as easy as setting up a caching proxy somewhere on the network, or perhaps just ssh into the host and/or lxd machine and ping manually?
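      If it helps, the manual check can be sketched as below – roughly what the pinger charm does per unit. The addresses are placeholders; substitute your own nodes’ IPs:

```shell
# Sketch: count reachable vs unreachable targets with ping(8).
# The addresses below are placeholders, not from the guide.
targets="10.14.0.101 10.14.0.102"

check_targets() {
  local t ok=0 fail=0
  for t in $targets; do
    if ping -c 1 -W 1 "$t" >/dev/null 2>&1; then
      ok=$((ok + 1))
    else
      fail=$((fail + 1))
    fi
  done
  echo "OK=$ok FAIL=$fail"
}

check_targets
```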

      Hope some of that helps 😉
      Cheers

  7. Hey Dimitern,

    Thanks for your answer! The download worked now … not exactly sure what the issue was, … anyway … but now I got another error … I’m not able to deploy the dashboard … in juju show-machine 0 I always got:

             current: provisioning error
              message: 'unable to setup network: no obvious space for container "0/lxd/3",
                host machine has spaces: "admin-api", "compute-data", "default", "internal-api",
                "public-api", "storage-cluster", "storage-data"'
    

    juju spaces shows:

    Space             Subnets
    admin-api         10.150.0.0/20
    compute-data      10.250.0.0/20
    compute-external  10.99.0.0/20
    default           10.14.0.0/20
    internal-api      10.100.0.0/20
    public-api        10.50.0.0/20
    storage-cluster   10.30.0.0/20
    storage-data      10.200.0.0/20
    unused            192.168.53.0/24
    

    Any ideas, what I’m doing wrong! Would be very happy about any idea!

    Thanks again!

    • Hey Mathias,

      From the provisioning error it seems the charm deployed to 0/lxd/3 is lacking specific space bindings.
      If that charm is `openstack-dashboard`, it does not specify bindings inside the bundle. Juju’s behavior has changed and now it seems the charm must specify explicit bindings.

      As I tested, the dashboard was supposed to be deployed inside a LXD container on machine 3 (hence the `- lxd:3` above), but it should work directly on the machine as well I guess. YMMV.

      What I can suggest is to try removing all units of `openstack-dashboard`, wait for them to be gone, and then remove the application itself. Once that’s done, try:

      `juju deploy cs:xenial/openstack-dashboard-243 --bind internal-api --to lxd:3`

      If that doesn’t work I’d suggest filing a bug against https://launchpad.net/juju-core and/or ask for help in #juju on IRC FreeNode.

      Cheers,
      Dimiter

  8. Dear Dimiter, thanks again!

    Dashboard deployed now and I can open the horizon web UI.

    But sorry I need to bother you again :/.

      Now I can’t log in 🙁 … I tried admin and openstack as the password … but the browser loads for 2 min and then I get a Chrome “page is not working” error … or in Firefox: “The connection was reset” …

      I also tried `juju config keystone admin-password="openstack"` … but got the same error …

    Now I think maybe a missing relation? But if I type

    `juju add-relation openstack-dashboard:identity-service keystone:identity-service`

    I got …

    `ERROR cannot add relation “openstack-dashboard:identity-service keystone:identity-service”: relation openstack-dashboard:identity-service keystone:identity-service already exists (already exists)`

    Sorry again … any more Ideas … its so close …

    BR Mathias.

    • Hey Mathias,

      I’m sorry, it’s been a while and my “juju-fu” seems to be getting “rusty” 🙂

      You’ll indeed need to re-create the relations between the newly deployed `openstack-dashboard` app and the other apps it needs to talk to. Once you’ve removed all units and the application itself as I suggested earlier, and re-deployed the app, a few additional steps are necessary (see below).

      Side note: that error message seems to suggest you found a bug, probably worth reporting. Juju should’ve removed the `openstack-dashboard:identity-service keystone:identity-service` relation (along with its corresponding document stored in MongoDB) when you removed the old `openstack-dashboard` application. Since the new one was deployed with the same name, it happens to find that stale relation doc.

      Anyway, to fully re-create the same scenario I’m describing (deployed from scratch with the script + bundle) for `openstack-dashboard`, do this (you’ve already done the first 2 steps, but it’s probably better to do them again):

      
      juju remove-unit openstack-dashboard/0  # (match your `juju status` output - remove all such units)
      juju remove-application openstack-dashboard
      juju remove-relation openstack-dashboard keystone
      
      watch juju status  # ( until the units, relation, and the app are gone. )
      
      juju deploy --to lxd:3 cs:xenial/openstack-dashboard-243 --bind internal-api
      juju config openstack-dashboard openstack-origin=cloud:xenial-newton
      juju add-relation openstack-dashboard keystone
      

      That should fix it, I think. Keep watching `juju status` for issues, give it some time to “settle”, then try logging into Horizon. And read the charm’s docs (https://jujucharms.com/openstack-dashboard/) for more info. 😉

      Cheers,
      Dimiter

  9. Hello Dimiter,

    Thank you very much for this detailed documentation.

    By the way, did you have a chance to update this documentation to Openstack Pike release?

    Thanks,

    Marcelo

    • Thanks, glad the posts are useful!
      No, I haven’t and frankly won’t have the time for it, sorry :/

  10. Hi Dimiter,

    Like everyone has already said — AMAZING BLOG. Thanks a lot for the detailed guide.

    I’ve come to this final page of the Juju Openstack setup and have been facing issues with bootstrapping pinger to test my setup. When I run deploy-4-nodes-pinger.sh, I get the following output:

    root@maas:~/pinger# ./deploy-4-nodes-pinger.sh
    2.2.6-trusty-amd64
    ERROR controller pinger-maas-hw not found
    Creating Juju controller “pinger-maas-hw” on maas-hw
    Looking for packaged Juju agent version 2.2.6 for amd64
    Launching controller instance(s) on maas-hw…
    – /MAAS/api/1.0/nodes/node-08412b46-cb22-11e7-b610-0cc47a3a4136/ (arch=amd64 mem=128G cores=48)
    Fetching Juju GUI 2.10.2
    Waiting for address
    Attempting to connect to 192.14.0.102:22

    And it gets stuck at the connection phase, where 192.14.0.102 is the node (specified with the --to directive) on which I am trying to bootstrap maas-hw. Running "juju status" and other suggestions like "juju debug-log -i unit-pinger-0 --replay --no-tail -l ERROR" and "juju run --unit pinger/0 -- 'cat /var/lib/juju/agents/unit-pinger-0/charm/ping.log'" (as given on the Pinger GitHub page) all give the same error of:

    ERROR no API addresses

    The current setup is: MAAS 1.9 running on Ubuntu 14.04, Juju 2.x, and four nodes running Ubuntu 16.04.
    PXE booting 14.04 on the nodes threw the "error: 'curtin' failed: configuring disk: sda" error, and it was mentioned here (https://askubuntu.com/questions/847027/maas-2-1-on-16-10-cannot-deploy-a-trusty-machine-disks-missing) that trusty did not have the drivers to access the disks, so I went with 16.04 on the nodes.

    Any suggestions to successfully bootstrap maas-hw/pinger would be really helpful.

  11. Hi Dimiter, Would it be possible to install the juju-gui on the maas controller? I have tried and MAAS attempts to deploy it to one of the actual nodes. :-/ Once again, THANK YOU!

  12. hi Dimiter,
    there is no way to get neutron-api running properly 🙁 a curl to the API endpoint ends in “curl: (52) Empty reply from server” 🙁 juju tells me I have a nicely working OpenStack, but it isn’t.
    can i ask you to guide me to get a working environment? maybe a skype call with remote desktop?
    thx, volker…

  13. Hello,
    Thank you for the great set of guides.
    I’ve really learned a lot.
    I’ve been attempting to setup OpenStack on MAAS version: 2.4.2 and juju 2.5 bionic setup via snap.
    I have lots of questions.
    Is it possible for you to do another guide updated for MAAS version: 2.4.2 and juju 2.5 and OpenStack #58?
    So much has changed, and I did attempt to set up and configure everything via Ubuntu 16.04 but ran into so many errors on my current 18.04 MAAS provisioning service that I think an updated guide covering the differences would help.
    Besides, netplan is a huge change in 18.04, if you’ve never used it or heard of it that is.

    Thank you for all the hard work you put in.

  14. Hi,
    I prepared my setup exactly as you described here. My problem is with the OpenStack version. I want to deploy openstack-base focal, but a few components are different. I made the necessary changes and the deployment starts; the containers are running, but juju status shows them as pending. Do you have a 4-node bundle YAML file with an overlay?

    • Have you sorted this out? I’m working on preparing a bundle for a 4-node compute and 1-node network cloud.
      I have gone through all of the relevant blogs on this topic and feel I have come to understand the Juju spaces and MAAS syncing, and finally the naming convention as outlined above.

  15. Hi Bogdan,

    I’ve been working at this for some time. I was able to create an overlay for OpenStack Victoria with MAAS 2.9 and 20.04 as base OS. My biggest challenge thus far is setting bindings accordingly, and that I am using only a 1g interface for MAAS. This has caused an issue for me when provisioning 5 nodes (3 ceph, 3 compute, 1 network, 1 dashboard/api/vault).

    I’m keeping some of the changes on github at: https://github.com/cmera/openstack-bundles/tree/master/custom
    You can look at the overlay bindings and how it’s set up. I’ve tried to keep as much of the information provided here by this guide, but there’s a bit of guesswork, as the bundle YAML file that was included here is quite different from today’s Juju bundles.
