
Finale: OpenStack Deployed on MAAS with Juju!

This is the final part 6 of my “Deploying OpenStack on MAAS 1.9+ with Juju” series, which finally arrives after almost 1 year! \o/ In the last post – Advanced Networking: Deploying OpenStack on MAAS 1.9+ with Juju – I described the main concepts in Juju, and especially the recent networking enhancements you can use today with the most recent stable release of Juju: 2.0.1.

Here the pieces finally come together, and at the end you should have your own working private cloud, running the latest OpenStack (Newton), on 4 bare metal machines deployed by Juju via MAAS!

First, I’d like to thank all of my readers for all their support and patience! I know it took a long time, but I hope it will be all worth the wait 🙂

I know from experience a lot can go wrong when installing a complicated software platform like OpenStack on such a non-trivial networking setup. And when it does, it’s easy to start from scratch with Juju, but it still wastes time and breeds frustration. I’d like to spare you that last bit by first going through a “pre-flight check” deployment, using a simple Pinger charm I wrote for that purpose (although it’s generally useful as well). The charm does not follow the best practices for writing Juju charms, but that’s intentional – to keep it simple. Pinger (source) contains the mandatory metadata, less than 150 lines of bash script, and a single config option. What it does is simple: it runs ping(8) against a number of targets – a configurable list (the extra-targets setting), along with the unit’s private address and any explicitly bound extra-bindings (ep0..ep9) specified at deployment time.

Those targets are checked on each unit, and over a peer relation units of the same pinger charm can exchange and check each other’s targets. The result of the check is conveniently indicated with “active / OK” or “blocked / FAIL” Juju status values. 

Test Deployment with Pinger(s) 

We’ll use a couple of scripts and YAML config files and follow roughly the same steps for the test deployment and the real OpenStack deployment.

Setting up Juju 2.0.1

You’ll need to add ppa:juju/stable to get 2.0.1 on trusty (or xenial):
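Something like this should do it (the standard PPA workflow):

sudo add-apt-repository -y ppa:juju/stable
sudo apt-get update
sudo apt-get install -y juju
juju version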

Now we should add our MAAS as a cloud in Juju (more details here). Create a maas-hw-cloud.yaml (e.g. in $HOME) with the following content:
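A minimal sketch of that file (the endpoint address here is just an example – use your own MAAS region controller’s address and port):

clouds:
  maas-hw:
    type: maas
    auth-types: [oauth1]
    endpoint: http://10.14.0.1:5240/MAAS/   # adjust host/port to your MAAS; no /api/2.0/ suffix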

The endpoint must match the one you used for logging into the MAAS CLI, but without the /api/2.0/ suffix! (Juju adds that automatically). Get it by running maas list:
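For example (the output format is roughly profile name, API URL, key – the values below are purely illustrative):

maas list
# juju  http://10.14.0.1:5240/MAAS/api/2.0/  <oauth-key>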

You should be using separate MAAS users for each Juju controller you bootstrap.

Now we need a maas-hw-creds.yaml as well. Pick the “juju” user’s MAAS key we previously created (you could use the “root” user, but it’s bad practice) and replace the highlighted maas-oauth value with it:
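A sketch of that file (here the credential is named “juju” after the MAAS user; paste your own key in place of the placeholder):

credentials:
  maas-hw:
    juju:
      auth-type: oauth1
      maas-oauth: <API-key-of-the-juju-MAAS-user>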

Finally, run these two to configure Juju to use your MAAS:
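Assuming the file names above:

juju add-cloud maas-hw maas-hw-cloud.yaml
juju add-credential maas-hw -f maas-hw-creds.yaml
juju clouds
juju credentials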

We’re now ready to bootstrap and deploy, but first a few details about it. 

Bundle, Script and Configuration

In the beginning I mentioned we’ll be using a modified openstack-base bundle (I’ve tested revision 48 in particular). We’ll also use a small bash script to drive the deployment. Because we have exactly 4 machines to work with, we cannot deploy the original bundle directly (it would need 5 machines – one extra for the Juju controller). Therefore, the script runs juju bootstrap, deploys a scaled-down bundle with only 3 machines, and then adds more units to the Juju controller machine. The same approach will be used for the test deployment, and we’ll also need a small YAML config (which can be the same for the real deployment later).

Here are the script and config files I used:
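In outline, deploy-4-nodes-pinger.sh looks like this (a sketch – file and application names are placeholders, and config.yaml just holds the bootstrap/model configuration, e.g. default-series: xenial):

#!/bin/bash
# deploy-4-nodes-pinger.sh (sketch)
set -e

# Clean up any previous run -- answer "y" if asked (not strictly necessary).
juju kill-controller pinger-maas-hw || true

# Bootstrap the controller onto one of the storage/compute nodes.
juju bootstrap maas-hw pinger-maas-hw --to node-22 --config config.yaml

# Deploy the scaled-down, 3-machine pinger bundle on the controller model.
juju switch pinger-maas-hw:controller
juju deploy ./pinger-bundle.yaml -m pinger-maas-hw:controller

# Add extra pinger units on the Juju controller machine (machine 0).
juju add-unit pinger-storage -m pinger-maas-hw:controller --to 0       # application name assumed
juju add-unit pinger-storage -m pinger-maas-hw:controller --to lxd:0

# Watch the deployment until everything is active / OK (Ctrl+C to stop).
watch -c juju status -m pinger-maas-hw:controller --color

# Final cleanup -- answer "y" only if you're done with the test deployment.
juju kill-controller pinger-maas-hw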

And the bundle itself is quite simple:
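A sketch of what it looks like (Juju 2.0 bundle format; the application names, the exact endpoint-to-space mapping, and the target IPs are my illustrations – the essential parts are the spaces constraints on the machines, the ep0..ep5 bindings, and extra-targets):

series: xenial
machines:
  "0":                                      # the network node (node-11), picked via the compute-external space
    constraints: spaces=compute-external
  "1":                                      # storage/compute node
    constraints: spaces=storage-cluster
  "2":                                      # storage/compute node
    constraints: spaces=storage-cluster
services:
  pinger-network:
    charm: ./pinger                         # local charm directory (path assumed)
    num_units: 2
    to: ["0", "lxd:0"]                      # one on the metal, one in a container
    bindings:
      ep0: internal-api
      ep1: admin-api
      ep2: public-api
      ep3: compute-data
      ep4: compute-external
      ep5: default
    options:
      extra-targets: "10.14.0.1 10.14.1.1 google.com"   # MAAS .1 IPs + a public hostname (list format per the charm's docs)
  pinger-storage:
    charm: ./pinger
    num_units: 4
    to: ["1", "2", "lxd:1", "lxd:2"]
    bindings:
      ep0: internal-api
      ep1: admin-api
      ep2: public-api
      ep3: compute-data
      ep4: storage-data
      ep5: storage-cluster
    options:
      extra-targets: "10.14.0.1 10.14.1.1 google.com"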

Since we have 3 storage/compute nodes and 1 network node, we can bootstrap to one of the storage/compute nodes (I’ve used --to node-22 in the script to do that), deploy the 3-node bundle, and add more units to the Juju controller node. We use 2 copies of the Pinger charm, because the network and storage/compute nodes have different endpoint binding requirements (the compute-external space vs storage-cluster, respectively). Also, some units are placed inside LXD containers to ensure they will meet the network connectivity requirements of the OpenStack bundle. Notice how we’re using spaces constraints on the machines to pick the correct ones (only node-11 has access to the compute-external space, while all the storage/compute nodes have access to the storage-cluster space). Pinger’s endpoints (ep0..ep5) are bound to the spaces the OpenStack charms expect to be present. To verify end-to-end connectivity on the nodes once deployed by MAAS, we add the IPs configured on MAAS interfaces (ending in .1) to the extra-targets setting of each of the pingers. Finally, to ensure DNS resolution works and deployed nodes can reach the internet, google.com (or any similar public hostname) is also present in extra-targets.

In the script we could omit the -m pinger-maas-hw:controller arguments if we have no other Juju controller / model running (the switch command is sufficient to ensure we deploy the remaining 3 nodes on the controller model). The initial (and final) kill-controller commands are not strictly necessary; they just make it easier to run the script multiple times and clean up before/after each run.

Deploying (Successfully)

Now for the easy part: run $ ./deploy-4-nodes-pinger.sh, sit back, and wait for about 15 minutes. If all goes well, you’ll eventually get juju status output (which the script watches) like this:

Congrats if you got that far – you’re ready to deploy OpenStack with a reasonable expectation that it will succeed 🙂 When satisfied, hit Ctrl+C to interrupt the watch command, and answer “y” to kill the Juju controller and prepare for the OpenStack deployment.

Troubleshooting (Failures)

If you see “FAIL (x of y targets unreachable)” on one or more units, something’s not correctly configured (i.e. YMMV – in any case OpenStack won’t work either until the issues are resolved).

As described on the Pinger charm page, check the unit, status, or ping logs of the failed unit(s):
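For example (using the first pinger unit – substitute the actual application/unit name from your status output):

juju status pinger
juju debug-log -i unit-pinger-0 --replay --no-tail -l ERROR
juju run --unit pinger/0 -- 'cat /var/lib/juju/agents/unit-pinger-0/charm/ping.log'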

A few tips for probable causes:

  • If you cannot resolve google.com, the MAAS DNS is not working properly (check /var/log/maas/maas.log and/or /var/log/syslog)
  • If google.com is resolvable but unreachable, your nodes cannot reach the internet (check whether you have SNAT/MASQUERADE enabled in iptables for traffic coming from, but not destined to, the 10.14.0.0/20 range)
  • If one or more endpoints of the units (or the MAAS .1 IPs) are unreachable, you likely have misconfigured VLANs/ports on either MAAS or the switches.
  • If nodes in the same zone are reachable but nodes across zones are not, definitely check the switches’ ports/VLANs.

Deploying OpenStack Newton

OK, assuming all went fine so far, we only need to change the script slightly (see below) and use the modified openstack-base bundle with 3 nodes only.

The only notable change is the remove-unit / add-unit “dance” happening there. We want to use the bundle for declaring all the relations (as there are a lot of them), but we still want to end up with the same distribution and placement of units as with the original 4-node openstack-base bundle. We need to host the dashboard and keystone on the network node (alongside the Juju controller), but they also need to be in the bundle to declare their relations, config, bindings, etc. I found it least painful to temporarily deploy openstack-dashboard/0 and keystone/0 on one of the storage/compute nodes, and then “move them” (remove and add a new unit of their application) to machine 0. The other few add-unit commands ensure we have the expected number of units, matching the original bundle.
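In commands, the dance boils down to something like this (a sketch – exact unit numbers depend on the deployment, and machine 0 is the network node hosting the Juju controller):

juju remove-unit openstack-dashboard/0
juju remove-unit keystone/0
juju add-unit openstack-dashboard --to lxd:0
juju add-unit keystone --to lxd:0
# ...plus a few more add-unit calls (not shown) to reach the unit counts
# of the original 4-node openstack-base bundle.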

The bundle itself is too many lines to dump here, I think (fewer than the original 4-node one, but still). So I’ve added a link to it instead: http://paste.ubuntu.com/23424163/ It should be straightforward to compare it to the original and see that the changes are mostly cosmetic, except for the interesting ones describing endpoint bindings.

Show Time!

Like before, just run $ ./deploy-4-nodes.sh, sit back (a while longer this time – 30-45m) and watch OpenStack being stood up!

Once done, you should see output similar to the one below, and if you want to keep it, do not answer “y” to the kill-controller question after hitting Ctrl+C! 🙂

 

Awesome! Now What?

You can follow the instructions in the openstack-base bundle README to get to the dashboard (Horizon), scale the nodes out, and verify it all works. There are a ton of resources online about OpenStack itself, but then again… You can also use Juju on top of your newly installed OpenStack to deploy workloads (you’d need to import some images first)!
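For example, importing a Xenial cloud image into Glance could look like this (a sketch – source the admin credentials first, as per the bundle README; the image URL and name are up to you):

curl -L -o xenial.img https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
openstack image create --public --container-format bare --disk-format qcow2 \
    --file xenial.img xenial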

Last but not least: have fun! 🙂 Experiment with your own in-house cloud, tweak it, tear it all or partially down and redeploy it (the process is the same). I might get to write some follow ups about some of these steps in the near future.

In Conclusion

Whew 🙂 we made it to the end! \o/ I hope it was useful and worth waiting for this final part (it almost didn’t happen TBH).

I’d appreciate your comments and experiences, and I’ll try to help with issues you might encounter. Juju is more of a free-time, side project for me now though (not a job), so it might take a while to respond.

All the best, thanks again, and good luck! 

Here are some convenient links to all articles in the series:

  1. Introduction
  2. Hardware Setup
  3. MAAS Setup
  4. Nodes Networking
  5. Advanced Networking
  6. Finale (this one)

 


21 Comments

  • November 7, 2016 - 14:49 | Permalink

    Hi Dimiter,
    Thanks for the article, it helped me see how easy it is to deploy MAAS using the CLI. I was able to do my MAAS deployment using Zotac computers (which don’t have IPMI), and when MAAS turns them on, I just need to send the wakeonlan from the router (no power control via MAAS).
    So far I was able to deploy maas 2.0 with juju 2.0, here are some observations if someone is trying to do this:
    Some services getting deployed to LXD containers failed because they needed KVM, so what I did to work around this was deploy a few services to create the nodes and at least 1 container (to have lxd configured by juju). First, I modified the bundle to only install services on the physical computers; here is the content of bundle_cust.yaml:

    Then I ran it (after bootstrapping the node)

    juju deploy bundle_cust.yaml

    Now I install 1 service that should be placed on a container inside each of the computers (to have lxd configured by juju)

    juju deploy cs:~openstack-charmers-next/xenial/ceph-mon --to lxd:1
    juju add-unit ceph-mon --to lxd:2
    juju add-unit ceph-mon --to lxd:3

    juju deploy cs:~openstack-charmers-next/xenial/ceph-radosgw --to lxd:0

    These units will fail, but we run them to configure lxd. Let’s now delete them:

    juju remove-unit ceph-mon/0
    juju remove-unit ceph-mon/1
    juju remove-unit ceph-mon/2
    juju remove-application ceph-mon
    juju remove-unit ceph-radosgw/0
    juju remove-application ceph-radosgw

    Now let’s install kvm, make it available on containers, and load the images that juju needs, to avoid errors while installing services on containers. To make my reply shorter, run these commands on all physical nodes (node 0 – node 3); here is node 0.


    juju ssh 0
    sudo apt install kvm
    sudo lxc
    sudo lxc config edit
    sudo lxc profile edit default

    Add the following at the end (on the devices section):
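    A typical entry for exposing /dev/kvm to the containers would be something like this (the exact snippet may vary):

    kvm:
      path: /dev/kvm
      type: unix-char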

    Save the file and now get the linux image to avoid errors


    sudo lxc image copy ubuntu:16.04 local: --alias ubuntu-xenial
    exit

    Now Repeat for node 1-3

    And now we can manually install the bundle


    juju deploy cs:~openstack-charmers-next/xenial/ceph-mon --to lxd:1
    juju add-unit ceph-mon --to lxd:2
    juju add-unit ceph-mon --to lxd:3

    Make sure that you use the config file moving forward to set the options
    Content:


    juju deploy --to lxd:0 --config bun.yaml cs:~openstack-charmers-next/xenial/ceph-radosgw

    juju deploy --to lxd:2 cs:~openstack-charmers-next/xenial/glance
    juju deploy --to lxd:3 --config bun.yaml cs:~openstack-charmers-next/xenial/keystone
    juju deploy --to lxd:0 --config bun.yaml cs:~openstack-charmers-next/xenial/percona-cluster mysql
    juju deploy --to lxd:1 --config bun.yaml cs:~openstack-charmers-next/xenial/neutron-api
    juju deploy cs:~openstack-charmers-next/xenial/neutron-openvswitch
    juju deploy --to lxd:2 --config bun.yaml cs:~openstack-charmers-next/xenial/nova-cloud-controller
    juju deploy --to lxd:3 cs:~openstack-charmers-next/xenial/openstack-dashboard
    juju deploy --to lxd:0 cs:~openstack-charmers-next/xenial/rabbitmq-server

    Relations (please note that mysql has been replaced by percona-cluster, deployed under the application name mysql):


    juju relate nova-compute:amqp rabbitmq-server:amqp
    juju relate neutron-gateway:amqp rabbitmq-server:amqp
    juju relate keystone:shared-db mysql:shared-db
    juju relate nova-cloud-controller:identity-service keystone:identity-service
    juju relate glance:identity-service keystone:identity-service
    juju relate neutron-api:identity-service keystone:identity-service
    juju relate neutron-openvswitch:neutron-plugin-api neutron-api:neutron-plugin-api
    juju relate neutron-api:shared-db mysql:shared-db
    juju relate neutron-api:amqp rabbitmq-server:amqp
    juju relate neutron-gateway:neutron-plugin-api neutron-api:neutron-plugin-api
    juju relate glance:shared-db mysql:shared-db
    juju relate glance:amqp rabbitmq-server:amqp
    juju relate nova-cloud-controller:image-service glance:image-service
    juju relate nova-compute:image-service glance:image-service
    juju relate nova-cloud-controller:cloud-compute nova-compute:cloud-compute
    juju relate nova-cloud-controller:amqp rabbitmq-server:amqp
    juju relate nova-cloud-controller:quantum-network-service neutron-gateway:quantum-network-service
    juju relate nova-compute:neutron-plugin neutron-openvswitch:neutron-plugin
    juju relate neutron-openvswitch:amqp rabbitmq-server:amqp
    juju relate openstack-dashboard:identity-service keystone:identity-service
    juju relate nova-cloud-controller:shared-db mysql:shared-db
    juju relate nova-cloud-controller:neutron-api neutron-api:neutron-api
    juju relate ceph-mon:client nova-compute:ceph
    juju relate ceph-mon:client glance:ceph
    juju relate ceph-osd:mon ceph-mon:osd
    juju relate ntp:juju-info nova-compute:juju-info
    juju relate ntp:juju-info neutron-gateway:juju-info
    juju relate ceph-radosgw:mon ceph-mon:radosgw
    juju relate ceph-radosgw:identity-service keystone:identity-service
    juju relate nova-compute:lxd lxd:lxd

    And openstack deployed properly, here is the link to the charm that I used:
    https://jujucharms.com/u/openstack-charmers-next/openstack-lxd

    Regards,
    Elvis

    • dimitern
      November 7, 2016 - 14:55 | Permalink

      Hey Elvis,

      Thanks for the awesome summary and I’m glad it worked for you! \o/ 🙂

      Note: I’ve reformatted your comment to make it easier to read – sorry my WP comments plugin seems to be messing things up sometimes :/

      • December 13, 2016 - 07:09 | Permalink

        Guys,
        I finally found my problem. The guide created by Dimitern works perfectly, and the details on https://jujucharms.com/openstack-base/ work fine, but the problem was that, for some reason, installing openstack-base from scratch using juju was defining two security groups.
        #While running the following command:
        neutron security-group-rule-create --protocol icmp --direction ingress default

        #I was getting the following msg:
        Multiple security_group matches found for name ‘default’, use an ID to be more specific.

        #Then I wanted to see all groups so I went to the help
        neutron -h

        #Found the command to list all groups and execute it
        neutron security-group-list

        #and BINGO, found why in my case, the openstack installation was not communicating #properly with VMs created inside of it. So I deleted the second group
        neutron security-group-delete f837ae83-1427-4838-ad0d-c0bb768306d7

        #Added my rules as described on the openstack base charm
        neutron security-group-rule-create --protocol icmp --direction ingress default
        #This time it took it! HURRAY!

        #Finally opened ssh
        neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress default

        #Now I was able to access the instance

        #Please note that what I had to do to get it working was to define the interface for the br-ex, so neutron configuration looks like this:
        bridge-mappings: physnet1:br-ex
        data-port: br-ex:enp3s0.99

        Dimitern,
        If possible, please delete my last 2 reply on December 1, 2016 – 03:42 and on December 10, 2016 to make sure that readers don’t get confused.

        Regards,
        Elvis

        • dimitern
          December 13, 2016 - 08:40 | Permalink

          Yes, sure Elvis – done!

          And thanks for the feedback 🙂

  • Miltos
    November 22, 2016 - 12:11 | Permalink

    Dear Dimiter,

    Your last post is also great. My setup is on virtual machines. I am using an openvswitch to connect the nodes to each other and to the MAAS node (which is also a virtual machine). I ran your script and bundle with no problem, and the final installation of OpenStack was really quick.

    I have made some pastes in order not to have such a big output here. Here is what I have tested:

    Your bundle (with minor changes according my virtual machines hardware):
    http://paste.ubuntu.com/23490871/

    Your script (vm instead of hw):
    http://paste.ubuntu.com/23490907/

    The output of charms installation:
    http://paste.ubuntu.com/23490913/

    The set up and creation of Openstack instances:
    http://paste.ubuntu.com/23490918/

    I have also tried to bootstrap juju in a separate virtual machine and after that deploy the charms manually, but I had two problems (one I solved and one I didn’t). I had to configure the ceph-osds by hand (successfully), but I didn’t manage to use the ssh key for accessing the instances created in openstack.
    Here are all the details:
    http://paste.ubuntu.com/23516084/

    One last observation is that when I did the installation my vm nodes (node-12, node-21 and node-22) used almost 9GB of my hard disk. This has grown after 6 days up to 24GB on node-22, up to 21GB on node-21 and up to 18GB on node-12.

    What I have noticed is that the zfs.img is growing day by day!! I don’t know why this is happening. The whole thing will get stuck after some time…

    Here is the output per node:
    http://paste.ubuntu.com/23516183/

    Once more thanks a lot.
    Best Regards,
    Miltos

    • dimitern
      November 22, 2016 - 16:12 | Permalink

      Hey Miltos,

      Great to hear you managed to deploy OpenStack successfully following the guide!
      As for the ceph-osd issues, I think the osd-devices configured in the bundle is incorrect:

      You shouldn’t need to list all units’ paths like this. This config implies each of these paths is available on all units.
      I think you just need to set e.g. osd-devices: /srv/ceph-osd and make sure each of the storage nodes has this partition mounted (via the MAAS UI – create a LVG, and 2 LVs in it – mounted on / and /srv/ceph-osd, respectively). Then it should work out-of-the-box.

      Not sure why you’re getting the SSH issue – I’d suggest asking in #juju on FreeNode someone from the OpenStack charmers team about this.

      As for the ZFS image growing, I’d expect it to grow a bit – depending on what’s running, especially in a span of one week. If it grows too quickly it can be a problem, and I’d suggest filing a bug against Juju in Launchpad.

      HTH,
      Dimiter

  • Miltos
    November 23, 2016 - 10:36 | Permalink

    Hi Dimiter,

    Just to clarify that I didn’t have the problem with the osd-devices when I used your bundle, although I had the same configuration there. Everything ran smoothly. This was the strange thing for me. Anyway, I understand your point and I will try to follow your advice.
    My nodes are mounted to an external storage device. There I have two large partitions, 1T and 2T, which I use to provide the virtual disks to my nodes. Each of the three storage nodes has its own two virtual disks: one mounted on / and one mounted on /var/lib/ceph/osd/ceph-node-## (e.g., for node-12 this is /var/lib/ceph/osd/ceph-node-12). If I understand correctly, what I have to do is mount all the storage nodes on the same second disk (image). Correct?
    Best Regards,
    Miltos

    • dimitern
      November 23, 2016 - 11:05 | Permalink

      Hi Miltos,

      Yes, mounting the second image on the same mount point for all storage nodes should work. Then you can use that single mount point in osd-devices.
      For more details, have a look at the docs here: https://jujucharms.com/docs/stable/charms-storage

      Cheers,
      Dimiter

  • Lucio Menzel
    December 18, 2016 - 22:25 | Permalink

    Hi Dimiter,

    Thanks for the blog. I am wondering if MAAS 2.0 can be installed on ubuntu server 14.04 ? I have tried adding the ppa:maas/stable repository but the one that got installed is maas version 1.9.4. Thanks… Lucio

    • dimitern
      December 19, 2016 - 11:01 | Permalink

      Hey Lucio,

      No, I believe you can’t install MAAS later than 1.9 on 14.04 (none of the team’s PPAs have ‘ubuntu/trusty’ versions). You could try upgrading to xenial (16.04) or (beware: YMMV) from source I guess.

      HTH,
      Dimiter

  • Janos Schwellach
    March 28, 2017 - 05:21 | Permalink

    Hi Dimiter,
    first thank you for this excellent guide. I got some used HP ProLiant Servers and I’m running MAAS Version 2.1.3+bzr5573-0ubuntu1 (16.04.1).
    I have all machines listed and network is configured to map our internal network configurations. I can use the machines with MAAS without problems and pings and ssh is working as well.
    However when coming to deploy pinger I experience huge problems. I don’t know what I made wrong but I nailed it down to a possible juju issue or wrong configuration.
    When I just bootstrap a juju controller and add one of the machines to the controller it will bootstrap the machine as expected. However when I run
    juju add-machine lxd:0
    it will run into the following error:
    machine-0: 03:15:51 WARNING juju.provisioner failed to start instance (unable to setup network: no obvious space for container “0/lxd/1”, host machine has spaces: “admin-api”, “compute-data”, “default”, “internal-api”, “public-api”, “storage-cluster”, “storage-data”), retrying in 10s (3 more attempts)
    machine-0: 03:16:01 WARNING juju.provisioner failed to start instance (unable to setup network: no obvious space for container “0/lxd/1”, host machine has spaces: “admin-api”, “compute-data”, “default”, “internal-api”, “public-api”, “storage-cluster”, “storage-data”), retrying in 10s (2 more attempts)
    machine-0: 03:16:11 WARNING juju.provisioner failed to start instance (unable to setup network: no obvious space for container “0/lxd/1”, host machine has spaces: “admin-api”, “compute-data”, “default”, “internal-api”, “public-api”, “storage-cluster”, “storage-data”), retrying in 10s (1 more attempts)
    machine-0: 03:16:22 ERROR juju.provisioner cannot start instance for machine “0/lxd/1”: unable to setup network: no obvious space for container “0/lxd/1”, host machine has spaces: “admin-api”, “compute-data”, “default”, “internal-api”, “public-api”, “storage-cluster”, “storage-data”
    I googled a bit and it seems there was an issue with juju but it got fixed:
    https://bugs.launchpad.net/juju/+bug/1564395
    I tried to dig deeper, but I’m stuck now.
    When I run
    # lxc profile show default
    it prints out this:
    config: {}
    description: Default LXD profile
    devices:
      eth0:
        name: eth0
        nictype: bridged
        parent: lxdbr0
        type: nic
    name: default

    I thought the problem might be that I don’t have eth0 but enp2s0f0 network devices instead; however, I changed it in one of the nodes in MAAS and it didn’t work either.

    Any hints or suggestions on what I can do to dig deeper?

    Thanks a lot,
    Janos

    • djedje
      April 20, 2017 - 17:43 | Permalink

      You must add a default binding in the bindings config, i.e. “”: maas-space,

      like:
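      (Something like this in the bundle, with the space name being whatever your default MAAS space is:)

      bindings:
        "": maas-space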

  • Mathias
    May 7, 2017 - 11:16 | Permalink

    Dear Dimiter, dear all,

    first of all a huge thank you for your genius tutorial! Really brilliant!!! I’m now in part 6 and currently facing a first problem I couldn’t solve on my own. When I start deploying charms – either your pinger or the openstack bundle – machines very often get stuck in the pending state… If I run show-machine for the affected machines I get:

    I started the whole deploy process several times and it’s always different machines affected, but always stuck at copying the image at a different percentage?!

    Any ideas about that? I don’t think it’s a problem with the internet connection…

    Thanks a lot! Mathias.

    • dimitern
      May 10, 2017 - 22:49 | Permalink

      Hey Mathias!

      Thank you, and sorry you’re having issues :/
      It looks like the deployment is stuck at downloading the LXD images. From the status output ISTM that’s shortly after bootstrapping – right? If you leave it for a few minutes, does it reach 100% on one of the LXD machines? How about the hardware nodes – do they all come up OK without containers? If so, try deploying a pinger unit on each hardware node, and verify the physical network / VLANs / switches setup is working. Once that’s fine, fixing LXD could be as easy as setting up a caching proxy somewhere on the network, or perhaps just ssh into the host and/or lxd machine and ping manually?

      Hope some of that helps 😉
      Cheers

  • Mathias
    May 13, 2017 - 22:05 | Permalink

    Hey Dimitern,

    Thanks for your answer! The download worked now … not exactly sure what the issue was, … anyway … but now I got another error … I’m not able to deploy the dashboard … in juju show-machine 0 I always got:

    juju spaces shows:

    Any ideas, what I’m doing wrong! Would be very happy about any idea!

    Thanks again!

    • dimitern
      May 15, 2017 - 10:35 | Permalink

      Hey Mathias,

      From the provisioning error it seems the charm deployed to 0/lxd/3 is lacking specific space bindings.
      If that charm is openstack-dashboard, it does not specify bindings inside the bundle. Juju’s behavior has changed and now it seems the charm must specify explicit bindings.

      As I tested, the dashboard was supposed to be deployed inside a LXD container on machine 0 (hence the - lxd:3 above), but it should work on machine 0 as well I guess. YMMV.

      What I can suggest is to try removing all units of openstack-dashboard, wait for them to be gone, and then remove the application itself. Once that’s done, try:

      juju deploy cs:xenial/openstack-dashboard-243 --bind internal-api --to lxd:3

      If that doesn’t work I’d suggest filing a bug against https://launchpad.net/juju-core and/or ask for help in #juju on IRC FreeNode.

      Cheers,
      Dimiter

  • Mathias
    May 15, 2017 - 19:04 | Permalink

    Dear Dimiter, thanks again!

    Dashboard deployed now and I can open the horizon web UI.

    But sorry I need to bother you again :/.

    Now I can’t login 🙁 … I tried admin and openstack as password … but the browser keeps loading for 2 min and then I get a Chrome “page is not working” error … or in Firefox: The connection was reset …

    I tried as well a juju config keystone admin-password="openstack" ... but same error …

    Now I think maybe a missing relation? But if I type

    juju add-relation openstack-dashboard:identity-service keystone:identity-service

    I got …

    ERROR cannot add relation "openstack-dashboard:identity-service keystone:identity-service": relation openstack-dashboard:identity-service keystone:identity-service already exists (already exists)

    Sorry again … any more Ideas … its so close …

    BR Mathias.

    • dimitern
      May 17, 2017 - 00:29 | Permalink

      Hey Mathias,

      I’m sorry, it’s been a while and my “juju-fu” seems to be getting “rusty” 🙂

      You’ll indeed need to re-create the relations between the newly deployed openstack-dashboard app and the other apps it needs to talk to. Once you’ve removed all units and the application itself as I suggested earlier, and re-deployed the app, a few additional steps are necessary (see below).

      Side note: That error message seems to suggest you found a bug, probably worth reporting. Juju should’ve removed the openstack-dashboard:identity-service keystone:identity-service relation (along with its corresponding document stored in MongoDB) when you removed the old openstack-dashboard application. Since the new one was deployed with the same name, it happens to find that stale relation doc.

      Anyway, to fully re-create the same scenario I’m describing (deployed from scratch with the script + bundle) for openstack-dashboard, do this (you’ve already done the first 2 steps, but it’s probably better to do them again):
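      Something along these lines (essentially the same commands as in my earlier reply):

      juju remove-unit openstack-dashboard/0        # repeat for any other openstack-dashboard units
      juju remove-application openstack-dashboard
      juju deploy cs:xenial/openstack-dashboard-243 --bind internal-api --to lxd:3
      juju add-relation openstack-dashboard:identity-service keystone:identity-service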

      That should fix it, I think. Keep watching juju status for issues, give it some time to “settle”, and try logging into Horizon. And read the charm’s docs (https://jujucharms.com/openstack-dashboard/) for more info. 😉

      Cheers,
      Dimiter

  • Marcelo Silva
    October 26, 2017 - 19:54 | Permalink

    Hello Dimiter,

    Thank you very much for this detailed documentation.

    By the way, did you have a chance to update this documentation to Openstack Pike release?

    Thanks,

    Marcelo

    • dimitern
      October 27, 2017 - 12:01 | Permalink

      Thanks, glad the posts are useful!
      No, I haven’t and frankly won’t have the time for it, sorry :/

  • RakV
    November 21, 2017 - 10:42 | Permalink

    Hi Dimiter,

    Like everyone has already said — AMAZING BLOG. Thanks a lot for the detailed guide.

    I’ve come to this final page of the Juju Openstack setup and have been facing issues with bootstrapping pinger to test my setup. When I run deploy-4-nodes-pinger.sh, I get the following output:

    root@maas:~/pinger# ./deploy-4-nodes-pinger.sh
    2.2.6-trusty-amd64
    ERROR controller pinger-maas-hw not found
    Creating Juju controller “pinger-maas-hw” on maas-hw
    Looking for packaged Juju agent version 2.2.6 for amd64
    Launching controller instance(s) on maas-hw…
    – /MAAS/api/1.0/nodes/node-08412b46-cb22-11e7-b610-0cc47a3a4136/ (arch=amd64 mem=128G cores=48)
    Fetching Juju GUI 2.10.2
    Waiting for address
    Attempting to connect to 192.14.0.102:22

    And it gets stuck at the connection phase where 192.14.0.102 is the node (mentioned using the --to directive) on which I am trying to bootstrap maas-hw. Running “juju status” and other suggestions like “juju debug-log -i unit-pinger-0 --replay --no-tail -l ERROR” and “juju run --unit pinger/0 -- ‘cat /var/lib/juju/agents/unit-pinger-0/charm/ping.log’” (as given on the Pinger github page) all give the same error of:

    ERROR no API addresses

    The current setup is: MAAS 1.9 running on Ubuntu 14.04, juju 2.x and four nodes running Ubunutu 16.04.
    PXE booting 14.04 on the nodes threw the “error: ‘curtin’ failed: configuring disk: sda” error, and it was explained here (https://askubuntu.com/questions/847027/maas-2-1-on-16-10-cannot-deploy-a-trusty-machine-disks-missing) that trusty did not have the drivers to access the disks, so I went with 16.04 on the nodes.

    Any suggestions to successfully bootstrap maas-hw/pinger would be really helpful.
