MAAS Setup: Deploying OpenStack on MAAS 1.9+ with Juju
This is part 3 of my new “Deploying OpenStack on MAAS 1.9+ with Juju” series. It follows up my last post, Hardware Setup: Deploying OpenStack on MAAS 1.9+ with Juju. I planned to write this post almost 2 months ago, and I know some readers were eagerly expecting it, so I apologize for the delay. We were (and still are) pretty busy working on the networking features of Juju. All of the features presented in this series of posts are available for use today, with the recent Juju 2.0.1 release (from ppa:juju/stable). In the past articles I explained the high-level OpenStack deployment plan and the hardware setup of the machines and switches. Now it’s time to install MAAS, configure the cluster controller and its managed subnets, spaces, and fabrics. Once that’s done, we can enlist and commission all 4 of the NUCs as nodes and deploy them via MAAS.
Updates
Since I published this article, MAAS 1.9 became the latest stable release, replacing 1.8 and earlier. MAAS 2.0 (and even 2.1) is out now, and it comes with a new, not-quite-backwards-compatible API 2.0. Juju 2.0.1, released less than a week ago, is the only version of Juju to support MAAS 2.0+. So the setup below assumes MAAS 1.9, but information specific to MAAS 2.0 or Juju 2.0 will be highlighted in green.
Setting up MAAS 1.9+ for Deploying OpenStack with Juju
There are 2 main ways to install MAAS on Ubuntu (14.04 Trusty Tahr or later): using the “Multiple Server install with MAAS” boot option from the Ubuntu Server installer media, or by installing a few packages with apt-get inside an existing Ubuntu installation. MAAS 2.0, on the other hand, requires at least Ubuntu 16.04 (Xenial Xerus). Both options are well described, step-by-step with screenshots, in the official MAAS documentation. There’s a slight wrinkle there though – the described steps apply to the latest stable version of MAAS available for trusty (or xenial), which supports the advanced 1.9+ network modelling. Fortunately, the installation steps are almost the same as in the nice one-pager “MAAS | Get Started”, so I’ll just list them briefly below.
Installing MAAS
- We need to prepare the machine for MAAS by installing the older Ubuntu Server LTS (14.04) on it. It should be a simple matter of downloading the ISO from http://www.ubuntu.com/download/server and burning it to a CD or, better and quicker, writing it to a bootable USB stick (using “Startup Disk Creator” in Ubuntu or any other tool).
NOTE: If you’re installing MAAS 2.0, you’ll need to install the latest Ubuntu Server LTS (16.04) instead!
- Once Ubuntu is up and running, log into the console and prepare the machine by adding the ppa:maas/stable PPA (or ppa:maas/next-proposed to get MAAS 2.0), installing OpenSSH (so we can manage it remotely) and VLAN (so we can create virtual VLAN NICs), and updating/upgrading all packages on the system:

# for both MAAS 1.9 on trusty and 2.0 on xenial:
$ sudo add-apt-repository ppa:maas/stable
$ sudo apt-get update
$ sudo apt-get install openssh-server vlan
$ sudo apt-get dist-upgrade
NOTE: If you’re not using the en_US locale (like me), you’ll also need to add the line `LC_ALL=C` to `/etc/default/locale`, otherwise some of the packages MAAS depends on (e.g. postgresql) will FAIL to install properly!
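For example, you can append the setting from a shell like this (a quick sketch – editing the file manually works just as well):

$ echo 'LC_ALL=C' | sudo tee -a /etc/default/locale
$ cat /etc/default/locale  # verify the new line is there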
- I found it’s better to first configure all network interfaces (NICs) in /etc/network/interfaces and then install MAAS, as it will discover and auto-create the subnets linked to each interface of the cluster controller. The machine needs 2 physical NICs – one for the managed nodes and one providing external access for the nodes and MAAS itself. Since the HP laptop I’m using does not have more than 1 Ethernet controller, I plugged in a USB2Ethernet adapter to provide access to the nodes’ network. We need those NICs configured like this:
- Primary physical NIC of the machine (eth0 on trusty or e.g. eno1 on xenial) is the on-board Ethernet controller, configured with a static IP from my home network (192.168.1.104/24 in my case) and uses the home WiFi router as default gateway (192.168.1.1).
- Second physical NIC (eth1 on trusty or e.g. enxxaabbccddeef0 on xenial; it’s wise to rename this to something shorter on xenial) is the USB2Ethernet adapter, configured with a static IP address (10.14.0.1/20) from the managed network.
- 7 virtual VLAN NICs on top of eth1 for all the VLANs we created earlier (eth1.30, eth1.50, eth1.99, eth1.100, eth1.150, eth1.200, eth1.250) – each of these VLAN NICs has a static IP with the same format (10.<VLAN-ID>.0.1/20 – e.g. 10.250.0.1).
- I’ve edited the /etc/network/interfaces file as root (use your favourite editor or even simply pico) on the MAAS machine and it looks like this now (on trusty): http://paste.ubuntu.com/14567573/. The iptables rules we add on eth0 up/down are to enable NAT so nodes can access the Internet.
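In case the paste is unavailable, here’s a minimal sketch of what the file contains, assuming the addressing described above – only one of the 7 VLAN NICs is shown, the rest follow the same pattern (the linked paste has the full version):

auto lo
iface lo inet loopback

# External NIC: static IP on the home network, default gateway via the WiFi router.
# The iptables rules enable NAT on eth0 so the nodes can reach the Internet.
# NAT also requires IPv4 forwarding (net.ipv4.ip_forward=1 in /etc/sysctl.conf).
auto eth0
iface eth0 inet static
    address 192.168.1.104
    netmask 255.255.255.0
    gateway 192.168.1.1
    up iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    down iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

# Internal NIC (USB2Ethernet adapter): static IP on the managed nodes network.
auto eth1
iface eth1 inet static
    address 10.14.0.1
    netmask 255.255.240.0

# One of the 7 VLAN NICs on top of eth1 (same pattern for VIDs 30, 99, 100, 150, 200, 250).
auto eth1.50
iface eth1.50 inet static
    address 10.50.0.1
    netmask 255.255.240.0
    vlan-raw-device eth1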
- Reboot the machine (both to pick up any kernel updates that might have happened during the `apt-get upgrade / dist-upgrade` call earlier, and to make sure all NICs come up in the right order).
- Now let’s install the needed MAAS packages:
$ sudo apt-get install maas maas-dns maas-dhcp maas-proxy
NOTE: When asked for the Ubuntu MAAS API address, double check the detected URL uses eth0’s (external) IP address: `http://192.168.1.104/MAAS/`. You can later change this by running:
# for MAAS 1.9 on trusty:
$ sudo dpkg-reconfigure maas-cluster-controller
# for MAAS 2.0 on xenial:
$ sudo dpkg-reconfigure maas-rack-controller
Also, double check that running
$ sudo dpkg-reconfigure maas-region-controller
shows the IP address of eth1 (the managed NIC); if not, set it to 10.14.0.1!
- Create an admin user (I used “root” for username):
# for MAAS 1.9 on trusty:
$ sudo maas-region-admin createadmin
# for MAAS 2.0 on xenial:
$ sudo maas-region createsuperuser
- You should be able to access the MAAS web UI at http://192.168.1.104/MAAS/ now. Log in with the admin username and password you’ve just created.
- While a lot of the following configuration steps can be done from the web UI, a few important ones can only be done via the MAAS CLI client, so let’s install it now (on the client machine you use to access MAAS – e.g. your laptop). You’ll need the MAAS API key for the admin user – copy it from the UI’s top-right menu > Account (or go to http://192.168.1.104/MAAS/account/prefs/). Alternatively, from inside the MAAS server you can run
# for MAAS 1.9 on trusty:
$ sudo maas-region-admin apikey --username root
# for MAAS 2.0 on xenial:
$ sudo maas-region apikey --username root
to get it (assuming the admin user you created is called “root”).
- Once you have the API key run these commands:
$ sudo apt-get install maas-cli
# for MAAS 1.9 on trusty:
$ maas login <profile> http://192.168.1.104/MAAS/api/1.0/ '<key>'
# for MAAS 2.0 on xenial:
$ maas login <profile> http://192.168.1.104/MAAS/api/2.0/ '<key>'
Pick a meaningful name for <profile> (e.g. I use “19-root”, or “20-root” for MAAS 2.0, as I run multiple versions of MAAS with multiple users created on them, so I’ll use `$ maas login 19-root http://…`). Replace ‘<key>’ above with the string you copied earlier from the Account web UI page (it’s a long string containing 3 parts separated by colons, e.g. ‘2WAF3wT9tHNEtTa9kV:A9CWR2ytFHwkN2mxN9:fTnk723tTFcV8xCUpTf85RfQLTeNcX7C’). You should be able to use the CLI after this – to test, try running `version read`; you should see something like this:
$ maas 19-root version read
Success.
Machine-readable output follows:
{
    "subversion": "trusty1",
    "version": "1.9.4+bzr4533-0ubuntu1",
    "capabilities": [
        "networks-management",
        "static-ipaddresses",
        "ipv6-deployment-ubuntu",
        "devices-management",
        "storage-deployment-ubuntu",
        "network-deployment-ubuntu"
    ]
}
The MAAS UI may complain that there are no boot images imported yet, but that’s fine – we’ll get to that once we need to add the NUCs as nodes.
Configuring Cluster (Rack) Controller Interfaces
Now that we have MAAS up and running, it’s time to configure the cluster controller (a.k.a. rack controller in MAAS 2.0) interfaces before we continue with the rest (zones, fabrics, spaces, subnets). Either from the web UI (as outlined in the Get Started quick guide) or from the CLI, we need to update all cluster controller interfaces so that eth1 and all VLAN NICs on it are managed for DNS and DHCP, and have a default gateway and both DHCP and static ranges set. Here’s a screenshot of how it looks after we’re done (for MAAS 1.9 on trusty):
In MAAS 2.0 the same information can be found on the Nodes page -> Controllers (1), by clicking on the only controller and checking the similar Interfaces section at the bottom of the page.
To achieve this using the CLI, run the following commands:
- Get the UUID of the controller, e.g. `5d5085c8-34fe-4f86-a338-0450a49bf698` (in MAAS 2.0 the equivalent ID would be e.g. `4y3h7n`):
# for MAAS 1.9 on trusty:
$ maas 19-root node-groups list | grep uuid
# for MAAS 2.0 on xenial:
$ maas 20-root rack-controllers read | grep system_id
NOTE: To configure rack controller interfaces in MAAS 2.0, you can use the same CLI commands used for regular machines (a.k.a. nodes). We will discuss configuring nodes’ networking in more detail in the next post. However, MAAS 2.0 auto-detection of controller interfaces (and their subnets and VLANs) works better than in MAAS 1.9, so the steps below can be skipped for MAAS 2.0, provided you edit the /etc/network/interfaces on the controller to include all interfaces you want to manage, and then reboot to let MAAS detect the changes. Here is a paste of the /etc/network/interfaces contents I used for MAAS 2.0 (remember to apt-get install vlan to ensure the VLAN interfaces work): http://paste.ubuntu.com/16189887/. The next few steps only apply to MAAS 1.9 and earlier.
- Update the external NIC eth0 to be unmanaged and set the default gateway:
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth0 ip=192.168.1.104 interface=eth0 management=0 subnet_mask=255.255.255.0 \
  router_ip=192.168.1.1
- Update the internal NIC eth1, used to control the nodes, to be managed and have both DHCP (10.14.0.40-10.14.0.90) and static IP (10.14.0.100-10.14.1.200) ranges set:
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1 ip=10.14.0.1 interface=eth1 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.14.0.40 ip_range_high=10.14.0.90 \
  static_ip_range_low=10.14.0.100 static_ip_range_high=10.14.1.200
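For reference, the management values used throughout these commands are MAAS 1.9 constants:

# management=0 – interface is unmanaged by MAAS
# management=1 – MAAS manages DHCP only
# management=2 – MAAS manages both DHCP and DNS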
- Update the eth1.99 VLAN NIC – it needs to be unmanaged, as it will be used by OpenStack Neutron Gateway to provide DHCP for OpenStack guest instances:
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.99 ip=10.99.0.1 interface=eth1.99 management=0 subnet_mask=255.255.240.0 \
  ip_range_low=10.99.0.40 ip_range_high=10.99.0.90 \
  static_ip_range_low=10.99.0.100 static_ip_range_high=10.99.1.200
- Update all remaining VLAN NICs the same way (DHCP and static IP ranges, default gateway, managed):
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.30 ip=10.30.0.1 interface=eth1.30 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.30.0.40 ip_range_high=10.30.0.90 \
  static_ip_range_low=10.30.0.100 static_ip_range_high=10.30.1.200
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.50 ip=10.50.0.1 interface=eth1.50 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.50.0.40 ip_range_high=10.50.0.90 \
  static_ip_range_low=10.50.0.100 static_ip_range_high=10.50.1.200
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.100 ip=10.100.0.1 interface=eth1.100 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.100.0.40 ip_range_high=10.100.0.90 \
  static_ip_range_low=10.100.0.100 static_ip_range_high=10.100.1.200
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.150 ip=10.150.0.1 interface=eth1.150 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.150.0.40 ip_range_high=10.150.0.90 \
  static_ip_range_low=10.150.0.100 static_ip_range_high=10.150.1.200
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.200 ip=10.200.0.1 interface=eth1.200 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.200.0.40 ip_range_high=10.200.0.90 \
  static_ip_range_low=10.200.0.100 static_ip_range_high=10.200.1.200
$ maas 19-root node-group-interface update 5d5085c8-34fe-4f86-a338-0450a49bf698 \
  eth1.250 ip=10.250.0.1 interface=eth1.250 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.250.0.40 ip_range_high=10.250.0.90 \
  static_ip_range_low=10.250.0.100 static_ip_range_high=10.250.1.200
- Verify the changes by listing all NICs of the cluster controller again:
$ maas 19-root node-group-interfaces list 5d5085c8-34fe-4f86-a338-0450a49bf698
The last command should return output similar to this one: http://paste.ubuntu.com/14575066/. For the VLAN NICs, only the interface name and the VLAN ID part of the IPs and ranges change.
Setting up Fabrics, VLANs, Spaces, and Subnets
The next step is to set up 2 MAAS fabrics: I’ve chosen “maas-external” (containing the external 192.168.1.0/24 subnet for eth0) and “maas-management” (containing everything else). By default MAAS creates one fabric per physical NIC it discovers in /etc/network/interfaces during installation. So at this point you should have fabric-0, containing an “untagged” VLAN and the external subnet linked to eth0 (192.168.1.0/24), and fabric-1, which also contains an “untagged” VLAN and as many “tagged” VLANs as discovered from /etc/network/interfaces.
NOTE: MAAS 2.0 CLI commands for fabrics, spaces, VLANs, and subnets are identical (only the profile name should differ – e.g. 19-root vs 20-root).
Running:
$ maas 19-root fabrics read
should give you output like this http://paste.ubuntu.com/14568771/. This is almost what we need, but let’s change the names of the fabrics to reflect their intended usage:
$ maas 19-root fabric update 0 name=maas-external
$ maas 19-root fabric update 1 name=maas-management
You might have noticed MAAS created a default space called space-0 and that all subnets are part of it, as you can see on the Subnets page in the UI or by running:
$ maas 19-root subnets read
This space-0 will be used when no explicit space is specified for any (new) subnet. We’ll rename it to “default” and also create all the other spaces we need for deploying OpenStack:
- Rename space-0 to default
$ maas 19-root space update 0 name=default
- unused space will contain the external subnet only
$ maas 19-root spaces create name=unused
- admin-api space will contain VLAN 150
$ maas 19-root spaces create name=admin-api
- internal-api space will contain VLAN 100
$ maas 19-root spaces create name=internal-api
- public-api space will contain VLAN 50
$ maas 19-root spaces create name=public-api
- compute-data space will contain VLAN 250
$ maas 19-root spaces create name=compute-data
- compute-external space will contain VLAN 99
$ maas 19-root spaces create name=compute-external
- storage-data space will contain VLAN 200
$ maas 19-root spaces create name=storage-data
- storage-cluster space will contain VLAN 30
$ maas 19-root spaces create name=storage-cluster
Now we can update all subnets to set a meaningful name and a default gateway for each, and also to associate them with the correct spaces. To do that we need to use the MAAS IDs for spaces, and the same goes for subnets, but fortunately there’s a neat trick we can use here: prefixed references – e.g. instead of “2” (a subnet ID) use “vlan:50” (i.e. the subnet in the VLAN with ID 50 – if there is more than one subnet in VLAN 50, it won’t work, as it won’t uniquely identify a single subnet). Another prefixed reference for subnets is, for example, “cidr:192.168.1.0/24” to select the unmanaged external subnet. We still need the space IDs, so we’ll first list them all and then copy their IDs into the subsequent commands to update each subnet. If we created the spaces in the order given above, they will have increasing IDs starting from 2, so that makes it slightly easier.
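To illustrate the prefixed references, both of these commands should read the same subnet (a quick sketch – the numeric ID 2 is just an example, check your own IDs with `subnets read` first):

$ maas 19-root subnet read 2        # by numeric subnet ID
$ maas 19-root subnet read vlan:50  # by the unique subnet in the VLAN with VID 50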
- List all spaces and get their IDs:
$ maas 19-root spaces read
- Move the unmanaged subnet of eth0 to the “unused” space and call it “maas-external”:
$ maas 19-root subnet update cidr:192.168.1.0/24 name=maas-external space=1
- Rename the managed subnet (used for PXE booting the nodes) 10.14.0.0/20 to “maas-management”:
$ maas 19-root subnet update cidr:10.14.0.0/20 name=maas-management
- Move all VLAN subnets to their respective spaces, set a name and default gateway for each:
$ maas 19-root subnet update vlan:150 name=admin-api space=2 gateway_ip=10.150.0.1
$ maas 19-root subnet update vlan:100 name=internal-api space=3 gateway_ip=10.100.0.1
$ maas 19-root subnet update vlan:50 name=public-api space=4 gateway_ip=10.50.0.1
$ maas 19-root subnet update vlan:250 name=compute-data space=5 gateway_ip=10.250.0.1
$ maas 19-root subnet update vlan:99 name=compute-external space=6 gateway_ip=10.99.0.1
$ maas 19-root subnet update vlan:200 name=storage-data space=7 gateway_ip=10.200.0.1
$ maas 19-root subnet update vlan:30 name=storage-cluster space=8 gateway_ip=10.30.0.1
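As before, it’s worth verifying the result (names, spaces, and gateways) by listing all subnets again:

$ maas 19-root subnets read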
MAAS 2.0 specific steps
NOTE: These steps assume you’ve pre-populated the /etc/network/interfaces of the rack controller as suggested earlier, so the fabrics, VLANs, and subnets are auto-detected and created by MAAS. We only need to set up DHCP for the managed subnets.
- Create dynamic IP ranges for all managed subnets (i.e. all except maas-external 192.168.1.0/24 and compute-external 10.99.0.0/20). No need to make the ranges large – we’ll use 10.X.0.40-10.X.0.90 (where X is the subnet’s VLAN VID, or 14 for 10.14.0.0/20):
$ maas 20-root ipranges create type=dynamic start_ip=10.14.0.40 end_ip=10.14.0.90
$ maas 20-root ipranges create type=dynamic start_ip=10.30.0.40 end_ip=10.30.0.90
$ maas 20-root ipranges create type=dynamic start_ip=10.50.0.40 end_ip=10.50.0.90
$ maas 20-root ipranges create type=dynamic start_ip=10.100.0.40 end_ip=10.100.0.90
$ maas 20-root ipranges create type=dynamic start_ip=10.150.0.40 end_ip=10.150.0.90
$ maas 20-root ipranges create type=dynamic start_ip=10.200.0.40 end_ip=10.200.0.90
$ maas 20-root ipranges create type=dynamic start_ip=10.250.0.40 end_ip=10.250.0.90
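To double-check the ranges were created as expected, you can list them all:

$ maas 20-root ipranges read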
- Enable DHCP (using the created IP ranges) on all VLANs of the maas-management fabric, except compute-external (VID: 99). We need to pass the rack controller ID explicitly (get it from `$ maas 20-root rack-controllers read | grep system_id`):
$ maas 20-root vlan update 1 0 name=maas-management dhcp_on=True primary_rack=4y3h7n
$ maas 20-root vlan update 1 30 name=storage-cluster dhcp_on=True primary_rack=4y3h7n
$ maas 20-root vlan update 1 50 name=public-api dhcp_on=True primary_rack=4y3h7n
$ maas 20-root vlan update 1 100 name=internal-api dhcp_on=True primary_rack=4y3h7n
$ maas 20-root vlan update 1 150 name=admin-api dhcp_on=True primary_rack=4y3h7n
$ maas 20-root vlan update 1 200 name=storage-data dhcp_on=True primary_rack=4y3h7n
$ maas 20-root vlan update 1 250 name=compute-data dhcp_on=True primary_rack=4y3h7n
# Rename the unmanaged VLAN with VID:99 to compute-external (optional; for readability):
$ maas 20-root vlan update 1 99 name=compute-external
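To verify DHCP is now enabled where expected, you can list all VLANs of the maas-management fabric (fabric ID 1, matching the commands above):

$ maas 20-root vlans read 1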
After those commands your Subnets page in the MAAS UI should look like the following screenshot (in MAAS 1.9, for MAAS 2.0 look at the Networks page for the same information, albeit displayed slightly differently):
Importing Boot Images and Next Steps
We’re almost ready to use our new MAAS. Three more steps remain:
- Importing boot images to use for deployments of nodes
- Enlisting all 4 NUCs as nodes.
- Accepting and commissioning all nodes.
The first step can be done from the UI or the CLI. We’ll only need amd64 Ubuntu 14.04 LTS (Trusty) and Ubuntu 16.04 LTS (Xenial) images, for both MAAS 1.9 and 2.0. Go to the web UI “Images” page, check “14.04 LTS” and “16.04 LTS” for the Ubuntu release and “amd64” for the architecture, then click “Import images”. Sit back and wait – with a reasonably fast Internet connection it should take only a few minutes (less than 800 MB download for the 14.04 amd64 image).
Alternatively, with the CLI you can run (same command for both MAAS 1.9 and MAAS 2.0):
$ maas 19-root boot-resources import
No need to change the boot image selections, as 14.04/amd64 is selected by default (in MAAS 1.9; the default is 16.04 in MAAS 2.0). You can watch as the UI auto-updates during the 2 phases – region and cluster import. When done, the UI should look like this (NOTE: at the time of writing, 16.04 LTS was not yet out, so the screenshot is somewhat outdated):
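You can also check the already imported images from the CLI (the exact output format varies slightly between releases):

$ maas 19-root boot-resources read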
Next Steps: Nodes Networking
I’ll stop here as the post again got too long, so if you’re still following – thanks! – and stay tuned for the next post in which I’ll describe adding the nodes to MAAS, including the Intel AMT power parameters needed by MAAS to power the nodes on and off, as well as how the node network interfaces should be configured to deploy OpenStack on them.
I’d like to thank my readers for the encouragement to continue doing these posts and for the feedback. A few mistakes in the original version of the post were fixed – Thanks Toto!
Convenient links to all articles in the series:
- Introduction
- Hardware Setup
- MAAS Setup (this one)
- Nodes Networking
- Advanced Networking
- Finale
Thank you so much for this! I’ve been working on something like this for weeks and your previous blog posts were invaluable. But I was really struggling with the MAAS setup. I can’t wait for the next post!
Thanks Mike!
I’m glad my posts were useful so far. I’ll try to finish the next part and post it this week.
Thanks so much, I’ve been waiting for this update and wouldn’t have guessed half the steps.
Also, I had to scratch everything because I tried something in the subnet configuration and couldn’t move them from one fabric to the other afterwards.
A few corrections:
– last step of configuring the interface should be “maas 19-root node-group-interfaces list $UUID”
– to list the fabrics I had to type “maas 19-root fabrics read”
– the cidr 10.10.* for the unused space wasn’t used before, probably 192.168…
Cheers Toto,
I’m happy you found it useful, and thanks for the corrections – all of those are now fixed!
Ok, so maybe I am doing something wrong. My MAAS controller has 4 NICs, which are bonded into two bonded ports, bond0 and bond1. We are using VLANs for all the networks. Compute and storage networks are running off of bond1, everything else is running off bond0. Also bond0 is set to have an IP in the untagged network for PXE boot. Bond1 is not set for any network, just for the VLANs
When I initialize the cluster controller, I do not get 2 fabrics, I get 4. I get 1 fabric for everything on bond0, then 1 new fabric for each network off bond1. How can I get MAAS to just create a fabric for each physical port?
I figured out how to fix it via the CLI, but that is tedious. I had to basically destroy all the subnets and VLANs, and recreate them in the correct fabric. Is there a way to move a VLAN or subnet from one fabric to another?
Hey Mike,
Let me first try to understand what you are trying to achieve. You have 4 NICs, the first two bonded in bond0, the last two bonded in bond1. That’s a fairly typical setup for telcos – providing redundancy of the links, and using VLANs on top of the bonds for traffic segregation, isolation, and security. With only 2 bonds and the 4 NICs part of them, you still need to provide a way for both MAAS itself and its nodes to access the Internet, unless you want a fully offline setup. I managed to replicate your setup in a fresh install of MAAS 1.9.0 inside a KVM instance with 4 NICs, all on the same libvirt isolated network (a bridge without DHCP enabled, no NAT, etc.). But I needed to set up a squid3 proxy on my machine first (for the VM to be able to download packages), and finally, as this didn’t work for e.g. add-apt-repository, I had to enable NAT on my laptop for the libvirt network, otherwise nothing apart from apt-get worked.
After some trial and error I got MAAS installed on the VM, where I used the following /etc/network/interfaces pre-configured: http://paste.ubuntu.com/14598628/ and following this to set up the two bonds: http://www.lylebackenroth.com/blog/2009/02/13/how-to-set-up-dual-nic-bonding-in-ubuntu/
Looking at the UI I can see 5 fabrics – first containing everything on bond0, and the other 4 each containing an “untagged” VLAN and one of the remaining VLANs on bond1. So I can confirm MAAS detects and creates separate fabrics for each VLAN not carried by the primary NIC (the one with the default gateway set – bond0 in my case).
I’m not aware of a way to control how many and what fabrics MAAS creates, so fixing it manually seems the way to go for now, and I’m glad my post possibly helped you to figure out how to do it. I think the behavior you expect is worthy of a bug report against MAAS in Launchpad.
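For readers wondering what the manual fix looks like, here is a rough sketch with the MAAS 1.9 CLI, using placeholder fabric/VLAN/subnet IDs (the same technique is shown step-by-step in my reply to Marco further down):

# create the tagged VLAN in the fabric where you want it, e.g. VID 100 in fabric 1:
$ maas 19-root vlans 1 create vid=100 name=my-vlan
# point the subnet at the newly created VLAN (use the id returned above):
$ maas 19-root subnet update <subnet-id> vlan=<new-vlan-id>
# finally, delete the now-empty VLAN from the old fabric (here fabric 2):
$ maas 19-root vlan delete 2 100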
Thanks for the reply, and I’m glad it’s not just me! I did figure out how to use the MAAS CLI to re-organize everything into two fabrics, one for each bond port. So that is working swimmingly now. I think I will file a bug, as that does not seem like correct behavior during initial start-up. I also wish they had a better way of editing fabrics and spaces from the web UI, but 1.9 is pretty new.
I don’t know if you’ve run into this yet, but there is a bug with regards to Juju and deploying MAAS servers that use VLANs. It seems to create a bunch of misconfigured bridge ports for the VLANs. In order to make it work I had to disable Juju’s network management, which means that it can’t deploy charms to LXC containers. Here’s the bug: https://bugs.launchpad.net/juju-core/+bug/1532167
And yes, your posts have been a huge help for me and my team in deciphering how to deploy Openstack. Looking forward to the next post!
Yeah, we recently modified the script we use to create bridge devices on MAAS-deployed nodes in Juju. We have a much better version, which correctly handles VLANs, bonds, and multiple dns-nameservers, in addition to creating a bridge for each usable NIC. It’s in a feature branch, but AIUI Juju 1.25.3 stable is likely to be released with a fix for the linked bug. The plan is otherwise to use the new and shiny bridge script by default in Juju 2.0 (alpha1 is out now).
Hi Dimiter, Nice work! I am finally getting somewhere with MAAS.
Mike and Dimiter, would you please post how to fix fabrics using the CLI? You mention that “I had to basically destroy all the subnets and VLANs, and recreate them in the correct fabric.” Would you please post the details as to how you accomplished this? Thanks!
Great work – keep em coming!
This is great stuff, really thorough!
Just something I had to do, not sure if it’s just me, but I had to enable IPv4 packet forwarding before my iptables rule would pass traffic.
hello,
yes, I had to enable ip forwarding
in /etc/sysctl.conf
net.ipv4.ip_forward=1
then reboot.
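The same setting can also be applied immediately, without a reboot:

$ sudo sysctl -w net.ipv4.ip_forward=1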
Hello,
Thank you for this project!
I’m trying to set up a simple MAAS environment to deploy OpenStack on top. In a basic setup (2 physical networks, public and private) I have no problem.
I have some problem with maas in vlan env. in your post you said:
” By default MAAS creates one fabric per physical NIC it discovers in /etc/network/interfaces during installation. So at this point you should have fabric-0 containing an “untagged” VLAN and the external subnet linked to eth0 (192.168.1.0/24) and fabric-1, which also contains an”untagged” VLAN and as many “tagged” VLANs as discovered from the /etc/network/interfaces. ”
but I have 1 space with all 3 networks (1 untagged and the 2 tagged VLANs) and 3 fabrics, one for each network.
I cannot move (maybe my fault) one VLAN into the fabric of the other VLAN.
maas version: 1.9.0+bzr4533-0ubuntu1~vivid1
spaces read: http://paste.ubuntu.com/15246725/
fabrics read: http://paste.ubuntu.com/15246713/
vid 300 private, vid 301 public
Can you help me? Maybe I missed something.
Thank you for any help, and for this great job.
Hey Marco,
Just to clarify, there is always one “untagged” VLAN (VID=0) for each fabric, which cannot be removed.
As for the single space, that’s to be expected – “space-0” is created by MAAS automatically.
You can create additional spaces and update subnets to be part of them, as described in the post.
Sadly, MAAS CLI/API does not offer a way to change the fabric of an existing VLAN.
You can rearrange the fabrics and VLANs the way you like though, using the MAAS CLI tool.
If I understand you correctly, you’re following the post’s instructions and want to have 2 fabrics instead of 3. The first fabric should have only the default untagged VLAN (as fabric-0 already does according to your paste), while the other contains both that default untagged VLAN and 2 more tagged VLANs with VID=300 and VID=301. You only need to move the 301 VLAN into fabric-1, as the other is there already. So how about (assuming your MAAS CLI profile is called “19-root”):
$ maas 19-root vlans 1 create vid=301 name=public
# create a new 301 VLAN in fabric 1; store the new VLAN’s id in say $VLAN_301_ID
$ maas 19-root vlan update 1 300 name=private
# update the existing 300 VLAN’s name
$ maas 19-root subnet update 3 vlan=$VLAN_301_ID
# update the VLAN of the public subnet 10.22.0.0/16 to the newly created VLAN 301 in fabric-1
$ maas 19-root vlan delete 2 301
# finally, delete the old VLAN 301 in fabric-2, now unused.
I hope this helps, and thanks for visiting 🙂
Thank you, now I have 2 fabrics with the correct VLANs inside.
Commissioning nodes is OK and networks are correctly configured.
Thank You for your help and keep up the good work!
I can’t wait to read next post.
Hi;
Multiple VLANs in one fabric worked for me also. Awesome. Waiting for the next posts about the Juju and OpenStack deployment.
Hello, and thank you for your great post. I’m evaluating all sorts of bare-metal provisioning, and now it’s time for MAAS.
I’m following your steps, but I’m not sure if I missed something. I used the Ubuntu Server installer with the “install MAAS server” option. Now that I have a MAAS server (a VM instance for now) with all the networks attached, I get an error when I try to use the command line to create the networks:
$ maas my-maas node-group-interface update b05e1344-6744-4150-863e-3af637eaa20b \
  eth8 ip=10.14.0.1 interface=eth8 management=2 subnet_mask=255.255.240.0 \
  ip_range_low=10.14.0.40 ip_range_high=10.14.0.90 \
  static_ip_range_low=10.14.0.100 static_ip_range_high=10.14.1.200
Not Found
What is the not found?
I’ve re-installed from scratch without choosing the MAAS server install. Then I did the apt-get install maas and I see the interfaces in the cluster tab. (I actually saw your note saying interfaces should be there before doing the MAAS install; not sure why I missed that the first time.)
Hey, I’m glad you managed to get it to work. Yeah, I found that pre-populating /etc/network/interfaces the way you like works more simply with newer versions of MAAS, as it auto-detects them.
Just a question: I seem to be struggling with a DHCP pool being full. The front end insists that the pool is at 98%, but the dhcpd.leases file shows no more than 17 leases in there. Where does MAAS read subnet utilisation from? I also removed everything and reinstalled MAAS again, and as soon as I add a subnet it goes back to being 98% full.
Hey, you can check the web UI: Networks -> click on a subnet, scroll down to Utilisation.
If your MAAS was upgraded from 1.9 to 2.0 I’d double check the reserved IP ranges were correctly migrated, and if not, remove some of them to free up available IPs for DHCP.
Hey, thanks for the quick reply. I have checked that and it shows 98% utilisation, which is not correct. I have 10 boxes on the network and have deleted a previous setup of OpenStack, and now I am starting again. My question is: how do I remove them? How do I clean up? The dhcpd.leases file has 17 IPs in there. Where is MAAS getting this utilisation from (a database of sorts? a file in the OS?), and how do I manually clean it up?
Depending on the exact MAAS 2.0 version you use (look at the lower left side of the footer in the web UI – I’m using the latest 2.0.0 (beta5+bzr5026)), you can use the CLI or UI (in beta5) to change the reserved IP ranges.
Using the CLI, I’d use `maas ipranges read` to list them and `maas iprange delete` to clean individual ranges.
Let me know if it works.
OK, I am using MAAS 1.9.2 and I don’t see the ipranges command in the CLI. I do see ipaddresses; if I run maas ipaddresses I see other options like read, release, and reserve. If I read, I get an empty result, but it is successful. Then release seems to want a specific IP; I would like to release a whole /23. I guess I could go up to release 2.0 if need be.
OK, so for 1.9.2 things should be a bit simpler (no need to upgrade). Check both of these two: 1) in the web UI -> Clusters, look at the Interfaces list. You should see a picture similar to the one in ‘Configuring Cluster (Rack) Controller Interfaces’ section. 2) You can edit each interface to make sure it has both DHCP and static IP ranges like I described in the post above (smaller DHCP range, larger static range).
I suspect your static and/or DHCP ranges are misconfigured. As far as MAAS is concerned, the dhcpd.leases file is polled for changes to update the database with “observed” and “assigned” IP addresses for each node and each managed subnet. So, removing the leases file and rebooting the MAAS machine might also fix your issue.
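A minimal sketch of that last suggestion, assuming the default MAAS 1.9 leases location (double-check the path on your install before removing anything):

$ sudo rm /var/lib/maas/dhcp/dhcpd.leases
$ sudo reboot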
This is great thanks for the help. I don’t know why but only after increasing static range in DHCP the usage dropped :).
I may be missing something here, or it may not really matter, though my nodes never really finish commissioning. From the Ubuntu Autopilot instructions: “In your MAAS user’s settings (top-right corner) add your personal public SSH key so you can later log into the nodes.” Key points / questions: a) my initial user during the 14.04 install was not the same as my MAAS region admin, so how is the region admin’s SSH key created? b) can it be the initial user’s public SSH key (after ssh-keygen creation)? or c) is there some other maas-cli magic I can use to correct this? I had not seen your posts until today, but I will be eager to try the multi-zone setup across a few Juniper EX2200 switches. Using Lenovo Tinys vs NUCs, and PXE is working correctly with etherwake. Thanks in advance for the knowledge transfer via posts, and any insight you can share regarding my SSH key dilemma.
update 1: re-installed MAAS, no longer concerned about the SSH key; 4 nodes sitting in the Ready state; openstack-install / juju bootstrap hangs with 1 node deployed? Re-reading all the blog parts. Approaching the expletive phase of this build.
Sorry, I’ll try to answer by tomorrow, as I’m travelling currently.
Hi,
I am working through your very helpful guide with MAAS 1.9+ on Trusty 14.04.4 LTS,
but am seeing a different fabric layout after installing MAAS, with each VLAN/sub-interface listed as its own fabric rather than a fabric per interface.
I am using 2 bonded interfaces, so slightly different, but I would not expect this to be an issue, as the bond0 (native management) interface is detected and works for commissioning.
The only other difference is that I do not have a configuration on the bond1 root interface (native VLAN), but again this is normal security practice. An extract from my interfaces file:
auto eth0
iface eth0 inet manual
bond-master bond1
auto eth1
iface eth1 inet manual
bond-master bond1
auto bond1
iface bond1 inet manual
mtu 9000
bond-mode 802.3ad
bond-miimon 100
bond-lacp-rate 1
bond-slaves eth1 eth2
auto bond1.16
iface bond1.16 inet static
mtu 9000
address 10.AA.BB.21
netmask 255.255.255.0
vlan-raw-device bond1
auto bond1.17
iface bond1.17 inet static
mtu 9000
address 10.AA.CC.21
netmask 255.255.255.0
vlan-raw-device bond1
I was wondering if you could post your interfaces file for reference?
Cheers in advance,
James
Hi Dimiter,
First of all, I would like to say how much I appreciate this blog, as it’s one of the most insightful and detailed guides to a MAAS implementation, and I really used it for one of my deployments.
Now I have an issue here that maybe you can help me solve.
I use the same networking plan as recommended here.
However, I am unable to use the eth2 untagged MAAS management network.
While booting, I am stuck with this error and I am not sure what could be the reason I am encountering it –
url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [70/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by: [Errno 101] Network is unreachable)]
When I fall back to the MAAS external networks, everything works fine.
Kindly suggest if something can be done here.
Best Regards,
Subhranshu
The instance is querying the metadata service on an IP that is unreachable from it. Run:
$ sudo dpkg-reconfigure maas-rack-controller
and set the IP to the one on eth2.
Thanks for the very good stuff, but I am facing one issue: a node is commissioned, but its disk does not show up, whereas CPU and RAM are shown.
Kindly help me out.
Thanks in advance.
Hi Dimiter,
Very useful article as I am currently installing an OPNFV environment bundled with MaaS and Juju.
I have tried and retried my MaaS setup but get the same result… I can’t see disk information in the MaaS GUI. This is a show-stopper, as I can’t install Juju because the script I used apparently checks this kind of information.
I see in various post, forums, and Ubuntu bugs report that we are a considerable number of people having this issue.
As I am currently deploying MaaS, the Juju bootstrap VM, and the node controller into VMs, do you think that could be the problem?
Thank you
WARNING: if one renames an interface (thanks, udev, for unreadable names), then you will NOT be able to use VLANs, as they will be named “rename99” (digits may differ) in 14.04 and, perhaps, in 16.x too.
Only interfaces with “predictable” names may have configurable VLANs.
Yep, one may still use vconfig/ip to set up the VLANs, but it’s way too ugly.
That cost me a couple of hours…
Otherwise ok.