VMware Cloud on AWS – Let’s create our first VMware SDDC on AWS!

VMware Cloud on AWS quick overview

–Edited in January 2018 to align with some changes in the Service–

VMware Cloud on AWS was released two years ago and has received a lot of very positive feedback from customers.
There are tons of official and unofficial blog posts out there explaining what the VMware Cloud on AWS service is, the advantages for customers and all the use cases, so I’ll give you just a quick overview:
VMware Cloud on AWS is a unified SDDC platform that integrates VMware vSphere, VMware vSAN and VMware NSX virtualization technologies, and provides access to the broad range of AWS services, together with the functionality, elasticity, and security customers have come to expect from the AWS Cloud.
It integrates VMware’s flagship compute, storage, and network virtualization products (vSphere, vSAN and NSX) along with vCenter management, and optimizes them to run on next-generation, elastic, bare-metal AWS infrastructure.
The result is a complete turn-key solution that works seamlessly with both on-premises vSphere based private clouds and advanced AWS services.
The service is sold, delivered, operated and supported by VMware. The service is delivered over multiple releases with increasing use-cases, capabilities, and regions.


SDDC Creation Steps

The first step is to connect to the VMC on AWS console at the following URL: https://vmc.vmware.com/console/
The landing page provides an overview of the available SDDCs (if any).


Create SDDC

To create a new SDDC, we have to click on the “Create SDDC” button.

SDDC Properties

The SDDC creation wizard starts: we must choose an AWS Region that will host the SDDC, give the SDDC a unique name, and select the number of ESXi Hosts our Cluster will be made of. The minimum number of Hosts for a production Cluster is 3. You can create a 1-node Cluster for test and demo purposes; this single-Host Cluster expires after 30 days and can be converted to a full SDDC before expiration.
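For reference, the same SDDC creation can also be driven through the VMC REST API. The call below is only a rough sketch: the organization ID and token are placeholders, and the exact payload fields may vary between service releases, so check the API reference before relying on it.

curl -X POST "https://vmc.vmware.com/vmc/api/orgs/<org-id>/sddcs" \
  -H "csp-auth-token: <access-token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "My-First-SDDC", "provider": "AWS", "region": "US_WEST_2", "num_hosts": 3}'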


Stretched Cluster option

When we select Multi-Host for a production deployment, we can choose to have our SDDC (vSphere Cluster) hosted in a single AWS Availability Zone (one subnet) or distributed across two AZs on two different subnets (vSphere Stretched Cluster).


Connect AWS Account

The next step in the wizard is to choose an AWS Account that will be connected to the VMware Cloud account. This enables us to choose the VPC and Availability Zone(s) where we want our SDDC to be hosted. If we use native AWS Services, these will be billed to this AWS Account.


Choose VPC and Subnet (Availability Zone)

In the next step we must choose the VPC and Subnet that will host our SDDC.


Management Subnet CIDR

The final step of the wizard is to choose a CIDR for the Management Network. This step is optional and you can keep the default, as long as the default CIDR doesn’t overlap with any network that will connect to the SDDC (e.g. an on-premises network that will connect to the SDDC through a VPN connection). We can now deploy the SDDC.


Check SDDC creation progress

The progress window is displayed. As you can see, our 4-node SDDC will be ready in less than 2 hours!


New SDDC deployed

Once deployed, we’ll be able to see our brand new SDDC under the SDDCs tab in the console.


SDDC details

Clicking on “VIEW DETAILS” we can access the SDDC Summary and all the available options such as adding and removing Hosts from the Cluster or accessing the network configuration.


Add a new Host

Let’s add a new Host to our SDDC. It’s as simple as clicking “ADD HOST”. If this new Host is only needed to handle a burst in compute demand, we can simply remove it when it is no longer needed, and we’ll be charged for the additional capacity only for the time frame the additional Host existed.


Specify number of Hosts to add

We can specify how many Hosts we want to add, up to the maximum supported size of 16 Hosts per Cluster.


New Host(s) addition task progress

We’ll see a task in progress for the new Host addition to the Cluster.


Expanded SDDC

After a few minutes, we’ll have our SDDC made of 5 Hosts.


Manage SDDC Networking

Once we have our SDDC in place, we’ll need to manage it remotely and configure firewall and NAT rules to publish services. This is done in the Network tab. When we enter the network configuration tab, the first thing we see is a very nice diagram that highlights the network and security configuration of our SDDC.
Here we can see an overview of the Management and Compute Gateway configuration and any VPN or Firewall rule we have in place.

Management Gateway

Scrolling down, we find the Management Gateway section, where we can create and manage IPsec VPNs and Firewall rules to/from the Management Network.
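As an illustration (the source network below is a placeholder for your own environment), a typical first rule on the Management Gateway allows HTTPS from the on-premises management network to vCenter, so the vSphere Client becomes reachable once the VPN is up:

Source: <on-premises management CIDR>  |  Destination: vCenter  |  Service: HTTPS (TCP 443)  |  Action: Allow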


Compute Gateway

Under the Compute Gateway section we can create and manage IPsec VPNs, L2VPNs, Firewall Rules and NAT to/from the Compute Networks, where our workloads reside.


Direct Connect

The last section we find under the Network tab is the Direct Connect section. Here we can manage the Virtual Interfaces (VIFs) in case we have a Direct Connect in place to connect our SDDC with an on-premises or Service Provider hosted environment.


Tech Support real-time CHAT

In the bottom right corner of the console you can always find the Chat button. This is a fantastic feature that enables you to have real-time support from VMware Technical Support.


SDDC Add Ons

In the Add Ons tab we can manage the available add-ons to the VMware Cloud on AWS offering: Hybrid Cloud Extension and Site Recovery.
Hybrid Cloud Extension is included in the VMware Cloud on AWS offering and enables us to seamlessly migrate workloads from remote vCenters to the SDDC.
Site Recovery is a paid add-on that enables our SDDC to act as a target for Disaster Recovery from remote vCenters.


SDDC Troubleshooting

The troubleshooting tab gives us a tool to check and validate connectivity for a selected use case.


SDDC Settings

The settings tab provides us the overview of all the main settings for the SDDC.


SDDC Support

The Support tab gathers all the information we should provide to Technical Support when needed.


This concludes the creation of our first SDDC in VMware Cloud on AWS.
In a couple of hours we can have a powerful VMware full-stack SDDC deployed in AWS, enabling us to quickly respond to a lot of use cases such as Disaster Recovery, geo expansion and global scale, and capacity bursting.
Great stuff!

 

VMware Integrated Containers Networking with NSX

Recently I had the chance to work on a PoC on VMware Integrated Containers (VIC).
VIC enables you to work with Docker Containers while leveraging the full feature set of vSphere (HA, DRS, vMotion, etc.).
The logic used in VIC is to map every single Container to a micro-VM. Having a single Container in a VM provides the capability to leverage NSX Micro-Segmentation to secure Applications, and to leverage all the NSX features like Edge Gateways, Logical Routers and Logical Switches.
The official VIC documentation can be found at the following URL: https://vmware.github.io/vic-product/assets/files/html/1.1/
In the official documentation you can find an excellent starting point to understand the VIC logic and mapping to Docker constructs.

In VIC, Containers are created inside a Virtual Container Host (VCH), which maps to a vSphere vApp.
The VCH vApp contains an Endpoint VM that provides the Management and Networking functions.
All the Container micro-VMs are instantiated in the scope of a VCH vApp.

This is the Networking logic in VIC. As of version 1.1, each VCH can have a maximum of 3 Network Interfaces.

Public Network: The network that container VMs use to connect to the internet. Ports that containers expose with docker create -p when connected to the default bridge network are made available on the public interface of the VCH endpoint VM via network address translation (NAT), so that containers can publish network services.

Bridge Network: The network or networks that container VMs use to communicate with each other. Each VCH requires a unique bridge network. The bridge network is a port group on a distributed virtual switch.

Container Network: Container networks allow the vSphere administrator to make vSphere networks directly available to containers. This is done during deployment of a VCH by providing a mapping of the vSphere network name to an alias that is used inside the VCH endpoint VM. You can share one network alias between multiple containers.
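To make the NAT behavior of the Public Network concrete, this is roughly what publishing a port through the VCH endpoint looks like from a Docker client (the endpoint address, TLS options and container name below are just examples):

docker -H <vch-endpoint-ip>:2376 --tls create -p 8080:80 --name web nginx
docker -H <vch-endpoint-ip>:2376 --tls start web

Port 8080 is then exposed on the public interface of the VCH endpoint VM and NAT’d to port 80 of the container.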

For the scope of this PoC, I’ve designed the following Architecture:

The main goals I want to achieve are the following:

  • Leverage NSX Logical Switches for Containers Networking (this opens a scenario of easy integration between Containers and “classic” VMs leveraging the NSX Logical Router, for example to Connect an Application Server to a DB Server);
    • A Logical Switch for the Bridge Network;
    • A Logical Switch for the External (Containers) Network;
  • Leverage NSX DFW for Micro-Segmentation between Containers;
  • Leverage NSX Edge Gateway to protect Containers instantiated to an External Network and public facing. The Edge Gateway provides the following Services:
    • Firewall for North/South traffic;
    • NAT;
    • DHCP.

In my scenario, I have the Public Network accessed internally by Developers and the External Network accessed by Consumers.
The Public Network is not protected by an Edge Gateway and leverages the native Docker networking: Containers attached to the Bridge Network are NAT’d to the Public Network by the Virtual Container Host Endpoint VM.
The External Network, where Containers can be directly attached bypassing the VCH network stack, is protected by an Edge Gateway.

Before starting the installation, I’ve created the required PortGroups on my Distributed Switch, shown in the following screenshot.
You can see two “standard” PortGroups backed by VLANs, Docker-Public and External-Consumer, and two PortGroups corresponding to two NSX Logical Switches backed by VXLANs. As of version 1.1.1 there is no native integration with NSX yet, so you need to use the vSphere PortGroup name instead of the NSX Logical Switch name to instruct VIC to use Logical Switches.

You start the installation of VIC by deploying a Virtual Appliance, provided in the OVA format.
You can see that I’ve created two Resource Pools in my Cluster, the first used for Management workloads, the second used to host Containers. With this configuration I’m showing that Container VMs and “standard” vSphere VMs can coexist; this is a fully supported configuration. You can leverage different Resource Pools to create different Virtual Container Hosts. Each VCH can then be managed by a different developer team with specific Resources, Projects, Registries and Users.

I’ve deployed the VIC vApp, in my case named VIC 1.1.1, in my Management Resource Pool because the vApp is used to manage the overall VIC installation.
The vApp is based on VMware Photon OS and provides the VIC Engine, the Container Management Portal (aka Admiral) and the Registry Management Portal (aka Harbor).
From the VIC Management Appliance command line, I’ve used the vic-machine-linux binary with the create command to create a new VCH.
The command used to create a VCH accepts all the parameters to configure the Bridge, Public, Management, Client and Container Networks to be used for this specific VCH. The Bridge Network must be dedicated to this specific VCH.
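As a sketch of what such a create command can look like in this environment (vCenter address, credentials, datastore, gateway and the bridge PortGroup name are placeholders; the public IP and the Container Network mapping reflect the configuration described in this post):

vic-machine-linux create \
  --target <vcenter-fqdn> \
  --user administrator@vsphere.local \
  --compute-resource <containers-resource-pool> \
  --name vch1 \
  --image-store <datastore> \
  --bridge-network <bridge-logical-switch-portgroup> \
  --public-network Docker-Public \
  --public-network-ip 192.168.110.180/24 \
  --public-network-gateway <public-network-gateway> \
  --container-network VIC-Container-Network:container-net \
  --thumbprint <vcenter-thumbprint> \
  --no-tlsverify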

After all the checks, the deployment and configuration of the new VCH is performed automatically, based on the provided command line parameters.
I’ve chosen to attach my VCH endpoint to the Public Network with the IP address 192.168.110.180/24.
The Bridge Interface IP address is automatically assigned and defaults to 172.16.0.1/16. An internal IPAM manages IP address assignment for all Containers connected to the Bridge Network, with a DHCP scope on the 172.16.0.0/16 network.
During the VCH installation process there is a check on the ports that must be open for communication between the VCH and the ESXi Hosts in the Cluster. You can use the “vic-machine-<OS> update firewall” command to open the required ports on all ESXi Hosts. See here for detailed instructions: https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/open_ports_on_hosts.html
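The firewall update looks roughly like this (target, compute resource and thumbprint are placeholders):

vic-machine-linux update firewall --allow \
  --target <vcenter-fqdn> \
  --user administrator@vsphere.local \
  --compute-resource <cluster> \
  --thumbprint <vcenter-thumbprint>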
The output of the installation process provides all the information you need to interact with the VCH: Admin Portal URL, published ports and the value to be set as environment variables to manage your VCH with the Docker command line.
You must run the export command with the provided information; the validity of the environment variables can then be checked with the “docker info” command.
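With a VCH deployed with --no-tlsverify, for example, the export and the check look roughly like this (the exact values to export are printed at the end of the vic-machine create output):

export DOCKER_HOST=192.168.110.180:2376
docker --tls info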

You can use the command “docker network ls” to list the available networks for Containers.

If you need to delete a specific VCH, you can use the “vic-machine-<OS> delete” command with the appropriate parameters.
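A delete call looks roughly like this (again, target and thumbprint are placeholders):

vic-machine-linux delete \
  --target <vcenter-fqdn> \
  --user administrator@vsphere.local \
  --name vch1 \
  --thumbprint <vcenter-thumbprint>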

After the creation of the first VCH, it can be added to the VIC Management Portal. From the Portal home page, choose “ADD A HOST“.

You need to provide the URL to reach the VCH, the Host type (choosing between VCH for VIC-based hosts and Docker for standard Docker hosts), and the credentials to connect to the Host.

After the parameters validation, you can choose “ADD” to add the VCH to the Management Portal.

After you add the VCH to the Admiral Portal, you can see it in the Management/Hosts section. An overview of the VCH status is provided in the dashboard.

With at least one VCH created, you can start to provision Containers. Enter the Containers section in the Portal and choose “CREATE CONTAINER“.

A default Registry is available, pointing to the public Docker Hub which hosts standard Docker images available to be fetched and deployed. From the available images, my choice is to deploy a Nginx Web Server.

In the Network section of the provisioning configuration, my first scenario uses Bridge as the choice for Network Mode, configuring a binding from Port 80 (http) of the VCH Endpoint to Port 80 of the Nginx Web Server. With this configuration, developers will be able to point to the VCH Endpoint IP address 192.168.110.180 to reach the Nginx Web Server. The VCH will automatically provide the needed NAT to the Container IP on the Bridge Network.
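The Docker CLI equivalent of this provisioning request is roughly the following (the container name is illustrative, and the TLS options depend on how the VCH was deployed):

docker -H 192.168.110.180:2376 --tls run -d -p 80:80 --name nginx-bridge nginx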

Container provisioning can be started with the “PROVISION” button.
On the right side of the Management Portal you can open the “REQUESTS” tab to look at the progress of the deployment process.

The “FINISHED” message informs you about the Container creation completed.

The newly created Container is now available in the Container section of the Portal, with connection details provided in the dashboard.

You have the capability to manage the Container with four specific buttons: Details, Stop, Remove, Scale.

Entering the details of the Container, you can see CPU usage, Memory usage and the Properties.
In the Network Address row, you can see that Bridge is the chosen Network Mode and that 172.16.0.2 is the IP address automatically assigned to the Container VM.

Accessing the address of the VCH via HTTP on Port 80 on the Public Network shows that the configuration is correct: the Nginx home page is displayed as expected.

I want now to deploy a second Nginx Container, this time attached to the Container Network instead of the Bridge Network.

I could do this using the command line, but I prefer to follow the UI way: accessing the list of available templates, choosing the official Nginx image and selecting the arrow on the “PROVISION” button to access the “Enter additional info” section.
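For reference, the command line path would look roughly like this, using the alias the Container Network was given at VCH creation time (alias and container name are illustrative):

docker -H 192.168.110.180:2376 --tls run -d --net container-net --name nginx-external nginx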

From the “Enter additional info” section, I choose to save the Nginx template to create a customized version that can subsequently be deployed automatically on the Container Network.

You can edit the new template using the “Edit” button. This brings you to the “Edit Container Definition” page.

Once in the “Edit Container Definition” page, you must enter the “Network” section. Here you have the chance to add specific Networks to which Containers can be directly attached, bypassing the VCH network stack. You can add a new network by choosing “Add Network” in the “Network” parameter.

Here you can choose an existing network, in my case the NSX Logical Switch I named as Container Network.

I change the template name to make it unique and I save it.

Back in the “Edit Template” section you can graphically see that Containers instantiated from this template will be attached to the VIC-Container-Network NSX Logical Switch.

In the Templates view in the default Registry you can now find the customized Nginx and provision a new Container from it using the “PROVISION” button.

At the end of the provisioning process, the new Nginx Container can be found in the Containers section beside the previously deployed Nginx. The difference between the two Containers is that the first has a standard Bridge Network connection, while the second is attached to an external Network, as highlighted in the following screenshot.

Looking at the vSphere Web Client, you can see three VMs deployed in the vch1 vApp (the Virtual Container Host): vch1 is the VCH Endpoint VM, while the other two VMs are the two deployed Nginx Containers.
The highlighted Custom_nginx-mcm430… is the Container VM attached to the NSX Logical Switch VIC-Container-Network. The IP address 10.10.10.10 has been assigned by the Edge Gateway providing the DHCP Service for the Container Network.

Based on the expected Architecture shown at the beginning of the article, I’ve already configured the Edge Gateway with the appropriate NAT and Firewall configuration to publish Services delivered by Containers.
The External, consumer facing interface of the Edge Gateway is configured with the 192.168.140.152 IP Address and has a DNAT configured to expose the Nginx Web Server deployed on the Container Network.
Accessing http://192.168.140.152 correctly exposes the Nginx home page.

 

Some useful commands you may need to use:
“vic-machine-<OS> ls” lists the deployed VCHs, giving you the ID you need as input for additional commands.

An example of a command that needs this ID is “vic-machine-<OS> debug”.
This command can be used to enable SSH access to the VCH Endpoint VM and to set the root password on it.
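A debug invocation looks roughly like this (the ID comes from the ls output, everything else is a placeholder):

vic-machine-linux debug \
  --target <vcenter-fqdn> \
  --user administrator@vsphere.local \
  --id <vch-id> \
  --enable-ssh \
  --rootpw <new-root-password> \
  --thumbprint <vcenter-thumbprint>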

 

Edge Gateway Configuration

A simple DNAT configuration. The first rule provides DNAT for the first created Custom_Nginx (the one attached to the Container Network).
The second rule is pre-provisioned for the next Container I’ll deploy, with a default configuration on a different port (8080) of the same Edge Gateway external IP address that DNATs to the next allocated internal IP address on port 80 on the Container Network.
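Schematically, the two rules look like this (the translated address of the second rule is whatever IP the DHCP service assigns to the next Container, shown here as a placeholder):

Rule 1: 192.168.140.152:80   ->  10.10.10.10:80          (first Custom_nginx)
Rule 2: 192.168.140.152:8080 ->  <next container IP>:80  (pre-provisioned)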

I’ve deployed a second custom Nginx on the Container Network, reachable by pointing to port 8080 of the Edge Gateway, as per the DNAT configuration.

Containers Micro-Segmentation: