VMware Integrated Containers Networking with NSX

Recently I had the chance to work on a PoC on VMware Integrated Containers (VIC).
VIC enables you to work with Docker Containers leveraging the full feature set of vSphere (HA, DRS, vMotion etc).
The logic used in VIC is to map every single Container to a micro-VM. Having a single Container per VM provides the capability to leverage NSX Micro-Segmentation to secure Applications, and to leverage all the NSX features like Edge Gateways, Logical Routers and Logical Switches.
The official VIC documentation can be found at the following URL: https://vmware.github.io/vic-product/assets/files/html/1.1/
In the official documentation you can find an excellent starting point to understand the VIC logic and mapping to Docker constructs.

In VIC, Containers are created into a Virtual Container Host (VCH), that maps to a vSphere vApp.
The VCH vApp contains an Endpoint VM that provides the Management and Networking functions.
All the Container micro-VMs are instantiated in the scope of a VCH vApp.

This is the Networking logic in VIC. As of version 1.1, each VCH Endpoint VM can have a maximum of 3 Network Interfaces.

Public Network: The network that container VMs use to connect to the internet. Ports that containers expose with docker create -p when connected to the default bridge network are made available on the public interface of the VCH endpoint VM via network address translation (NAT), so that containers can publish network services.

Bridge Network: The network or networks that container VMs use to communicate with each other. Each VCH requires a unique bridge network. The bridge network is a port group on a distributed virtual switch.

Container Network: Container networks allow the vSphere administrator to make vSphere networks directly available to containers. This is done during deployment of a VCH by providing a mapping of the vSphere network name to an alias that is used inside the VCH endpoint VM. You can share one network alias between multiple containers.
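For reference, this mapping is passed to vic-machine at VCH creation time through the --container-network option, using the syntax <vSphere network name>:<alias>; the names below are just placeholders:

    --container-network <vSphere-PortGroup-Name>:<alias>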

For the scope of this PoC, I’ve designed the following Architecture:

The main goals I want to achieve are the following:

  • Leverage NSX Logical Switches for Container Networking (this opens a scenario of easy integration between Containers and “classic” VMs leveraging the NSX Logical Router, for example to connect an Application Server to a DB Server);
    • A Logical Switch for the Bridge Network;
    • A Logical Switch for the External (Containers) Network;
  • Leverage NSX DFW for Micro-Segmentation between Containers;
  • Leverage NSX Edge Gateway to protect Containers instantiated on an External Network and public facing. The Edge Gateway provides the following Services:
    • Firewall for North/South traffic;
    • NAT;
    • DHCP.

In my scenario, I have the Public Network accessed internally by Developers and the External Network accessed by Consumers.
The Public Network is not protected by an Edge Gateway and leverages the native Docker networking: Containers attached to the Bridge Network are NAT’d to the Public Network by the Virtual Container Host Endpoint VM.
The External Network, where Containers can be directly attached bypassing the VCH network stack, is protected by an Edge Gateway.

Before starting the installation, I’ve created the required PortGroups on my Distributed Switch, shown in the following screenshot.
You can see two “standard” PortGroups backed by VLANs, Docker-Public and External-Consumer, and two PortGroups corresponding to two NSX Logical Switches backed by VXLAN. As of version 1.1.1 there is no native integration with NSX yet: you need to use the vSphere PortGroup name, instead of the NSX native Logical Switch name, to instruct VIC to use Logical Switches.

You start the installation of VIC by deploying a Virtual Appliance, provided in the OVA format.
You can see that I’ve created two Resource Pools in my Cluster, the first used for Management workloads, the second used to host Containers. With this configuration I’m showing that Container VMs and “standard” vSphere VMs can coexist; this is a fully supported configuration. You can leverage different Resource Pools to create different Virtual Container Hosts. Each VCH can then be managed by a different developer team with specific Resources, Projects, Registries and Users.

I’ve deployed the VIC vApp, in my case named VIC 1.1.1, in my Management Resource Pool because the vApp is used to manage the overall VIC installation.
The vApp is based on VMware Photon OS and provides the VIC Engine, the Container Management Portal (aka Admiral) and the Registry Management Portal (aka Harbor).
From the VIC Management Appliance command line, I’ve used the vic-machine-linux command (the Linux build of vic-machine) with the create option to create a new VCH.
The command used to create a VCH accepts all the parameters to configure the Bridge, Public, Management, Client and Container Networks to be used for this specific VCH. The Bridge Network must be dedicated to this specific VCH.
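As a rough sketch of what the create command looks like in my lab (the vCenter address, credentials, datastore, Resource Pool name, bridge PortGroup name, Public Network gateway and container network alias are placeholders or assumptions; adapt them to your environment):

    vic-machine-linux create \
      --target 'vcsa.corp.local/Datacenter' \
      --user 'administrator@vsphere.local' \
      --thumbprint '<vCenter certificate thumbprint>' \
      --name vch1 \
      --compute-resource Containers \
      --image-store '<datastore name>' \
      --bridge-network VIC-Bridge \
      --public-network Docker-Public \
      --public-network-ip 192.168.110.180/24 \
      --public-network-gateway 192.168.110.1 \
      --container-network VIC-Container-Network:container-net \
      --no-tlsverify

Note that no IP range or gateway is specified for the container network, so Containers attached to it will rely on DHCP, which in this design is provided by the NSX Edge Gateway.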

After all the checks, the deployment and configuration of the new VCH is performed automatically, based on the provided command line parameters.
I’ve chosen to attach my VCH endpoint to the Public Network with the IP address 192.168.110.180/24.
The Bridge Interface IP address is automatically assigned and defaults to 172.16.0.1/16. An internal IPAM manages the IP address assignment for all Containers connected to the Bridge Network, with a DHCP scope on the network 172.16.0.0/16.
There’s a check made during the VCH installation process regarding the ports that need to be open for communication between the VCH and the ESXi Hosts in the Cluster. You can use the “vic-machine-<OS> update firewall” command to open the required ports on all ESXi Hosts. See here for detailed instructions: https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/open_ports_on_hosts.html
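A hedged example of the command, reusing the same placeholder target and credentials as in the create sketch (the --compute-resource value, here assumed to be the Cluster or Resource Pool hosting the VCH, scopes which ESXi Hosts are updated):

    vic-machine-linux update firewall --allow \
      --target 'vcsa.corp.local/Datacenter' \
      --user 'administrator@vsphere.local' \
      --thumbprint '<vCenter certificate thumbprint>' \
      --compute-resource '<Cluster name>'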
The output of the installation process provides all the information you need to interact with the VCH: Admin Portal URL, published ports and the values to be set as environment variables to manage your VCH with the Docker command line.
You must run the export command with the provided information; the validity of the environment variables can then be checked with the “docker info” command.
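In my case, with a VCH deployed with --no-tlsverify, the check looks roughly like this (the exact variables and flags to use are printed at the end of vic-machine create; 2376 is the default VCH port when TLS is enabled):

    export DOCKER_HOST=tcp://192.168.110.180:2376
    docker --tls info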

You can use the command “docker network ls” to list the available networks for Containers.
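The output looks roughly like the following: the bridge network is always present, while the container network shows up under the alias defined at VCH creation time (here the assumed alias container-net) with the external driver:

    $ docker --tls network ls
    NETWORK ID          NAME                DRIVER
    <id>                bridge              bridge
    <id>                container-net       external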

In the case you need to delete a specific VCH, you can use the command “vic-machine-<OS> delete” with the appropriate parameters.
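For example, with the same placeholders as before (you can add --force to remove the VCH even if it still contains running Container VMs):

    vic-machine-linux delete \
      --target 'vcsa.corp.local/Datacenter' \
      --user 'administrator@vsphere.local' \
      --thumbprint '<vCenter certificate thumbprint>' \
      --name vch1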

After the creation of the first VCH, it can be added to the VIC Management Portal. From the Portal home page, choose “ADD A HOST“.

You need to provide the URL to reach the VCH, the Host type choosing between VCH (VIC based) and Docker (standard Docker), and the credentials to connect to the Host.

After the parameters validation, you can choose “ADD” to add the VCH to the Management Portal.

After you add the VCH to the Admiral Portal, you can see it in the Management/Hosts section. An overview of the VCH status is provided in the dashboard.

With at least one VCH created, you can start to provision Containers. Enter the Containers section in the Portal and choose “CREATE CONTAINER“.

A default Registry is available, pointing to the public Docker Hub, which hosts standard Docker images available to be fetched and deployed. From the available images, my choice is to deploy an Nginx Web Server.

In the Network section of the provisioning configuration, my first scenario uses Bridge as the choice for Network Mode, configuring a binding from Port 80 (http) of the VCH Endpoint to Port 80 of the Nginx Web Server. With this configuration, developers will be able to point to the VCH Endpoint IP address 192.168.110.180 to reach the Nginx Web Server. The VCH will automatically provide the needed NAT to the Container IP on the Bridge Network.
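The same result can be obtained from the Docker command line pointed at the VCH; a minimal sketch, assuming the environment variables exported earlier and an arbitrary container name:

    docker --tls run -d --name nginx-bridge -p 80:80 nginx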

Container provisioning can be started with the “PROVISION” button.
On the right side of the Management Portal you can open the “REQUESTS” tab to look at the progress of the deployment process.

The “FINISHED” message informs you that the Container creation has completed.

The newly created Container is now available in the Container section of the Portal, with connection details provided in the dashboard.

You have the capability to manage the Container with four specific buttons: Details, Stop, Remove, Scale.

Entering the details of the Container, you can see the CPU usage, the Memory usage and the Properties.
In the Network Address row, you can see that Bridge is the chosen Network Mode and that 172.16.0.2 is the IP address automatically assigned to the Container VM.
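The same information can be cross-checked from the Docker CLI with docker inspect, whose NetworkSettings section reports the bridge network and the 172.16.0.2 address (nginx-bridge is the container name assumed in the earlier run example):

    docker --tls inspect nginx-bridge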

Accessing the address of the VCH via http on Port 80 on the Public Network shows that the configuration is correct: the Nginx Home Page is displayed as expected.

I want now to deploy a second Nginx Container, this time attached to the Container Network instead of the Bridge Network.

I could do this using the Command Line, but I prefer to follow the UI way: I access the list of available templates, choose the official Nginx and select the arrow on the “PROVISION” button to access the “Enter additional info” section.

From here, I choose to save the Nginx template to create a customized version that can subsequently be automatically deployed on the Container Network.

You can edit the new template using the “Edit” button. This brings you to the “Edit Container Definition” page.

Once in the “Edit Container Definition” page, you must enter the “Network” section. In this section you have the chance to add specific Networks that Containers can be directly attached to, bypassing the VCH network stack. You can add a new network by choosing “Add Network” in the “Network” parameter.

Here you can choose an existing network, in my case the NSX Logical Switch that I named Container Network.
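For reference, the Docker CLI equivalent of attaching a Container directly to this network is along these lines (container-net is the alias assumed at VCH creation and nginx-external an arbitrary name; depending on the VIC version you may also need to publish the port with -p 80:80):

    docker --tls run -d --name nginx-external --net container-net nginx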

I change the template name to make it unique and save it.

Back in the “Edit Template” section you can graphically see that Containers instantiated from this template will be attached to the VIC-Container-Network NSX Logical Switch.

In the Templates view in the default Registry you can now find the customized Nginx and provision a new Container from it using the “PROVISION” button.

At the end of the provisioning process, the new Nginx Container can be found in the Containers section beside the previously deployed Nginx. The difference between the two Containers is that the first has a standard Bridge Network connection, the second is attached to an external Network, as highlighted in the following screenshot.

Looking at the vSphere Web Client, you can see three VMs deployed in the vch1 vApp (the Virtual Container Host): vch1 is the Container Endpoint, the other two VMs are the two deployed Containers with Nginx.
The highlighted Custom_nginx-mcm430… is the Container VM attached to the NSX Logical Switch VIC-Container-Network. The IP address 10.10.10.10 has been assigned by the Edge Gateway providing the DHCP Service for the Container Network.

Based on the expected Architecture shown at the beginning of the article, I’ve already configured the Edge Gateway with the appropriate NAT and Firewall configuration to publish Services delivered by Containers.
The External, consumer facing interface of the Edge Gateway is configured with the 192.168.140.152 IP Address and has a DNAT configured to expose the Nginx Web Server deployed on the Container Network.
Accessing http://192.168.140.152 correctly exposes the Nginx Home Page.
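A quick check from a consumer-side machine, for example with curl, should return the Nginx welcome page through the Edge DNAT:

    curl http://192.168.140.152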


Some useful commands you may need to use:
“vic-machine-<OS> ls” lists all the deployed VCHs, giving you the ID you need as input for additional vic-machine commands.

An example of a command that needs this ID is “vic-machine-<OS> debug”.
This command can be used to enable SSH access to the VCH Endpoint VM and to set the root password on it.
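A hedged sketch of both commands, with the same placeholder target and credentials used earlier (the --id value comes from the ls output):

    vic-machine-linux ls \
      --target 'vcsa.corp.local/Datacenter' \
      --user 'administrator@vsphere.local' \
      --thumbprint '<vCenter certificate thumbprint>'

    vic-machine-linux debug \
      --target 'vcsa.corp.local/Datacenter' \
      --user 'administrator@vsphere.local' \
      --thumbprint '<vCenter certificate thumbprint>' \
      --id '<VCH ID from ls>' \
      --enable-ssh \
      --rootpw '<new root password>'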


Edge Gateway Configuration.

A simple DNAT configuration. The first rule provides DNAT for the first created Custom_Nginx (the one attached to the Container Network).
The second rule is pre-provisioned for the next Container I’ll deploy, with a default configuration on a different Port (8080) of the same Edge Gateway external IP address that DNATs to the next allocated internal IP address on Port 80 on the Container Network.

I’ve deployed a second Custom Nginx on the Container Network, reachable by pointing to Port 8080 of the Edge Gateway as per the DNAT configuration.

Containers Micro-Segmentation: