Tanzu Kubernetes Grid on VMware Cloud on AWS

A few days ago I had the opportunity to prepare a Tanzu Kubernetes Grid environment on VMware Cloud on AWS for a customer demo, and I thought it would make sense to write a blog post about it.

TKG deployment is pretty straightforward, but to speed it up even further I decided to leverage the Demo Appliance for Tanzu Kubernetes Grid, which was released a few days ago as a VMware Fling.

Kudos to William Lam for the release of this Fling, a really powerful way to speed up the installation of TKG and go straight to showcasing its power and flexibility.

For the demo I had to deliver, I decided to follow William’s demo script, provided with the Fling’s instructions, as it fits my purpose really well.

Tanzu Kubernetes Grid Overview

VMware Tanzu Kubernetes Grid is an enterprise-ready Kubernetes runtime that packages open source technologies and automation tooling to help you get up and running quickly with a scalable, multi-cluster Kubernetes environment.

In VMware Cloud on AWS, the choice has been made to support the “Tanzu Kubernetes Grid Plus” offering. The “Plus” is an add-on to a traditional TKG installation: the customer gets support for several additional open-source products, together with support from a specialized team of Customer Reliability Engineers (CRE) dedicated to supporting TKG+ customers. In this knowledge base article you can find a detailed comparison between TKG and TKG+.

For further details on the solution, you can browse to the official Tanzu Kubernetes Grid page, and read the VMware Tanzu Kubernetes Grid Plus on VMware Cloud on AWS solution brief.

To set up TKG, we leverage a bootstrap environment that can live on your laptop, in a VM, on a server, etc., and is based on kind (Kubernetes in Docker). Basically, with kind we are leveraging Kubernetes to deploy our TKG Management Cluster. Using the Demo Appliance for Tanzu Kubernetes Grid, we have all the components required to set up the bootstrap environment ready to be used, and we can simply focus on the TKG deployment itself. I really hope to see the Demo Appliance being productized and thereby becoming supported for deploying production environments.

Once the TKG Management Cluster is in place, we will leverage Cluster API through the TKG command line interface to manage the creation and lifecycle of all the TKG Workload Clusters where our applications will live. The following picture describes the concept well: we leverage Kubernetes to create a Management Cluster, and we then manage all our Workload Clusters from the Management Cluster. (Thanks again to William for the nice diagram.)

TKG on VMC conceptual architecture
TKG on VMC conceptual architecture

For the scope of this article, I’ve already satisfied the few networking and security prerequisites you need to implement in your VMC SDDC before deploying TKG.

Now, let’s go straight to the steps needed to implement TKG on VMC and see it in action: from 0 to Kubernetes on VMC in less than 30 minutes!

Setup Content Library to sync all TKG Virtual Appliance templates

Open the SDDC’s vCenter and from the “Home” screen, select “Content Libraries”.

vSphere Client - Content Library
vSphere Client – Content Library

Click on the “+” button to start the “New Content Library” wizard. In the “Name and location” window, we can enter the name of the new Content Library and click “NEXT”.

New Content Library
New Content Library

In the “Configure content library” window, check the “Subscribed content library” radio button, input the following subscription URL: “https://download3.vmware.com/software/vmw-tools/tkg-demo-appliance/cl/lib.json”, check the option to download content immediately, then click “NEXT”.

Subscribe to Content URL
Subscribe to Content URL

Accept the authenticity of the subscription host by clicking “YES”.

Trust the subscription Host
Trust the subscription Host

In the “Add storage” window, select the “WorkloadDatastore”, then click “NEXT”.

Content Library - Storage
Content Library – Storage

In the “Ready to complete” window, review the settings then click “FINISH”.

New Content Library - review settings
New Content Library – review settings

The new Content Library is now created. Select it and move to the “OVF & OVA Templates” tab, where we can wait for our templates to be downloaded.

TKG Demo Content Library
TKG Demo Content Library

Once the download is completed, right-click on the “TKG-Demo-Appliance_1.0.0.ova” and select “New VM from This Template”. This will start the new VM creation wizard.

Deploy the Demo Appliance for Tanzu Kubernetes Grid

New VM from template
New VM from template

Select a name for the new Virtual Machine, select a location (VM Folder) then click “NEXT”.

New VM - Name and Folder
New VM – Name and Folder

Select the destination Resource Pool (by default, Compute-ResourcePool in VMC) and click “NEXT”.

Select Compute Resource
Select Compute Resource

Review the details and, if everything is fine, click on “NEXT”.

New VM - Review Details
New VM – Review Details

Select “WorkloadDatastore” as the target storage, optionally choose a specific Storage Policy to apply to the new VM, then click “NEXT”.

New VM - Select Storage
New VM – Select Storage

Select the destination network for the new VM, then click “NEXT”.

New VM - Select Network
New VM – Select Network

In the “Customize template” window, configure the new VM IP address, Gateway, DNS, NTP, password, optional proxy Server, then click “NEXT”.

New VM - Customize template
New VM – Customize template

Review all the settings, then click “FINISH” to start the TKG Demo Appliance VM creation.

New VM - Ready to complete
New VM – Ready to complete

Before moving to the Management Cluster setup, we need to have in place two Virtual Machine templates that will be used to deploy the environment. Going back to the same Content Library we created before, where our TKG Demo Appliance template lives, we must deploy two new VMs using the “photon-3-capv-haproxy-v0.6.3_vmware.1” and “photon-3-v1.17.3_vmware.2” templates and, once the two VMs are created, convert them to vSphere Templates, as we’ll need these in the next steps. Once done, we are ready to proceed.

TKG Management Cluster Setup

We can locate our newly created Demo Appliance for TKG in the vCenter inventory, together with all the other VMs and Templates. We need to take note of the IP address or DNS name assigned to the TKG Demo Appliance, as we’ll access it over SSH and configure everything we need using both the tkg and kubectl CLIs.

Demo Appliance for TKG in vCenter
Demo Appliance for TKG in vCenter

Now that we have the IP address or the DNS name of the appliance, let’s connect to it via SSH. Once connected, we can start the setup of our TKG Management Cluster by typing the command “tkg init --ui” followed by Enter. As the Demo Appliance doesn’t have a UI, we’ll need to open a second session to it, setting up SSH port forwarding to be able to use our local browser to access the TKG Installer web UI.
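For reference, this is a minimal sketch of the two SSH sessions involved; the appliance address is a placeholder and the local port simply mirrors the installer’s default, so adjust both to the Fling’s instructions if they differ:

# Session 1: connect to the Demo Appliance and launch the installer
ssh root@<tkg-demo-appliance-ip>
tkg init --ui

# Session 2: forward a local port to the installer UI listening on the appliance
ssh root@<tkg-demo-appliance-ip> -L 8080:localhost:8080

With the tunnel in place, the installer becomes reachable from the local browser, as shown in the next step.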

SSH connection to Demo Appliance for TKG
SSH connection to Demo Appliance for TKG

Once port redirection is in place, we can access the TKG Installer wizard by navigating to “http://localhost:8080” on our local machine. Here, we can select the “DEPLOY ON VSPHERE” option.

TKG Installer – Deploy on vSphere

The first step is to input the vCenter Server IP or DNS name, and credentials, then click “CONNECT”.

TKG Installer - Connect to vCenter
TKG Installer – Connect to vCenter

We can safely ignore the notification about vSphere 7.0.0 and click “PROCEED”. We get this informational message because VMware Cloud on AWS, following a different release cycle than the commercial edition of vSphere, already has vSphere 7 deployed in its SDDCs.

TKG Installer - vSphere 7 notification
TKG Installer – vSphere 7 notification

The second step is to select the vSphere Datacenter inventory item where we want to deploy our TKG Management Cluster. I’m not providing a real SSH Public Key in my example, but we should provide a “real” key if we want to easily connect to any Kubernetes node once deployed. Click “NEXT”.

TKG Installer - Iaas Provider
TKG Installer – Iaas Provider

In the “Control Plane Settings”, we can choose between several instance types and sizes based on our requirements. For the goal of this demo, it’s safe to pick the “Development” (single control plane node) flavour and the “medium” instance type. Input a name for the Management Cluster and select the API Server Load Balancer template. This will be the template we’ve created before starting from the “photon-3-capv-haproxy-v0.6.3_vmware.1” image. Click “NEXT” when done.

TKG Installer - Control Plane Settings
TKG Installer – Control Plane Settings

In the next step we must select the Resource Pool, VM Folder and Datastore where the Management Cluster will be created. Then click “NEXT”.

TKG Installer - Resources
TKG Installer – Resources

Select the SDDC network that will host our TKG VMs, leave the default CIDR selected for the Cluster Service CIDR and the Cluster Pod CIDR if you don’t have any specific requirement to change the default values. Click “NEXT” when done.

TKG Installer - Kubernetes Network
TKG Installer – Kubernetes Network

Select the vSphere Template configured with the required Kubernetes version. This is the template we’ve created before starting from the “photon-3-v1.17.3_vmware.2” image we’ve imported in our Content Library. Click “NEXT” when done.

TKG Installer - OS Image
TKG Installer – OS Image

With all the required fields filled in (green check mark), we can click on “REVIEW CONFIGURATION”.

TKG Installer - Review Configuration
TKG Installer – Review Configuration

We can then click on “DEPLOY MANAGEMENT CLUSTER” to start the deployment of our TKG Management Cluster. This will take approximately 6 to 8 minutes.

TKG Installer - Start Deployment
TKG Installer – Start Deployment

After all the configuration steps are completed, we will get an “Installation complete” confirmation message. Our TKG Management Cluster is now ready to be accessed. It’s safe to close the browser window and move back to the Demo Appliance for TKG SSH session we opened previously.

TKG - Installation Complete
TKG – Installation Complete

Looking at the vCenter Inventory, we can easily see the Management Cluster VMs we’ve just deployed.

vCenter Inventory – TKG Management Cluster

Back in the SSH session, we automatically find ourselves in the context of the TKG Management Cluster. Here, leveraging the TKG command line interface, we can create and manage the lifecycle of our Workload Clusters. Let’s create our first Workload Cluster using the command “tkg create cluster --plan=<dev_or_prod> <cluster-name>”. In my example I’m using the “dev” plan and a cluster name of “it-tkg-wlc-01”. In the lower right of the screenshot you can see the VMs being deployed with the chosen name.
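Putting it together for this demo, the sequence looks roughly like the following; the cluster name and plan are just the values chosen above, and the credentials command reflects the TKG 1.0 CLI syntax, so treat it as an assumption and adapt it to your CLI version:

# Create a workload cluster using the "dev" plan (single control plane node)
tkg create cluster --plan=dev it-tkg-wlc-01

# Retrieve the kubeconfig/context for the new cluster, if the appliance doesn't switch automatically
tkg get credentials it-tkg-wlc-01

# Verify the nodes of the new workload cluster
kubectl get nodes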

TKG - Create Workload Cluster
TKG – Create Workload Cluster

Once our Kubernetes Workload Cluster is created, we automatically find ourselves in the context of the new Cluster and we can immediately start deploying our applications.

TKG - Workload Cluster Created
TKG – Workload Cluster Created

I would like to leverage the YELB application for this Kubernetes demo. To deploy the application, I start by cloning the YELB git repository to my local machine.
Then, from the directory where we cloned the git repository (in my case “~/demo/yelb”), I create a new namespace with the command “kubectl create namespace yelb”, followed by the resource creation with the command “kubectl apply -f yelb.yaml”.
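Condensed into commands, the deployment looks like this; the repository URL is a placeholder for the YELB repo referenced in the demo instructions, and the namespace handling depends on the manifest, so add “-n yelb” to the apply command if the manifest doesn’t declare the namespace itself:

# Clone the YELB repository (URL is a placeholder; use the repo from the demo instructions)
git clone <yelb-repo-url> ~/demo/yelb
cd ~/demo/yelb

# Create the namespace and deploy the application resources
kubectl create namespace yelb
kubectl apply -f yelb.yaml

# Watch the pods come up
kubectl get pods -n yelb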

YELB application resources deployed
YELB application resources deployed

As we have chosen to deploy a Kubernetes cluster with only a single control plane node and a single worker node, it’s easy to get the IP address needed to access our application, since the control plane doesn’t host running pods. With the command “kubectl get nodes -o wide” we can obtain the external IP address of our single worker node, which is certainly hosting the running yelb pods.
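A quick sketch of how to retrieve the address and test reachability (the node IP is obviously a placeholder):

# Get the external IP address of the worker node hosting the yelb pods
kubectl get nodes -o wide

# The yelb UI is exposed as a NodePort on 30001, so a quick check from any machine with access to the node:
curl -I http://<node-external-ip>:30001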

Get node external IP address
Get node external IP address

Once we have the external IP address of the node hosting the application, we can point our browser to that IP on port 30001, and here we can see our application is working and ready for the demo.

The YELB application
The YELB application

This concludes this post. We’ve seen how we can quickly deploy Tanzu Kubernetes Grid on top of our VMware Cloud on AWS infrastructure.
This gives us a very powerful platform where Virtual Machines, containers (orchestrated by Kubernetes) and native AWS services can coexist and integrate tightly as part of your application modernization strategy.

In one of the next blog posts, I’ll show you how we can leverage such an integrated solution to quickly lift and shift an application, and subsequently modernize it.

Stay tuned! #ESVR

AWS Volume Gateway integration with VMware Cloud on AWS

Cloud-based iSCSI block storage volumes for Workloads in VMware Cloud on AWS

In this follow-up article I’d like to show you again how the native integration between AWS services and VMware Cloud on AWS can provide you with a lot of powerful capabilities.

We can leverage the AWS Storage Gateway – Volume Gateway to provide our workloads with cloud-based iSCSI block storage volumes. This enables us to rethink our approach to the cloud when it comes to migrating storage and storing backups.

The most common use cases for the Volume Gateway are: Hybrid File Services, Backup to Cloud, DR to Cloud, Application migration.

Architecture and Service Description

In the following picture you can see the Architecture of the solution we are about to implement.
AWS Storage Gateway is a virtual appliance that exposes iSCSI block volumes to VMware workloads. It has historically been deployed on-premises, but now that we have VMware Cloud on AWS, we can take advantage of the high-speed, low-latency connection provided by the ENI that connects our SDDC with all the native AWS services in the Connected VPC.
AWS Volume Gateway comes in two modes: stored and cached. In stored mode, the entire data volume is available locally and asynchronously copied to the cloud. In cached mode, the entire data volume is stored in the cloud and frequently accessed portions of the data are cached locally by the Volume Gateway appliance.
By “stored in the Cloud” in this context I mean S3. Volume Gateway is a managed platform service that uses S3 on the backend, even if S3 is completely abstracted from the customer. For this reason, we’ll not be able to see which bucket the Volume Gateway uses, nor manipulate the S3 objects in any way.
If you’re looking for a solution that maps your files 1:1 with S3 objects, check out my previous blog post about the File Gateway.

Basically, with the Volume Gateway we are providing iSCSI block storage volumes backed by S3 to our workloads hosted in VMware Cloud on AWS.

This is the High Level Architecture of the solution we are implementing:

AWS Volume Gateway with VMware Cloud on AWS integration
AWS Volume Gateway with VMware Cloud on AWS integration

From a performance perspective, AWS recommends the following for its Storage Gateway appliances: https://docs.aws.amazon.com/storagegateway/latest/userguide/Performance.html#performance-fgw

From a high availability perspective, we can leverage vSphere HA to provide high availability for the Volume Gateway. You can read more about this feature here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Performance.html#vmware-ha
We’ll test vSphere HA with Volume Gateway later, during the deployment wizard.

Preliminary Steps

Get the VPC Subnet and Availability Zone where the SDDC is deployed

We need to accomplish some preliminary steps to gather information about our SDDC that we’ll need later. In addition, we need to configure some firewall rules to enable communication between our SDDC and the Connected VPC where we’ll configure our Gateway endpoint.

As a first step, we need to access our VMware Cloud Services console and access VMware Cloud on AWS.

VMware Cloud Services
VMware Cloud Services

The second step is to access our SDDC by clicking on “View Details”. Alternatively, you can click on the SDDC name.

VMware Cloud on AWS SDDC
VMware Cloud on AWS SDDC

Once in our SDDC, we need to select the “Networking & Security” tab.

SDDC details
SDDC details

In the “Networking & Security” tab, we must head to the “Connected VPC” section, where we can find the VCP subnet and AZ that we did choose upon deployment of the SDDC. Our SDDC resides there, therefore every AWS service we will configure in this same AZ will not cause us any traffic charge. We need to keep note of the VPC subnet and AZ as we’ll need this information later.

SDDC Networking & Security
SDDC Networking & Security

Create SDDC Firewall Rules

The second preliminary step is to enable bi-directional communication between our SDDC and the Connected VPC through the Compute Gateway (CGW). I’ll not go through the details of the firewall rule creation in this post, but simply highlight the result: for the sake of simplicity, in this example we have a rule allowing any kind of traffic from the Connected VPC prefixes and S3 prefixes to any destination, and vice versa. As you can see, both rules are applied to the VPC Interface, which is the cross-account ENI connecting the SDDC to the Connected VPC.
If we want to configure more granular security, we can do so by leveraging the port requirements documented in the AWS documentation here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Resource_Ports.html

Compute Gateway Firewall Rules
Compute Gateway Firewall Rules

Let’s now have a look at the actual implementation of the Volume Gateway in VMC and how it works.

Create the Storage Gateway VPC Endpoint

First, we need to access the AWS Management Console for the AWS Account linked to the VMware Cloud on AWS SDDC and select “Storage Gateway” from the AWS Services (hint: start typing in the “Find Services” field and the relevant services will be filtered for you). Make sure you are connecting to the right Region where your SDDC and Connected VPC are deployed.

AWS Management Console
AWS Management Console

If you don’t have any Storage Gateway already deployed, you will be presented with the “Get Started” page. Click on “Get Started” to create your Storage Gateway. (Hint: if you already have one or more Storage Gateways deployed, simply click on “Create Gateway” in the landing page for the service.)

AWS Storage Gateway - Getting Started Page
AWS Storage Gateway – Getting Started Page

You will be presented with the Create Gateway wizard. The first step is to choose the Gateway type. In this scenario, we are focusing on iSCSI block volumes and we will select “Volume Gateway”. We’ll additionally select “Cached Volumes” to benefit from low-latency local access to our most frequently accessed data, and then click “Next”.

Volume Gateway - Cached
Volume Gateway – Cached

The next step is to download the OVA image to be installed on our vSphere Environment in VMC. Click on “Download Image”, then click “Next”.

Download Storage Gateway Image for ESXi
Download Storage Gateway Image for ESXi

Deploy the Storage Gateway Virtual Appliance in VMware Cloud on AWS

Now that we have downloaded the ESXi image, we’ll momentarily leave the AWS Console and move to our vSphere Client to install the Storage Gateway virtual appliance. I’m assuming here that the VMware Cloud on AWS SDDC is already deployed and we have access to our vCenter in the cloud. SDDC deployment is covered in detail in one of my previous posts here.
Head to the inventory object where you want to deploy the virtual appliance (e.g. Compute-ResourcePool), right-click and select “Deploy OVF Template…”

Deploy OVF Template
Deploy OVF Template

Select the previously downloaded Virtual Appliance. This is named “aws-storage-gateway-latest.ova” at the time of this writing. Click “Next”.

Choose Storage Gateway OVA
Choose Storage Gateway OVA

Provide a name for the new Virtual Machine, then click “Next”.

Provide Virtual Machine Name
Provide Virtual Machine Name

Confirm the Compute Resource where you want to deploy the Virtual Appliance (e.g. Compute-ResourcePool). Then, click “Next”.

Select Compute Resource
Select Compute Resource

In the “Review details” page, click “Next”.

Deploy OVF Template - Review details
Deploy OVF Template – Review details

Select the Storage that will host our Virtual Appliance. In VMware Cloud on AWS this will be “WorkloadDatastore”. Click “Next”.

Workload Datastore
Workload Datastore

Select the destination network for the Virtual Appliance and click “Next”.

Destination Network
Destination Network

In the “Ready to Complete” window, click “Finish” to start the creation of the Storage Gateway Virtual Appliance.

Ready to complete
Ready to complete

We now have our Storage Gateway Appliance in the SDDC’s vCenter inventory. Let’s edit the VM to add some storage to be used for caching. To clarify, in addition to the 80 GB base VMDK, the Storage Gateway Appliance must have at least two additional VMDKs of at least 150 GB in size each, one to be used for caching and another one to be used as an upload buffer. You can see all the Storage Gateway requirements here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html
Select the Volume Gateway VM, select “ACTIONS” then “Edit Settings…”.

Storage Gateway Virtual Appliance - Edit Settings
Storage Gateway Virtual Appliance – Edit Settings

In the “Edit Settings…” window, under Virtual Hardware, add two new hard disk devices by clicking on “ADD NEW DEVICE” and selecting “Hard Disk”.

Add Hard Disks to Volume Gateway
Add Hard Disks to Volume Gateway

Select a size of at least 150 GB for each of the new disks. Then click “OK”.

Set new Hard Disk size
Set new Hard Disk size

Create VPC Endpoint for Storage Gateway

We can now switch back to the AWS Console, where we should be on the “Service Endpoint” page of the Storage Gateway deployment wizard. In case we’re still on the “Select Platform” page, we can simply click “Next”. As we want a private, direct connection between the Storage Gateway appliance and the Storage Gateway endpoint, we will select “VPC” as our Endpoint Type. Click on the “Create a VPC endpoint” button to open a new window where we can create our endpoint.
A VPC Endpoint is a direct private connection from a VPC to a native AWS Service. With a VPC Endpoint in place, we don’t need an Internet Gateway, NAT Gateway or VPN to access AWS Services from inside our VPC, and instances in the VPC do not require public IP addresses.
A VPC Endpoint for Storage Gateway is based on the PrivateLink networking feature and it is an Interface-based (ENI) Endpoint.
If you have already created a Storage Gateway Endpoint based on my previous blog post on File Gateway integration with VMware Cloud on AWS, you can skip the next steps and input directly the VPC endpoint IP address or DNS name in the “VPC endpoint” field.
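If you prefer the AWS CLI over the Console, this is roughly the call that creates the same Interface endpoint; the region and all resource IDs below are placeholders, and the service name follows the com.amazonaws.<region>.storagegateway pattern:

aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.eu-west-1.storagegateway \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0

Remember to pick the same subnet (and therefore AZ) where the SDDC is deployed, as discussed in the preliminary steps.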

Service Endpoint
Service Endpoint

In the “Create Endpoint” wizard, we have a couple of choices to make for our Storage Gateway endpoint: the Service category will be “AWS Services”, then we’ll select the same AZ and subnet where our SDDC is deployed. (Note: we could select more than one AZ and subnet for better resilience of the endpoint, but we would potentially incur cross-AZ charges, and cross-AZ resiliency of the Volume Gateway makes little sense unless we also deploy our SDDC in a Stretched Cluster configuration between two AZs.) Lastly, we can leave the default security group selected and click on “Create endpoint”.

Create Storage Gateway Endpoint
Create Storage Gateway Endpoint

Once the deployment is finished, we’ll be able to see our VPC Endpoint available in the AWS Console. You can see here that the Endpoint type is “Interface”.

VPC Endpoint in the AWS Console
VPC Endpoint in the AWS Console

Before switching back to the Volume Gateway creation wizard, we must take note of the IP address assigned to our Storage Gateway endpoint. We could use either the DNS name or the IP address to configure our Storage Gateway; I’m choosing the IP address in this example. The IP address assigned to the ENI (the endpoint) is visible in the “Subnets” tab, where one ENI is created for each subnet the VPC endpoint is attached to.
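The same lookup can be done from the AWS CLI; a small sketch with placeholder IDs:

# List the ENIs behind the Storage Gateway VPC endpoint
aws ec2 describe-vpc-endpoints --vpc-endpoint-ids vpce-0123456789abcdef0 \
  --query "VpcEndpoints[].NetworkInterfaceIds"

# Get the private IP address of the ENI created in our SDDC's subnet
aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0 \
  --query "NetworkInterfaces[].PrivateIpAddress"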

VPC Endpoint subnet attachment
VPC Endpoint subnet attachment

We can now input the IP address of our VPC Endpoint in the Storage Gateway creation wizard. Then, click “Next”.

Service Endpoint
Service Endpoint

This brings us to the “Connect to Gateway” window. Here, we can input the IP address assigned to the Storage Gateway VM deployed in VMC. Then, click on “Connect to gateway”.

Connect to Gateway
Connect to Gateway

The next step in the wizard is to activate our gateway. We can review the pre-populated fields and optionally assign a tag to our gateway. When done, click on “Activate Gateway”.

Activate Gateway
Activate Gateway

We’ll get a confirmation message that our Storage (Volume) Gateway is now active. Additionally, we are presented with the local disk configuration window. Here we must ensure that one or more disks are allocated to cache the most frequently accessed data locally on the Volume Gateway itself, and that at least one disk is configured as the upload buffer. When done, click on “Configure logging”.
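The same allocation can be performed with the AWS CLI once the gateway is activated; the gateway ARN and disk IDs below are placeholders you would read from the list-local-disks output:

# Placeholder ARN of the freshly activated gateway
GW_ARN="arn:aws:storagegateway:eu-west-1:111122223333:gateway/sgw-12345678"

# Show the local disks the gateway has detected (the two VMDKs we added earlier)
aws storagegateway list-local-disks --gateway-arn "$GW_ARN"

# Allocate one disk to the cache and the other to the upload buffer
aws storagegateway add-cache --gateway-arn "$GW_ARN" --disk-ids "<disk-id-of-cache-disk>"
aws storagegateway add-upload-buffer --gateway-arn "$GW_ARN" --disk-ids "<disk-id-of-buffer-disk>"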

Configure Cache Disk and Upload Buffer
Configure Cache Disk and Upload Buffer

In this example we are not configuring CloudWatch logging for this Volume Gateway, so we can leave the default of “Disable Logging”. We can now click on “Verify VMware HA” to verify that our Volume Gateway is correctly protected by VMware HA. In VMC we have both VM-level and host-level protection, and vSphere HA is already configured out of the box based on best practices, so it can provide high availability to our Volume Gateway with no extra effort. Let’s click on “Verify VMware HA” to see this in action.

Gateway Logging
Gateway Logging

We are now getting a message asking us to confirm that we want to test VMware HA and also providing us with a reminder that this step is only needed if the Volume Gateway is deployed on a VMware HA enabled Cluster. Click on “Verify VMware HA”.

Verify VMware HA
Verify VMware HA

This starts the HA test, simulating a failure inside the Volume Gateway VM that causes it to be restarted by VMware HA. We are immediately notified that the test is in progress.

HA test in progress
HA test in progress

When the test completes, we are notified that it has completed successfully. We can now click on “Save and continue” to close the wizard.

HA test completed successfully
HA test completed successfully

This brings us back to the AWS Console where we can see that our Volume Gateway (note that the type is reported as “Volume cached”) has been successfully created.

Volume Gateway created successfully
Volume Gateway created successfully

Create a new iSCSI Volume

The next step is to create a Storage Volume to be mounted as block storage by one of our workloads hosted in VMC. We’ll set 10 GiB as the capacity for this example, and check the “New empty volume” radio button. We’ll set a name for our volume in the “iSCSI target name” field, then click “Create volume”.
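For completeness, this is the equivalent AWS CLI call; all identifiers are placeholders, and note that the network interface ID parameter is actually the IP address of the Storage Gateway VM, which is what the iSCSI initiators will connect to:

aws storagegateway create-cached-iscsi-volume \
  --gateway-arn "arn:aws:storagegateway:eu-west-1:111122223333:gateway/sgw-12345678" \
  --volume-size-in-bytes 10737418240 \
  --target-name fileshare-demo \
  --network-interface-id <storage-gateway-vm-ip> \
  --client-token fileshare-demo-001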

Create new empty Volume
Create new empty Volume

The wizard automatically brings us to the “Configure CHAP authentication” window. We can skip this configuration if it’s safe for us to accept connections from any iSCSI initiator. If we want to be more accurate, we can add the list of iSCSI initiators authorized to mount this volume, with a shared secret.

Configure CHAP authentication
Configure CHAP authentication

Our new iSCSI volume is now ready to be mounted inside a Guest Operating System running in a VM hosted in VMC. We will use the Target Name (iqn), Host IP and Host port to connect to the iSCSI Target exposed by the Volume Gateway VM, and we will then discover the available volume and mount it. Take note of the Host IP as we are going to use it in a moment.

iSCSI Volume ready
iSCSI Volume ready

Mount the iSCSI volume inside a Windows VM

Let’s now move to a Windows Server VM hosted in VMC, in which we’ll enable the iSCSI Initiator service. Open Server Manager and, from the “Tools” menu, select “iSCSI Initiator”.

Enable iSCSI Initiator
Enable iSCSI Initiator

A dialog window will inform us that the Microsoft iSCSI service is not running. Click “Yes” to enable and start the service.

Start iSCSI Service
Start iSCSI Service

Once the iSCSI Initiator service is running, we can open the iSCSI Initiator management console by selecting Start (the Windows logo) – Windows Administrative Tools – iSCSI Initiator.

Open iSCSI Initiator Management Console
Open iSCSI Initiator Management Console

In the iSCSI Initiator management console, move to the “Discovery” tab and click on “Discover Portal”. In the “Discover Target Portal” window, enter the Host IP we previously noted in the AWS Console for the volume we just created (this is the IP of the Volume Gateway VM). We can leave the default TCP port 3260 and click “OK”.

iSCSI Initiator Properties
iSCSI Initiator Properties

Switching to the “Targets” tab, the iSCSI target IQN of the previously created volume will appear in the list of discovered targets, showing as “Inactive”. This means we can reach the iSCSI target exposed by the Volume Gateway VM. We must click on “Connect” to actually connect to the volume.

Discovered iSCSI Target
Discovered iSCSI Target

In the “Connect To Target” window we can accept the default settings and click “OK”.

Connect to iSCSI Target
Connect to iSCSI Target

The discovered target will now be shown as “Connected”. At this point, we are able to mount the volume and create a File System on it.

iSCSI Target Connected
iSCSI Target Connected

The iSCSI Initiator Management Console can be safely closed.
To create a new volume based on the iSCSI device we just discovered, we must open the Disk Management Console. Right click on “Start” and select “Disk Management”.

Disk Management
Disk Management

In the Disk Management Console, click on the “Actions” menu, then on “Rescan Disks”. This will rescan the storage subsystem at the VM’s Operating System level, and will detect any new attached device, such as the volume “presented” by the Volume Gateway VM via iSCSI protocol.

Disk Management - Rescan Disks
Disk Management – Rescan Disks

The iSCSI volume will appear in the list of available disks, showing as “Offline”. We can tell it’s the right volume by looking at its size: exactly 10 GB, as we originally created it in the AWS Console. We must complete some additional steps to make the disk available to our Windows users or applications to host their data.

New Disk - Offline
New Disk – Offline

As a first step, we must bring the disk online. Right-click on it and select “Online”.

New Disk - Online
New Disk – Online

The second step is to initialize the disk to make it usable by Windows. Right-click on it and select “Initialize Disk”.

In the “Initialize Disk” window, ensure the disk is selected and choose a partition style option based on your requirements. As we only need a single partition and our volume is quite small, in this scenario it’s safe to leave the MBR (Master Boot Record) option selected. You can read more here about the GPT partition style, and here about the MBR partition style. When done, click on “OK”.

Initialize Disk
Initialize Disk

Now that our disk is online and initialized, we must create a volume on it, formatted with a file system supported by Windows. Right-click on the disk we’ve just initialized and select “New Simple Volume…”

New Simple Volume...
New Simple Volume…

We are presented with the “New Simple Volume Wizard” welcome page, where we can click on “Next”.

New Simple Volume Wizard - Welcome Page
New Simple Volume Wizard – Welcome Page

In the second step of the wizard, we are required to assign a drive letter to our volume. I’m choosing “X” as the drive letter in this example. Click “Next” when done.

New Simple Volume Wizard - Assign Drive Letter
New Simple Volume Wizard – Assign Drive Letter

The third step of the wizard requires us to format our volume. We can select “NTFS” as the file system type, leave “Allocation unit size” at its default value, and optionally choose a volume label. Additionally, it is safe to check the “Perform a quick format” checkbox. When done, click on “Next”.

New Simple Volume Wizard - Format Partition
New Simple Volume Wizard – Format Partition

On the “Completing the New Simple Volume Wizard” page, click “Finish” to complete the creation of the new volume.

New Simple Volume Wizard - Finish
New Simple Volume Wizard – Finish

Our new volume will now be visible in the Windows File Explorer, highlighted by the Label and Drive Letter we set during the creation wizard. In this example, we have our “X:” drive labelled as “FileShare”.

New Volume available in Windows
New Volume available in Windows

Configure vSAN Policy for the Volume Gateway VM

The last step we should take to follow AWS best practices is to reserve all disk space for the Volume Gateway cache and upload buffer disks. AWS recommends creating the cache and upload buffer disks in thick provisioned format. As we are leveraging vSAN in VMC, we don’t have thick provisioning available in the traditional sense; we must use Storage Policies to reserve all disk space for these disks. The first step is to go into our vSphere Client and select “Policies and Profiles” from the main menu.

Policies and Profiles
Policies and Profiles

In the “Policies and Profiles” page, under “VM Storage Policies”, select “Create VM Storage Policy”.

Create VM Storage Policy
Create VM Storage Policy

In the “Create VM Storage Policy” wizard, enter a name for the policy and click “Next”.

Storage Policy Name and description
Storage Policy Name and description

In the “Policy Structure” window, set the flag on “Enable rules for vSAN storage”, then click “Next”.

Storage Policy Structure
Storage Policy Structure

In the vSAN window, under “Availability” configuration, we can leave the default settings and switch to the “Advanced Policy Rules” tab.

vSAN - Availability
vSAN – Availability

Once in the “Advanced Policy Rules” tab, we can change the “Object space reservation” field to “Thick provisioning”, leaving all the other fields at their defaults. Then, click “Next”.

vSAN - Advanced Policy Rules
vSAN – Advanced Policy Rules

Select the “WorkloadDatastore” and click “Next”.

Storage Compatibility
Storage Compatibility

In the next window we can review all the settings we have made and click “Finish”.

New Storage Policy - Review and Finish
New Storage Policy – Review and Finish

We can now move to our Volume Gateway Virtual Machine and select “Edit Settings…” under the “ACTIONS” Menu.

Edit Volume Gateway VM settings
Edit Volume Gateway VM settings

Under the “Virtual Hardware” tab, we can now select the hard disks we assigned to the Volume Gateway as the cache and upload buffer volumes and assign them the newly created Storage Policy. Once done, click “OK”. This will pre-allocate all the configured disk space for both disks, replacing the default thin-provisioning-based policy.

Assign vSAN Policy to Volume Gateway Disks
Assign vSAN Policy to Volume Gateway Disks

One last important thing to mention is how we can protect the data hosted in the volumes we create and expose through the Volume Gateway. All the options are available in the AWS Console, under Storage Gateway, “Volumes”. After selecting the volume we want to work on, under the “Actions” menu we have the option to create an on-demand backup with the AWS Backup managed service, to create a backup plan with AWS Backup, or to create an EBS snapshot (stored in S3). Snapshots enable us to restore a volume to a specific point in time, or to create a new volume based on an existing snapshot.
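Snapshots can also be triggered from the AWS CLI; a minimal example, with the volume ARN as a placeholder:

aws storagegateway create-snapshot \
  --volume-arn "arn:aws:storagegateway:eu-west-1:111122223333:gateway/sgw-12345678/volume/vol-0123456789abcdef0" \
  --snapshot-description "On-demand snapshot of the FileShare volume"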

Create EBS Snapshot
Create EBS Snapshot

This concludes this post.
We have created a Volume Gateway in VMware Cloud on AWS, delivering block disk devices based on iSCSI to our workloads, leveraging S3 as the backend storage.
A volume gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your application servers hosted in VMware Cloud on AWS.
Stay tuned for future content! #ESVR

VMware Cloud on AWS 101 – BlackBoard

Back to basics – VMware Cloud on AWS 101 on a blackboard

Finally back to blogging, and back to the basics.
In this video I use the good old blackboard to discuss the Hybrid Cloud challenges and how VMware Cloud on AWS can help you overcome them.
The first approach to the Cloud is often a lift and shift approach, and VMware Cloud on AWS, paired with HCX, is the solution for a real lift and shift. No downtime, no code changes, all Cloud benefits.

vCloud Director Extender configuration – Tenant side

In this second post about vCloud Director Extender, I’ll guide you through the necessary steps to configure the vCloud Director Extender Service from a Customer (Tenant) perspective.

vCloud Director Extender enables a Tenant to cold or warm migrate its workloads from vSphere to a vCloud Director based public cloud. All the steps are simple and wizard-driven, and the Tenant also has the option to leverage the automatic creation of an L2VPN connection that can stretch networking between on-premises and the vCloud Director cloud.

You can read vCloud Director Extender release notes here.

vCloud Director Extender Tenant deployment

All the initial steps needed on the Tenant side are the same we’ve seen on the Service Provider side: first you download the vCloud Director Extender OVA file, then you deploy it in your source vCenter. See the Service Provider Setup paragraph in my previous post to view all the steps.
The only difference you must pay attention to is choosing “cx-connector” as the deployment type.
vCloud Director Extender - Architecture

vCloud Director Extender Tenant configuration

Once deployed, you can access the vCloud Director Extender Virtual Appliance via https on the configured IP Address.
You will be presented with the OnPrem Setup page.
Enter your Local or vCenter (SSO) credentials to access the application and start the configuration wizard.
vCloud Director Extender - Tenant Setup

Select “SETUP WIZARD” to start the Service configuration.
vCloud Director Extender - Tenant Setup Wizard

In Step 1, you’ll enter the parameters needed to connect to the source vCenter. Then click “Next”.
vCloud Director Extender - OnPrem vCenter

Wait for the confirmation message, then click “Next”.
vCloud Director Extender - OnPrem vCenter OK

In Step 2, you confirm the registration of the vCloud Director Extender as a plugin in the source vCenter, then click “Next”.
vCloud Director Extender - Register Plugin

Wait for the confirmation message, then click “Next”.
vCloud Director Extender - Plugin OK

In Step 3, provide the parameters needed to configure the Tenant Replicator service, then click “Next”.
vCloud Director Extender - Tenant Replicator

Wait for the confirmation message, then click “Next”.
vCloud Director Extender - Replicator OK

In Step 4, you provide the parameters needed to activate the Replicator, then click “Next”.
vCloud Director Extender - Activate Replicator

Wait for the confirmation message, then click “Next”.
vCloud Director Extender - Activate Replicator OK

In Step 5, we’ve finished the OnPrem Setup. Click “Finish”.
vCloud Director Extender - Finish

After the initial wizard that provides the connection to the source vCenter and the Replicator Service setup, you must access the “DC Extensions” tab to provide the necessary parameters to deploy the L2VPN appliance.
If NSX Manager is deployed on Premises, it is mandatory to choose “ADD NSX CONFIGURATION”.
In our scenario, we don’t have NSX on Premises so we’ll choose “ADD APPLIANCE CONFIGURATION” in the L2 Appliance Configuration section.
vCloud Director Extender - L2VPN Appliance Deploy

Provide the needed parameters to deploy the L2VPN Appliance. Pay attention to the following fields: Uplink Network, which maps to the PortGroup that grants Internet connectivity to the appliance, and Uplink Network Pool IP, which is the source IP Address used to connect to the L2VPN Server. Click “Create”.
vCloud Director Extender - L2VPN Appliance Creation

Wait for the message confirming the L2 Appliance configuration.
vCloud Director Extender - L2VPN Appliance Setup OK

This concludes the configuration steps for the L2VPN appliance.
Accessing the Web Client, the Tenant can now configure L2 Extensions and manage workload migration to the Cloud.

 

vCloud Director Extender Tenant operations

After the configuration steps end, you can find a new service registered in the source vCenter inventory: vCloud Director Extender. Click on the icon to launch the management page for the service.
vCloud Director Extender - Web Client Plugin

On the vCloud Director Extender management page, you can find two dashboards that show the overall Migration Health and the DC Extension Status for the L2VPNs.
Select “New Provider Cloud” to connect to your Service Provider.
vCloud Director Extender - Web Client UI

Provide a descriptive name for the target cloud, the URL of the target vCloud Director Organization for the Tenant, the URL of the target Extender Cloud Service (provided by the Service Provider) and finally your Org Admin credentials. Click “Test” to test the connection, wait for the confirmation message, then click “Add”.
vCloud Director Extender - Add Provider

You can now see your target vCloud Director Organization appearing in the Provider Clouds tab.
vCloud Director Extender - Provider Running

We’ll now create a new L2 Extension from onPrem to the Cloud. Access the DC Extensions tab and click on “New Extension”.
vCloud Director Extender - New Extension

Enter a name for this extension, then select the source Datacenter, the source network, the target Provider Cloud, vDC and Org Network. The “Enable egress” option allows you to have a local default gateway in each site with the same IP address, to optimize egress traffic. With egress optimization enabled, packets sent towards the egress optimization IP will be routed locally by the Edge, while everything else will be sent through the bridge.
Click “Start” to enable the connection and make the L2 extension.
vCloud Director Extender - New Extension Start

In the vSphere Web Client task console, you can view the “Trunk” Port Group being created with a SINK port. You can also see the Standalone Edge deployment is in progress.
vCloud Director Extender - Task Console

After the tasks complete, you can see the L2VPN status as “Connected”. The L2 extension between the source and the target network is in place, so you can safely migrate your workloads to the cloud without changing IP addressing, keeping the same connectivity you have on-premises. This is really Hybrid Cloud!
vCloud Director Extender - L2VPN Connected

In the vCloud Director Extender Home, you can now see the DC Extension Status dashboard showing the L2VPN Tunnel is in place.
vCloud Director Extender - L2VPNClient Connected Dashboard

If we look at the L2VPN Statistics in vCloud Director, we can see the Tunnel Status as “up”.
vCloud Director Extender - L2VPN Connected vCD

It’s now time to migrate a workload to the Cloud leveraging this new L2VPN Tunnel to keep connectivity with on Premises. Access the Migrations tab and click on “NEW MIGRATION”.
vCloud Director Extender - New Migration

Select the type of migration you want to perform: a cold migration requires the source VM to be powered off, while a warm migration lets you keep your VM running on-premises, starting a continuous sync to the cloud and completing the cutover when the replica is complete. As the wizard highlights, warm migration is not a vMotion. Click “Next” after the selection.
vCloud Director Extender - Cold Warm

Select the source VM(s), then click “Next”. You can select more than one VM for each migration job.
vCloud Director Extender - Select VM

Specify the target Cloud parameters: target Cloud, vDC, Storage Profile, Org. Network and vApp layout to create if you are migrating more than one VM. Click “Next” when finished.
vCloud Director Extender - Target Parameters

Specify when you want to start the synchronization, the target RPO and the disk type (thin, thick). You can additionally specify a Tag for this migration job. When finished, click “Start”.
vCloud Director Extender - Migration Finish

When the synchronization finishes, the workload will show a status of “Cutover Ready”. This means you can start the cutover process, which will power off the source VM and power on the VM in the cloud. Click “Start Cutover” to specify the cutover parameters and start the process.
vCloud Director Extender - Cutover Ready

Specify the target cloud, the desired final power status of the target VM after cutover, then click “Start”.
vCloud Director Extender - Cutover Start

The workload status will become “Completed” once the cutover finishes.
vCloud Director Extender - Cutover Completed

The migrated VM will be powered off on Premises.
vCloud Director Extender - VM off onPremises

On the target vCloud Director, we’ll find the migrated VM powered on.
vCloud Director Extender - VM in Cloud

Let’s use PING to test connectivity between VM1, still on Premises, and VM2, migrated to the Cloud. The connection will leverage the L2 Extension between on Premises and the Cloud. (Note: DUP! packets message occurs because I’m working in a nested environment).
vCloud Director Extender - Ping Succeed

There’s a 1:1 mapping between source VLAN and target VXLAN when you configure Datacenter Extension in vCloud Director Extender.
To stretch multiple VLANs you must create different Extensions in vCD Extender.

To show this let’s create a new PortGroup on Premises and a new Org vDC Network in the Cloud to see what happens when we need to create an additional network extension.

We configure a new Extension, mapping a local VLAN to the target Org vDC Network. The Status will show as “Connected” when the creation process finishes.
vCloud Director Extender - New L2 Stretch

Looking at the changes automatically made in vCloud Director, we’ll find the new Org Network added as a stretched interface to the existing Site Configuration.
vCloud Director Extender - ESG New L2VPN Config

This concludes the CX Service On Prem configuration.

vCloud Director Extender configuration – Service Provider side

Soon after the release of vCloud Director 9.0, VMware released the replacement for vCloud Connector, a new tool named vCloud Director Extender.

vCloud Director Extender enables a Tenant to cold or warm migrate its workloads from vSphere to a vCloud Director based public cloud. All the steps are simple and wizard-driven, and the Tenant also has the option to leverage the automatic creation of an L2VPN connection that can stretch networking between on-premises and the vCloud Director cloud.

vCloud Director Extender works with vCloud Director 8.20.x and vCloud Director 9.0.

You can read the Release Notes for version 1.0 here.

In this first post about vCloud Director Extender, I’ll guide you through the necessary steps to configure the vCloud Director Extender Service from a Service Provider perspective.

vCloud Director Extender Architecture

Before starting, I want to show you the architecture of the service:

vCloud Director Extender - Architecture

On the Provider Side, we have the following components:

  • vCloud Director Extender: the Virtual Appliance that you download and deploy, known as “CX Cloud Service”. After its deployment and configuration, it is used to provide setup and configuration of the overall CX Service.
  • Cloud Continuity Manager (aka “Replication Manager”): this Virtual Appliance is deployed by the CX Cloud Service and its role is to oversee the work done by the Replicator.
  • Cloud Continuity Engine (aka “Replicator”): this Virtual Appliance is deployed by the CX Cloud Service and its role is to manage the VMs replication between the Tenant’s vSphere environment and the Service Provider’s vCloud Director. The Replicator leverages the new H4 Replication Engine.

On the Tenant side, we only need vCloud Director Extender and the Cloud Continuity Engine.

Let’s start now with the installation and configuration steps on the Service Provider side.

vCloud Director Extender Service Provider Setup

The first step is to access the myVMware website and download the vCloud Director Extender OVA file, located under the “Drivers & Tools” section of the VMware vCloud Director 9.0 download page.
vCloud Director Extender - myVMware

myVMware

Following the “Go to Downloads” link you’ll find the vCloud Director Extender 1.0.0 download page.

vCloud Director Extender - Download

The next step is to deploy the OVA file we just downloaded. Select the target vCenter (typically the Management Cluster vCenter) and select “Deploy OVF Template”.

vCloud Director Extender - Deploy OVF part 1

Choose “Browse” to select a local file.
vCloud Director Extender - Deploy OVF part 2

Choose the OVA file you downloaded previously from the myVMware website and select “Open”. Once you’re back on the Select Template page, click “Next”.
vCloud Director Extender - Deploy OVF part 3

Choose a name for the vCloud Director Extender Virtual Appliance as you want it to appear in your vCenter inventory, then click “Next”.
vCloud Director Extender - Deploy OVF part 4

Select a Target Cluster/Host and click “Next”.
vCloud Director Extender - Deploy OVF part 5

Click “Next” on the Review details page.
vCloud Director Extender - Deploy OVF part 6

Click “Accept” on the EULA page after reading it, then click “Next”.
vCloud Director Extender - Deploy OVF part 7

Select a virtual disk format, a VM storage policy and a target datastore for the virtual appliance, then click “Next”.
vCloud Director Extender - Deploy OVF part 8

Select a destination network (PortGroup) for the virtual appliance, then click “Next”.
vCloud Director Extender - Deploy OVF part 9

In the “Customize Template” tab, you’ll set all the Virtual Appliance Parameters.
In the Service Provider environment, based on vCloud Director, you must choose the deployment type “cx-cloud-service“.
vCloud Director Extender - Deploy OVF part 10

After reviewing your configuration, click “Finish” to deploy the virtual appliance.
vCloud Director Extender - Deploy OVF part 11

vCloud Director Extender Service Provider Configuration

Once deployed, you can access the vCloud Director Extender Virtual Appliance via https on the configured IP Address.
You will be presented with the Cloud Service Setup page.
Enter your Local or vCenter (SSO) credentials to access the application and start the configuration wizard.
vCloud Director Extender - Configuration Wizard

Select “SETUP WIZARD” to start the Service configuration.
vCloud Director Extender - Setup part 1

In Step 1, you’ll enter the parameters needed to connect to the Management vCenter. Then click “Next”.
vCloud Director Extender - Setup part 2

In Step 2, provide the parameters needed to connect to your vCloud Director instance, then click “Next”.
vCloud Director Extender - Setup part 3

In Step 3, provide the parameters needed to connect to your Resource vCenter(s), then click “Next”.
vCloud Director Extender - Setup part 4

Wait for the “Successfully linked Resource vCenter” confirmation message, then click “Next”.
vCloud Director Extender - Setup part 5

In Step 4, specify the parameters needed to create the Replication Manager Virtual Appliance, then click “Next”.
vCloud Director Extender - Setup part 6

You will see a progress bar indicating the Replication Manager creation status.
vCloud Director Extender - Replication Manager creation

In Step 5, set the Root password for the Replication Manager Appliance, specify the Public Endpoint URL needed to reach the Service (optional, only needed if the Appliance is behind a Proxy/NAT), then click “Next”.
vCloud Director Extender - Setup part 7

Wait for the activation confirmation message, then click “Next”.
vCloud Director Extender - Setup part 8

In Step 6, specify the parameters needed to create the Replicator Virtual Appliance, then click “Next”.
vCloud Director Extender - Setup part 9

You will see a progress bar indicating the Replicator creation status.
vCloud Director Extender - Replicator creation

In Step 7, set the Root password for the Replicator Appliance, specify Lookup Service URL and credentials for the Resource vCenter and the Public Endpoint URL needed to reach the Service (optional, only needed if the Appliance is behind a Proxy/NAT), then click “Next”.
vCloud Director Extender - Setup part 10

Step 8 will conclude the Wizard. Click “Finish”.
vCloud Director Extender - Setup part 11

vCloud Director Extender – Service Provider L2VPN Server Setup

The last step to enable the Service, only necessary if L2 stretching is needed between the on premises environment and vCloud Director, is to configure the L2VPN Service on the target Organization Virtual Datacenter(s) Edge Gateway(s).
To create L2VPN connections, you need to convert the Edge Services Gateway(s) to Advanced and grant the needed rights to the vCloud Organization.
You can read one of my previous posts, Self Service NSX Services in vCloud Director, to understand how this works and how to complete this part of the configuration, if needed.

At this stage you can configure the L2VPN Server on the Tenant Edge Gateway (this can be done by the Service Provider or can be delegated to the Customer).
L2VPN Server

When you configure an L2VPN Server, you must configure a Peer Site. At this stage you’ll configure a dummy Peer Site, just to complete the setup on the Tenant side. We can leave this Peer Site disabled because we won’t use it: it will be vCloud Director Extender on the Tenant side that configures the needed Peer Sites on this Edge Gateway.
L2VPN Server - Dummy Peer Site

This concludes the Service Provider side of the vCloud Director Extender Service configuration.
In the next post I’ll show you how to configure the CX Service on the Tenant side.