Tanzu Kubernetes Grid on VMware Cloud on AWS

A few days ago I had the opportunity to prepare a Tanzu Kubernetes Grid environment on VMware Cloud on AWS for a customer demo, and I thought it would make sense to write a blog post about it.

TKG deployment is pretty straightforward, but to speed it up even further I decided to leverage the Demo Appliance for Tanzu Kubernetes Grid, which was released a few days ago as a VMware Fling.

Kudos to William Lam for the release of this Fling, a really powerful way to speed up the installation of TKG and get straight to showcasing its power and flexibility.

For the demo I had to deliver, I decided to follow William’s demo script, provided with the Fling’s instructions, as it fits my purpose well.

Tanzu Kubernetes Grid Overview

VMware Tanzu Kubernetes Grid is an enterprise-ready Kubernetes runtime that packages open source technologies and automation tooling to help you get up and running quickly with a scalable, multi-cluster Kubernetes environment.

In VMware Cloud on AWS, the choice has been made to support the “Tanzu Kubernetes Grid Plus” offering. The “Plus” is an add-on to a traditional TKG installation, which means the customer gets support for several additional open-source products, together with support from a specialized team of Customer Reliability Engineers (CRE) dedicated to TKG+ customers. In this knowledge base article you can find a detailed comparison between TKG and TKG+.

For further details on the solution, you can browse to the official Tanzu Kubernetes Grid page, and read the VMware Tanzu Kubernetes Grid Plus on VMware Cloud on AWS solution brief.

To set up TKG, we leverage a bootstrap environment that can live on your laptop, in a VM, on a server, etc., and is based on kind (Kubernetes in Docker). Basically, with kind we are leveraging Kubernetes to deploy our TKG Management Cluster. Using the Demo Appliance for Tanzu Kubernetes Grid, we have all the components required for the bootstrap environment ready to be used, and we can simply focus on the TKG deployment itself. I really hope to see the Demo Appliance productized and thereby supported for deploying production environments.
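
To illustrate the bootstrap concept (you don’t need to run this yourself, as the Demo Appliance wraps this step for you), the bootstrap boils down to creating a throwaway Docker-hosted cluster with kind; the cluster name below is just an example:

# Create a local, Docker-hosted Kubernetes cluster to act as the bootstrapper
kind create cluster --name tkg-bootstrap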

Once the TKG Management Cluster is in place, we will leverage Cluster API through the TKG Command Line Interface to manage the creation and lifecycle of all the TKG Workload Clusters where our applications will live. The following picture describes the concept well: we leverage Kubernetes to create a Management Cluster, and we then manage all our Workload Clusters from the Management Cluster. (Thanks again to William for the nice diagram.)

TKG on VMC conceptual architecture
TKG on VMC conceptual architecture

For the scope of this article, I’ve already satisfied the few networking and security prerequisites you need to implement in your VMC SDDC before deploying TKG.

Now, let’s go straight to the steps needed to implement TKG on VMC and see it in action: from 0 to Kubernetes on VMC in less than 30 minutes!

Setup Content Library to sync all TKG Virtual Appliance templates

Open the SDDC’s vCenter and from the “Home” screen, select “Content Libraries”.

vSphere Client – Content Library

Click on the “+” button to start the “New Content Library” wizard. In the “Name and location” window, we can enter the name of the new Content Library and click “NEXT”.

New Content Library

In the “Configure content library” window, check the “Subscribed content library” radio button, input the following subscription URL: https://download3.vmware.com/software/vmw-tools/tkg-demo-appliance/cl/lib.json, check the option to download content immediately, then click “NEXT”.

Subscribe to Content URL
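
If you prefer scripting, the same subscribed library could be created from the command line with govc. This is just a sketch of mine, not part of the Fling’s instructions; it assumes govc is installed, the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables point at the SDDC vCenter, and the library name is arbitrary:

# Create a subscribed Content Library backed by the WorkloadDatastore
govc library.create -sub "https://download3.vmware.com/software/vmw-tools/tkg-demo-appliance/cl/lib.json" -ds WorkloadDatastore TKG-Demo-CL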

Accept the authenticity of the subscription host by clicking “YES”.

Trust the subscription Host

In the “Add storage” window, select the “WorkloadDatastore”, then click “NEXT”.

Content Library – Storage

In the “Ready to complete” window, review the settings then click “FINISH”.

New Content Library – review settings

The new Content Library is now created. Select it and move to the “OVF & OVA Templates” tab, where we can wait for our templates to be downloaded.

TKG Demo Content Library

Once the download is completed, right-click on the “TKG-Demo-Appliance_1.0.0.ova” and select “New VM from This Template”. This will start the new VM creation wizard.

Deploy the Demo Appliance for Tanzu Kubernetes Grid

New VM from template

Select a name for the new Virtual Machine, select a location (VM Folder) then click “NEXT”.

New VM – Name and Folder

Select the destination Resource Pool (by default, Compute-Resource-Pool in VMC) and click “NEXT”.

Select Compute Resource

Review the details and, if everything is fine, click on “NEXT”.

New VM – Review Details

Select “WorkloadDatastore” as the target storage, optionally choose a specific Storage Policy to apply to the new VM, then click “NEXT”.

New VM – Select Storage

Select the destination network for the new VM, then click “NEXT”.

New VM – Select Network

In the “Customize template” window, configure the new VM IP address, Gateway, DNS, NTP, password, optional proxy Server, then click “NEXT”.

New VM – Customize template

Review all the settings, then click “FINISH” to start the TKG Demo Appliance VM creation.

New VM – Ready to complete

Before moving to the Management Cluster setup, we need to have in place two Virtual Machine templates that will be used to deploy the environment. Back in the same Content Library we created before, where we have our TKG Demo Appliance template, we must deploy two new VMs using the “photon-3-capv-haproxy-v0.6.3_vmware.1” and “photon-3-v1.17.3_vmware.2” templates and, once the two VMs are created, convert them to vSphere Templates, as we’ll need these in the next steps. Once done, we are ready to proceed.

TKG Management Cluster Setup

We can locate our newly created Demo Appliance for TKG in the vCenter inventory, together with all the other VMs and Templates. We need to take note of the IP address or DNS name assigned to the TKG Demo Appliance, as we’ll access it over SSH and configure everything we need using both the tkg and kubectl CLIs.

Demo Appliance for TKG in vCenter

Now that we have the IP address or the DNS name of the appliance, let’s connect to it via SSH. Once connected, we can start the setup of our TKG Management Cluster by typing the command “tkg init --ui” followed by Enter. As the Demo Appliance doesn’t have a UI, we’ll need to open a second session to it, setting up SSH port redirection to be able to use our local browser to access the TKG Installer Web UI.

SSH connection to Demo Appliance for TKG
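
The port redirection is a standard SSH local forward; something like the following, where the appliance address and user are placeholders for your own environment:

# First session: start the TKG installer on the appliance
ssh root@tkg-demo-appliance.local
tkg init --ui

# Second session: forward local port 8080 to the installer UI on the appliance
ssh -L 8080:localhost:8080 root@tkg-demo-appliance.local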

Once port redirection is in place, we can access the TKG Installer wizard by navigating to “http://localhost:8080” on our local machine. Here, we can select the “DEPLOY ON VSPHERE” option.

TKG Installer – Deploy on vSphere

The first step is to input the vCenter Server IP or DNS name, and credentials, then click “CONNECT”.

TKG Installer – Connect to vCenter

We can safely ignore the notification about vSphere 7.0.0 and click “PROCEED”. The reason we get this informational message is that VMware Cloud on AWS, following a different release cycle than the commercial edition of vSphere, already has vSphere 7 deployed in its SDDCs.

TKG Installer – vSphere 7 notification

The second step is to select the vSphere Datacenter inventory item where we want to deploy our TKG Management Cluster. I’m not providing a real SSH Public Key in my example, but we should provide a “real” key if we want to easily connect to any Kubernetes node once deployed. Click “NEXT”.

TKG Installer – IaaS Provider

In the “Control Plane Settings”, we can choose between several instance types and sizes based on our requirements. For the goal of this demo, it’s safe to pick the “Development” (single control plane node) flavour and the “medium” instance type. Input a name for the Management Cluster and select the API Server Load Balancer template. This is the template we created earlier from the “photon-3-capv-haproxy-v0.6.3_vmware.1” image. Click “NEXT” when done.

TKG Installer – Control Plane Settings

In the next step we must select the Resource Pool, VM Folder and Datastore where the Management Cluster will be created. Then click “NEXT”.

TKG Installer – Resources

Select the SDDC network that will host our TKG VMs, and leave the defaults for the Cluster Service CIDR and the Cluster Pod CIDR unless you have a specific requirement to change them. Click “NEXT” when done.

TKG Installer – Kubernetes Network

Select the vSphere Template configured with the required Kubernetes version. This is the template we created earlier from the “photon-3-v1.17.3_vmware.2” image imported in our Content Library. Click “NEXT” when done.

TKG Installer – OS Image

With all the required fields filled in (green check mark), we can click on “REVIEW CONFIGURATION”.

TKG Installer – Review Configuration

We can then click on “DEPLOY MANAGEMENT CLUSTER” to start the deployment of our TKG Management Cluster. This will take approximately 6 to 8 minutes.

TKG Installer – Start Deployment

After all the configuration steps are completed, we will get an “Installation complete” confirmation message. Our TKG Management Cluster is now ready to be accessed. It’s safe to close the browser window and move back to the Demo Appliance for TKG SSH session we previously opened.

TKG – Installation Complete

Looking at the vCenter Inventory, we can easily see the Management Cluster VMs we’ve just deployed.

vCenter Inventory – TKG Management Cluster

Back in the SSH session, we automatically find ourselves in the context of the TKG Management Cluster. Here, leveraging the TKG Command Line Interface, we can create and manage the lifecycle of our Workload Clusters. Let’s create our first Workload Cluster using the command “tkg create cluster <cluster-name> --plan=<dev_or_prod>”. In my example I’m using the “dev” plan and a cluster name of “it-tkg-wlc-01”. You can see in the lower right of the screenshot the VMs being deployed with the chosen name.

TKG – Create Workload Cluster
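
For reference, the commands used in this step look like the following (the cluster name is the one I chose; syntax as of the TKG 1.0 CLI shipped with the appliance):

# Create a new workload cluster using the single-control-plane "dev" plan
tkg create cluster it-tkg-wlc-01 --plan=dev

# List the workload clusters managed by this management cluster
tkg get clusters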

Once our Kubernetes Workload Cluster is created, we automatically find ourselves in the context of the new Cluster and we can immediately start deploying our applications.

TKG – Workload Cluster Created

I’d like to use the YELB application for this Kubernetes demo. To deploy the application, I start by cloning the YELB git repository to my local machine.
Then, from the directory where we cloned the git repository (in my case “~/demo/yelb”), I create a new namespace with the command “kubectl create namespace yelb”, followed by the resource creation with the command “kubectl apply -f yelb.yaml”.

YELB application resources deployed
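
Put together, the deployment boils down to a few commands; the repository URL below is my assumption of the upstream YELB project, while the manifest name and the local path are the ones used in this demo:

# Clone the YELB sample application repository
git clone https://github.com/mreferre/yelb ~/demo/yelb
cd ~/demo/yelb

# Create a dedicated namespace and deploy the application resources
kubectl create namespace yelb
kubectl apply -f yelb.yaml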

As we have chosen to deploy a Kubernetes cluster with a single Control Plane node and a single worker node, it’s easy to get the IP address needed to access our application: since the Control Plane doesn’t host running pods, the single worker node is certainly hosting the running yelb pods. With the command “kubectl get nodes -o wide” we can obtain its external IP address.

Get node external IP address
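
From the shell, retrieving the address and checking the application could look like this (the IP is a placeholder; 30001 is the NodePort used by the yelb UI in this demo):

# Show nodes together with their internal/external IP addresses
kubectl get nodes -o wide

# Quick check that the yelb UI answers on its NodePort
curl -I http://10.10.10.100:30001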

Once we have the external IP address of the node hosting the application, we can point our browser to that IP on port 30001, where we can see our application working and ready for the demo.

The YELB application

This concludes this post: we’ve seen how we can quickly deploy Tanzu Kubernetes Grid on top of our VMware Cloud on AWS infrastructure.
This gives us a very powerful solution where Virtual Machines, containers (orchestrated by Kubernetes) and native AWS services can coexist and integrate very well to support your application modernization strategy.

In one of the next blog posts, I’ll show you how we can leverage such an integrated solution to quickly lift and shift an application, and subsequently modernize it.

Stay tuned! #ESVR

AWS Volume Gateway integration with VMware Cloud on AWS

Cloud-based iSCSI block storage volumes for Workloads in VMware Cloud on AWS

In this follow-up article I’d like to show you again how the native integration between AWS Services and VMware Cloud on AWS can provide you a lot of powerful capabilities.

We can leverage the AWS Storage Gateway – Volume Gateway to provide our workloads with Cloud-based iSCSI block storage volumes. This enables us to re-think our approach to the Cloud when it comes to migrating storage and storing backups.

The most common use cases for the Volume Gateway are: Hybrid File Services, Backup to Cloud, DR to Cloud, Application migration.

Architecture and Service Description

In the following picture you can see the Architecture of the solution we are about to implement.
AWS Storage Gateway is a Virtual Appliance that exposes iSCSI block volumes to VMware workloads. It has been historically deployed on-premises, but now that we have VMware Cloud on AWS, we can take advantage of the high speed and low latency connection provided by the ENI that connects our SDDC with all the native AWS services in the Connected VPC.
AWS Volume Gateway comes in two modes: stored and cached. In stored mode, the entire data volume is available locally and asynchronously copied to the cloud. In cached mode, the entire data volume is stored in the cloud and frequently accessed portions of the data are cached locally by the Volume Gateway appliance.
By “stored in the Cloud” in this context I mean S3. Volume Gateway is a managed platform service that uses S3 as its backend, even though S3 is completely abstracted from the customer. For this reason, we are not able to see which bucket is in use by the Volume Gateway, nor to manipulate the S3 objects in any way.
If you’re looking for a solution that maps your files 1:1 with S3 objects, check out my previous blog post about the File Gateway.

Basically, with the Volume Gateway we are providing iSCSI block storage volumes backed by S3 to our workloads hosted in VMware Cloud on AWS.

This is the High Level Architecture of the solution we are implementing:

AWS Volume Gateway with VMware Cloud on AWS integration

From a performance perspective, AWS recommends the following for its Storage Gateway appliances: https://docs.aws.amazon.com/storagegateway/latest/userguide/Performance.html#performance-fgw

From a high availability perspective, we can leverage vSphere HA to provide high availability to the Volume Gateway. You can read more about this feature here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Performance.html#vmware-ha
We’ll test vSphere HA with Volume Gateway later, during the deployment wizard.

Preliminary Steps

Get the VPC Subnet and Availability Zone where the SDDC is deployed

We need to accomplish some preliminary steps to gather some information about our SDDC that we’ll need later. In addition, we need to configure some Firewall Rules to enable communication between our SDDC and the Connected VPC where we’ll configure our Gateway Endpoint.

As a first step, we need to access our VMware Cloud Services console and access VMware Cloud on AWS.

VMware Cloud Services

The second step is to access our SDDC by clicking on “View Details”. Alternatively, you can click on the SDDC name.

VMware Cloud on AWS SDDC

Once in our SDDC, we need to select the “Networking & Security” tab.

SDDC details

In the “Networking & Security” tab, we must head to the “Connected VPC” section, where we can find the VPC subnet and AZ that we chose when deploying the SDDC. Our SDDC resides there; therefore, any AWS service we configure in this same AZ will not incur any cross-AZ traffic charges. We need to keep note of the VPC subnet and AZ as we’ll need this information later.

SDDC Networking & Security

Create SDDC Firewall Rules

The second preliminary step we need to perform is to enable bi-directional communication between our SDDC and the Connected VPC through the Compute Gateway (CGW). I’ll not go through the details of the Firewall Rules creation in this post, but simply highlight the result: for the sake of simplicity, in this example we have a rule allowing any kind of traffic from the Connected VPC prefixes and S3 prefixes to any destination, and vice-versa. As you can see, both rules are applied to the VPC Interface, which is actually the cross-account ENI connecting the SDDC to the Connected VPC.
If we wanted to configure more granular security, we could do so leveraging the information highlighted in the AWS documentation here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Resource_Ports.html

Compute Gateway Firewall Rules

Let’s now have a look at the actual implementation of the Volume Gateway in VMC and how it works.

Create the Storage Gateway VPC Endpoint

First, we need to access the AWS Management Console for the AWS Account linked to the VMware Cloud on AWS SDDC and select “Storage Gateway” from the AWS Services (hint: start typing in the “Find Services” field and the relevant services will be filtered for you). Make sure you are connecting to the right Region where your SDDC and Connected VPC are deployed.

AWS Management Console

If you don’t have any Storage Gateway already deployed, you will be presented with the Get Started page. Click on “Get Started” to create your Storage Gateway. (Hint: if you already have one or more Storage Gateways deployed, simply click on “Create Gateway” in the landing page for the service.)

AWS Storage Gateway – Getting Started Page

You will be presented with the Create Gateway wizard. The first step is to choose the Gateway type. In this scenario, we are focusing on iSCSI block volumes and we will select “Volume Gateway”. We’ll additionally select “Cached Volumes” to benefit from low-latency local access to our most frequently accessed data, and then click “Next”.

Volume Gateway – Cached

The next step is to download the OVA image to be installed on our vSphere Environment in VMC. Click on “Download Image”, then click “Next”.

Download Storage Gateway Image for ESXi

Deploy the Storage Gateway Virtual Appliance in VMware Cloud on AWS

Now that we have downloaded the ESXi image, we’ll momentarily leave the AWS Console and move to our vSphere Client to install the Storage Gateway Virtual Appliance. I’m assuming here that we have the VMware Cloud on AWS SDDC already deployed and we have access to our vCenter in the Cloud. SDDC deployment is covered in detail in one of my previous posts here.
Head to the Inventory Object where you want to deploy the Virtual Appliance (e.g. Compute-ResourcePool), right click and select “Deploy OVF Template…”

Deploy OVF Template

Select the previously downloaded Virtual Appliance. This is named “aws-storage-gateway-latest.ova” at the time of this writing. Click “Next”.

Choose Storage Gateway OVA

Provide a name for the new Virtual Machine, then click “Next”.

Provide Virtual Machine Name

Confirm the Compute Resource where you want to deploy the Virtual Appliance (e.g. Compute-ResourcePool). Then, click “Next”.

Select Compute Resource

In the “Review details” page, click “Next”.

Deploy OVF Template – Review details

Select the Storage that will host our Virtual Appliance. In VMware Cloud on AWS this will be “WorkloadDatastore”. Click “Next”.

Workload Datastore

Select the destination network for the Virtual Appliance and click “Next”.

Destination Network

In the “Ready to Complete” window, click “Finish” to start the creation of the Storage Gateway Virtual Appliance.

Ready to complete

We now have our Storage Gateway Appliance in the SDDC’s vCenter inventory. Let’s edit the VM to add some storage to be used for caching. To clarify, in addition to the 80 GB base VMDK, the Storage Gateway Appliance must have at least two additional VMDKs of at least 150 GB each: one used for the cache and one used as the upload buffer. You can see all the Storage Gateway requirements here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html
Select the Volume Gateway VM, select “ACTIONS” then “Edit Settings…”.

Storage Gateway Virtual Appliance – Edit Settings

In the “Edit Settings…” window, under Virtual Hardware, add two new hard disk devices by clicking on “ADD NEW DEVICE” and selecting “Hard Disk”.

Add Hard Disks to Volume Gateway

Select a size of at least 150 GB for each of the new disks. Then click “OK”.

Set new Hard Disk size
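
If you prefer scripting this step, the two disks could also be added with govc; a sketch of mine, assuming govc is configured against the SDDC vCenter and the appliance VM is named “volume-gateway”:

# Add a 150 GB cache disk and a 150 GB upload buffer disk to the appliance VM
govc vm.disk.create -vm volume-gateway -name volume-gateway/cache-disk -size 150G
govc vm.disk.create -vm volume-gateway -name volume-gateway/upload-buffer -size 150G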

Create VPC Endpoint for Storage Gateway

We can now switch back to the AWS Console, where we should be in the “Service Endpoint” page of the Storage Gateway deployment wizard. In case we’re still in the “Select Platform” window, we can simply click “Next”. As we want to have a private, direct connection between the Storage Gateway vApp and the Storage Gateway Endpoint, we will select “VPC” as our Endpoint Type. Click on the “Create a VPC endpoint” button to open a new window where we can create our endpoint.
A VPC Endpoint is a direct private connection from a VPC to a native AWS Service. With a VPC Endpoint in place, we don’t need an Internet Gateway, NAT Gateway or VPN to access AWS Services from inside our VPC, and instances in the VPC do not require public IP addresses.
A VPC Endpoint for Storage Gateway is based on the PrivateLink networking feature and it is an Interface-based (ENI) Endpoint.
If you have already created a Storage Gateway Endpoint based on my previous blog post on File Gateway integration with VMware Cloud on AWS, you can skip the next steps and directly input the VPC endpoint IP address or DNS name in the “VPC endpoint” field.

Service Endpoint

In the “Create Endpoint” wizard, we have a couple of choices to make for our Storage Gateway Endpoint: the Service category will be “AWS Services”, then we’ll select the same AZ and subnet where our SDDC is deployed (note: we could select more than one AZ and subnet for better resilience of the endpoint, but we would potentially incur cross-AZ charges, and cross-AZ resiliency of the Volume Gateway makes little sense unless we also deploy our SDDC in a Stretched Cluster configuration across two AZs). Lastly, we can leave the default security group selected and click on “Create endpoint”.

Create Storage Gateway Endpoint
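
For reference, the equivalent Interface endpoint could also be created with the AWS CLI; all IDs below are placeholders, and the service name follows the com.amazonaws.<region>.storagegateway pattern (eu-central-1 here):

# Create an Interface (PrivateLink) endpoint for Storage Gateway in the Connected VPC
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Interface \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.eu-central-1.storagegateway \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0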

Once the deployment is finished, we’ll be able to see our VPC Endpoint available in the AWS Console. You can see here that the Endpoint type is “Interface”.

VPC Endpoint in the AWS Console

We can now switch back to the Volume Gateway creation wizard, but before that we must take note of the IP address assigned to our Storage Endpoint. We could use either the DNS name or the IP address to configure our Storage Gateway; I’m choosing the IP address in this example. The IP address assigned to the ENI (Storage Endpoint) is visible in the “Subnets” tab, where one ENI is created for each subnet the VPC Endpoint is attached to.

VPC Endpoint subnet attachment

We can now input the IP address of our VPC Endpoint in the Storage Gateway creation wizard. Then, click “Next”.

Service Endpoint

This brings us to the “Connect to Gateway” window. Here, we can input the IP address assigned to the Storage Gateway VM deployed in VMC. Then, click on “Connect to gateway”.

Connect to Gateway

The next step in the wizard is to activate our Gateway. We can review the pre-populated fields and optionally assign a Tag to our Gateway. When done, click on “Activate Gateway”.

Activate Gateway

We’ll get a confirmation message that our Storage (Volume) Gateway is now active. Additionally, we are presented with the local disk configuration window. In this window we must ensure that one or more disks are allocated to cache the most frequently accessed data locally on the Volume Gateway itself, and at least one disk is configured as the upload buffer. When done, click on “Configure logging”.

Configure Cache Disk and Upload Buffer

In this example we are not configuring CloudWatch logging for this Volume Gateway, so we can leave the default “Disable Logging”. We can now click on “Verify VMware HA” to verify that our Volume Gateway is correctly protected by VMware HA. In VMC we have both VM-level and Host-level protection, and vSphere HA is configured out-of-the-box based on best practices to provide high availability to our Volume Gateway. Let’s click on “Verify VMware HA” to see this in action.

Gateway Logging

We now get a message asking us to confirm that we want to test VMware HA, with a reminder that this step is only needed if the Volume Gateway is deployed on a VMware HA-enabled cluster. Click on “Verify VMware HA”.

Verify VMware HA

This starts the HA test, simulating a failure inside the Volume Gateway VM that causes it to be restarted by VMware HA. We are immediately notified that the test is in progress.

HA test in progress

When the test completes, we are notified that it has completed successfully. We can now click on “Save and continue” to close the wizard.

HA test completed successfully

This brings us back to the AWS Console where we can see that our Volume Gateway (note that the type is reported as “Volume cached”) has been successfully created.

Volume Gateway created successfully

Create a new iSCSI Volume

The next step is to create a Storage Volume to be mounted as block storage by one of our workloads hosted in VMC. We’ll set 10 GiB as the capacity for this example, and check the “New empty volume” radio button. We’ll set a name for our volume in the “iSCSI target name” field, then click “Create volume”.

Create new empty Volume
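
As a reference, the same volume could be created with the AWS CLI; a sketch with placeholder ARNs and IDs (10 GiB expressed in bytes; the network interface ID is the gateway’s IP address):

# Create a new empty 10 GiB cached iSCSI volume on the gateway
aws storagegateway create-cached-iscsi-volume \
  --gateway-arn arn:aws:storagegateway:eu-central-1:123456789012:gateway/sgw-12345678 \
  --volume-size-in-bytes 10737418240 \
  --target-name it-volume-01 \
  --network-interface-id 10.10.10.10 \
  --client-token demo-volume-token-01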

The wizard automatically brings us to the “Configure CHAP authentication” window. We can skip this configuration if it’s acceptable for us to accept connections from any iSCSI initiator. For tighter control, we can add the list of iSCSI initiators authorized to mount this volume, each with a shared secret.

Configure CHAP authentication

Our new iSCSI volume is now ready to be mounted inside a Guest Operating System running in a VM hosted in VMC. We will use the Target Name (iqn), Host IP and Host port to connect to the iSCSI Target exposed by the Volume Gateway VM, and we will then discover the available volume and mount it. Take note of the Host IP as we are going to use it in a moment.

iSCSI Volume ready

Mount the iSCSI volume inside a Windows VM

Let’s now move to a Windows Server VM hosted in VMC, in which we’ll enable the iSCSI Initiator service. Open Server Manager, and from the “Tools” menu select “iSCSI Initiator”.

Enable iSCSI Initiator

A dialog window will inform us that the Microsoft iSCSI service is not running. Click “Yes” to enable and start the service.

Start iSCSI Service

Once the iSCSI Initiator service is running, we can open the iSCSI Initiator management console by selecting Start (the Windows logo) – Windows Administrative Tools – iSCSI Initiator.

Open iSCSI Initiator Management Console

In the iSCSI Initiator Management Console, move to the “Discovery” tab and click on “Discover Portal”. In the “Discover Target Portal” window, we can enter the Host IP address we previously noted in the AWS Console for the volume we’ve just created. We can leave the default TCP port 3260 and click “OK”.

iSCSI Initiator Properties

Switching to the “Targets” tab, the iSCSI target iqn of the previously created volume will appear in the list of discovered targets, showing as “Inactive”. This means that we can reach the iSCSI target exposed by the Volume Gateway VM. We must click on “Connect” to actually connect to the volume.

Discovered iSCSI Target

In the “Connect To Target” window we can accept the default settings and click “OK”.

Connect to iSCSI Target

The discovered target will now be shown as “Connected”. At this point, we are able to mount the volume and create a File System on it.

iSCSI Target Connected
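
As a side note, for a Linux guest the equivalent discovery and login would use open-iscsi; a sketch assuming the iscsiadm package is installed, with the portal IP and target IQN as placeholders:

# Discover the targets exposed by the Volume Gateway
iscsiadm --mode discovery --type sendtargets --portal 10.10.10.10:3260

# Log in to the discovered target (IQN as reported by the discovery step)
iscsiadm --mode node --targetname iqn.1997-05.com.amazon:it-volume-01 --portal 10.10.10.10:3260 --login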

The iSCSI Initiator Management Console can be safely closed.
To create a new volume based on the iSCSI device we just discovered, we must open the Disk Management Console. Right click on “Start” and select “Disk Management”.

Disk Management

In the Disk Management Console, click on the “Actions” menu, then on “Rescan Disks”. This will rescan the storage subsystem at the VM’s Operating System level and detect any newly attached device, such as the volume presented by the Volume Gateway VM via the iSCSI protocol.

Disk Management – Rescan Disks

The iSCSI volume will appear in the list of available disks as “Offline”. We can assume it’s the right volume from its size: exactly 10 GB, as we originally created it in the AWS Console. We must complete some additional steps to make the disk available to our Windows users or applications to host their data.

New Disk – Offline

As a first step, we must bring the disk online. Right-click on it and select “Online”.

New Disk – Online

The second step is to initialize the disk to make it usable by Windows. Right-click on it and select “Initialize Disk”.

In the “Initialize Disk” window, ensure the disk is selected and choose a partition style based on your requirements. As we only need a single partition and our volume is quite small, it’s safe to leave the MBR (Master Boot Record) option selected in this scenario. You can read more here about the GPT partition style, and here about the MBR partition style. When done, click on “OK”.

Initialize Disk

Now that our disk is online and initialized, we must create a volume on it, formatted with a File System supported by Windows. Right-click on the disk we’ve just initialized and select “New Simple Volume…”

New Simple Volume…

We are presented with the “New Simple Volume Wizard” welcome page, where we can click on “Next”.

New Simple Volume Wizard – Welcome Page

In the second step of the wizard, we are required to assign a drive letter to our volume. I’m choosing “X” as the drive letter in this example. Click “Next” when done.

New Simple Volume Wizard – Assign Drive Letter

The third step of the wizard requires us to format our volume: we can select “NTFS” as the File System type, leave “allocation unit size” at its default value, and optionally choose a Volume Label. Additionally, it is safe to check the “Perform a quick format” checkbox. When done, click on “Next”.

New Simple Volume Wizard – Format Partition

On the “Completing the New Simple Volume Wizard” page, click “Finish” to complete the creation of the new volume.

New Simple Volume Wizard – Finish

Our new volume will now be visible in the Windows File Explorer, highlighted by the Label and Drive Letter we set during the creation wizard. In this example, we have our “X:” drive labelled as “FileShare”.

New Volume available in Windows

Configure vSAN Policy for the Volume Gateway VM

The last step we should take, following AWS best practices, is to reserve all disk space for the Volume Gateway cache and upload buffer disks. AWS recommends creating cache and upload buffer disks in Thick Provisioned format. As we are leveraging vSAN in VMC, we don’t have Thick Provisioning available in the traditional sense; we must use Storage Policies to reserve all disk space for the disks. The first step is to go into our vSphere Client and select “Policies and Profiles” from the main Menu.

Policies and Profiles

In the “Policies and Profiles” page, under “VM Storage Policies”, select “Create VM Storage Policy”.

Create VM Storage Policy

In the “Create VM Storage Policy” wizard, enter a name for the policy and click “Next”.

Storage Policy Name and description

In the “Policy Structure” window, check “Enable rules for vSAN storage”, then click “Next”.

Storage Policy Structure

In the vSAN window, under “Availability” configuration, we can leave the default settings and switch to the “Advanced Policy Rules” tab.

vSAN – Availability

Once in the “Advanced Policy Rules” tab, we can change the “Object space reservation” field to “Thick provisioning”, leaving all the other fields at their defaults. Then, click “Next”.

vSAN – Advanced Policy Rules

Select the “WorkloadDatastore” and click “Next”.

Storage Compatibility

In the next window we can review all the settings we have made and click “Finish”.

New Storage Policy – Review and Finish

We can now move to our Volume Gateway Virtual Machine and select “Edit Settings…” under the “ACTIONS” Menu.

Edit Volume Gateway VM settings

Under the “Virtual Hardware” tab, we can now select the hard disks we assigned to the Volume Gateway as the cache and upload buffer volumes, and assign them the newly created Storage Policy. Once done, click “OK”. This will pre-assign all the configured disk space to both disks, replacing the default thin-provisioning-based policy.

Assign vSAN Policy to Volume Gateway Disks

One last important thing to mention is how we can protect the data hosted in the volumes we create and expose through the Volume Gateway. All the options are available in the AWS Console, in the Storage Gateway service, under “Volumes”. Selecting the volume we want to work on, under the “Actions” menu we have the option to create an on-demand backup with the AWS Backup managed service, to create a backup plan with AWS Backup, or to create an EBS snapshot (hosted in S3). Snapshots enable us to restore a volume to a specific point in time, or to create a new volume based on an existing snapshot.

Create EBS Snapshot
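
For reference, an on-demand EBS snapshot can also be triggered from the AWS CLI; the volume ARN below is a placeholder:

# Take an on-demand snapshot of a Volume Gateway volume
aws storagegateway create-snapshot \
  --volume-arn arn:aws:storagegateway:eu-central-1:123456789012:gateway/sgw-12345678/volume/vol-12345678 \
  --snapshot-description "On-demand snapshot of it-volume-01"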

This concludes this post.
We have created a Volume Gateway in VMware Cloud on AWS, delivering block disk devices based on iSCSI to our workloads, leveraging S3 as the backend storage.
A volume gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your application servers hosted in VMware Cloud on AWS.
Stay tuned for future content! #ESVR

AWS Storage Gateway for Files integration with VMware Cloud on AWS

SMB/NFS Services for Workloads in VMware Cloud on AWS

In this article I’d like to show you how the native integration between AWS Services and VMware Cloud on AWS can provide you a lot of powerful capabilities. File Services for VMware Cloud on AWS workloads is one of the most common use cases.

We can leverage AWS Storage Gateway – File Gateway to provide our workloads with SMB or NFS shares. This enables us to re-think our approach to the Cloud when it comes to migrating File Servers, storing backups, or leveraging services such as Athena to analyze our data once it is stored in S3.

The most common use cases for the File Gateway are: Online Content Repository, Backup to Cloud, Big Data/Machine Learning/Data Processing leveraging files stored in S3, and vertical applications that create lots of files with long-term retention requirements.

Architecture and Service Description

In the following picture you can see the Architecture of the solution we are about to implement.
AWS Storage Gateway is a Virtual Appliance that exposes File Services to VMware workloads. It has been historically deployed on-premises, but now that we have VMware Cloud on AWS, we can take advantage of the high speed and low latency connection provided by the ENI that connects our SDDC with all the native AWS services in the Connected VPC.
The Storage Gateway Appliance will expose SMB and/or NFS shares to our workloads hosted in VMC. Frequently accessed files will be cached locally by the appliance while all other files will be stored in Amazon S3.

Simply put, we can now leverage S3 as our File Server, with the Storage Gateway Appliance exposing S3 objects in the form of files, through SMB or NFS shares, to VMware Cloud on AWS Workloads.
There’s a 1:1 mapping between a file in the share and the related object in S3, and the folder structure is preserved.
With our files in S3, we can also leverage S3 versioning, lifecycle policies and cross-region replication. We can think of a File Gateway as a file system mount on S3.
In addition, as we have the “magic” ENI-based routing connection between our SDDC and the Connected VPC in place, we don’t need to configure a Proxy Server to be able to access S3 from the Storage Gateway deployed in VMC. Routing works out of the box and is automatically managed for us between VMC and the Connected VPC.
Powerful, no?

This is the High Level Architecture of the solution we are implementing:

File Gateway in VMware Cloud on AWS – Architecture

From a performance perspective, AWS recommends the following: https://docs.aws.amazon.com/storagegateway/latest/userguide/Performance.html#performance-fgw

From a high availability perspective, we can leverage vSphere HA to provide high availability for our File Gateway. You can read more here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Performance.html#vmware-ha
We’ll test vSphere HA with File Gateway later, during the deployment wizard.

Preliminary Steps

Get the VPC Subnet and Availability Zone where the SDDC is deployed

We need to accomplish some preliminary steps to gather some information about our SDDC that we’ll need later. In addition, we need to configure some Firewall Rules to enable communication between our SDDC and the Connected VPC where we’ll configure our Gateway Endpoint.

As a first step, we need to access our VMware Cloud Services console and access VMware Cloud on AWS.

VMware Cloud Services

The second step is to access our SDDC by clicking on “View Details”. Alternatively, you can click on the SDDC name.

VMware Cloud on AWS SDDC

Once in our SDDC, we need to select the “Networking & Security” tab.

SDDC details

In the “Networking & Security” tab, we must head to the “Connected VPC” section, where we can find the VPC subnet and AZ that we chose when deploying the SDDC. Our SDDC resides there; therefore, any AWS service we configure in this same AZ will not incur any cross-AZ traffic charges. We need to keep note of the VPC subnet and AZ as we’ll need this information later.

SDDC Networking & Security

Create SDDC Firewall Rules

The second preliminary step we need to perform is to enable bi-directional communication between our SDDC and the Connected VPC through the Compute Gateway (CGW). I’ll not go through the details of the Firewall Rules creation in this post, but simply highlight the result: for the sake of simplicity, in this example we have a rule allowing any kind of traffic from the Connected VPC prefixes and S3 prefixes to any destination, and vice-versa. As you can see, both rules are applied to the VPC Interface, which is actually the cross-account ENI connecting the SDDC to the Connected VPC.
If we wanted to configure more granular security, we could do so leveraging the information highlighted in the AWS documentation here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Resource_Ports.html

Compute Gateway Firewall Rules

Let’s now have a look at the actual implementation of the File Gateway in VMC and how it works.

Create the Storage Gateway VPC Endpoint

First, we need to access the AWS Management Console for the AWS Account linked to the VMware Cloud on AWS SDDC and select “Storage Gateway” from the AWS Services (hint: start typing in the “Find Services” field and the relevant services will be filtered for you). Make sure you are connecting to the right Region where your SDDC and Connected VPC are deployed.

AWS Management Console

If you don’t have any Storage Gateway already deployed, you will be presented with the Get Started page. Click on “Get Started” to create your Storage Gateway. (Hint: if you already have one or more Storage Gateways deployed, simply click on “Create Gateway” in the landing page for the service.)

AWS Storage Gateway – Getting Started Page

You will be presented with the Create Gateway wizard. The first step is to choose the Gateway type. In this scenario, we are focusing on File Services and we will select “File Gateway”. Click “Next”.

Select Gateway Type

The second step is to download the OVA image to be installed on our vSphere Environment in VMC. Click on “Download Image”, then click “Next”.

Download Storage Gateway Image for ESXi

Deploy the Storage Gateway Virtual Appliance in VMware Cloud on AWS

Now that we have downloaded the ESXi image, we’ll momentarily leave the AWS Console and move to our vSphere Client to install the Storage Gateway Virtual Appliance. I’m assuming here that we have the VMware Cloud on AWS SDDC already deployed and we have access to our vCenter in the Cloud. SDDC deployment is covered in detail in one of my previous posts here: https://www.esvr.cloud/2018/08/10/vmware-cloud-on-aws-lets-create-our-first-vmware-sddc-on-aws/
Head to the Inventory Object where you want to deploy the Virtual Appliance (e.g. Compute-ResourcePool), right click and select “Deploy OVF Template…”

Deploy OVF Template

Select the previously downloaded Virtual Appliance. This is named “aws-storage-gateway-latest.ova” at the time of this writing. Click “Next”.

Choose Storage Gateway OVA

Provide a name for the new Virtual Machine, then click “Next”.

Provide Virtual Machine Name

Confirm the Compute Resource where you want to deploy the Virtual Appliance (e.g. Compute-ResourcePool). Then, click “Next”.

Select Compute Resource

In the “Review details” page, click “Next”.

Deploy OVF Template – Review details

Select the Storage that will host our Virtual Appliance. In VMware Cloud on AWS this will be “WorkloadDatastore”. Click “Next”.

Workload Datastore

Select the destination network for the Virtual Appliance and click “Next”.

Destination Network

In the “Ready to Complete” window, click “Finish” to start the creation of the Storage Gateway Virtual Appliance.

Ready to complete

We now have our Storage Gateway Appliance in the SDDC’s vCenter inventory. Let’s edit the VM to add some storage to be used for caching. To clarify, in addition to the 80 GB base VMDK, the Storage Gateway Appliance must have at least one additional VMDK of at least 150 GB in size. You can see all the Storage Gateway requirements here: https://docs.aws.amazon.com/storagegateway/latest/userguide/Requirements.html
Select the Storage Gateway VM, select “ACTIONS” then “Edit Settings…”.

Storage Gateway Virtual Appliance – Edit Settings

In the “Edit Settings…” window, under Virtual Hardware, add a new disk device by clicking on “ADD NEW DEVICE” and selecting “Hard Disk”.

Add new device – Hard Disk

Select a size of at least 150 GB for the new disk. Then click “OK”.

Set new Hard Disk size

Create VPC Endpoint for Storage Gateway

We can now switch back to the AWS Console, where we should be in the “Service Endpoint” page of the File Gateway deployment wizard. In case we’re still in the “Select Platform” window, we can simply click “Next”. As we want to have a private, direct connection between the Storage Gateway vApp and the Storage Gateway Endpoint, we will select “VPC” as our Endpoint Type. Click on the “Create a VPC endpoint” button to open a new window where we can create our endpoint.
A VPC Endpoint is a direct private connection from a VPC to a native AWS Service. With a VPC Endpoint in place, we don’t need an Internet Gateway, NAT Gateway or VPN to access AWS Services from inside our VPC, and instances in the VPC do not require public IP addresses.
A VPC Endpoint for Storage Gateway is based on the PrivateLink networking feature and it is an Interface-based (ENI) Endpoint.

Service Endpoint

In the “Create Endpoint” wizard, we have a couple of choices to make for our Storage Gateway Endpoint: the Service category will be “AWS Services”, then we’ll select the same AZ and subnet where our SDDC is deployed (note: we could select more than one AZ and subnet for better resilience of the endpoint, but we would potentially incur cross-AZ charges, and cross-AZ resiliency of the File Gateway makes little sense unless we also deploy our SDDC in a Stretched Cluster configuration across two AZs). Lastly, we can leave the default security group selected and click on “Create endpoint”.

Create Storage Gateway Endpoint

Once the deployment is finished, we’ll be able to see our VPC Endpoint available in the AWS Console. You can see here that the Endpoint type is “Interface”.

VPC Endpoint in the AWS Console

We can now switch back to the File Gateway creation wizard, but before that we must take note of the IP address assigned to our Storage Endpoint. We could use either the DNS name or the IP address to configure our File Gateway; I’m choosing the IP address in this example. The IP address assigned to the ENI (Storage Endpoint) is visible in the “Subnets” tab, where one ENI is created for each subnet the VPC Endpoint is attached to.

VPC Endpoint subnet attachment

We can now input the IP address of our VPC Endpoint in the Storage Gateway creation wizard. Then, click “Next”.

Service Endpoint

This brings us to the “Connect to Gateway” window. Here, we can input the IP address assigned to the Storage Gateway VM deployed in VMC. Then, click on “Connect to gateway”.

Connect to Gateway

The next step in the wizard is to activate our Gateway. We can review the pre-populated fields and optionally assign a Tag to our Gateway. When done, click on “Activate Gateway”.

Activate Gateway

We’ll get a confirmation message that our Storage (File) Gateway is now active. Additionally, we are presented with the local disk configuration window. In this window we must ensure that one or more disks are allocated to cache the most frequently accessed files locally on the File Gateway itself. When done, click on “Configure logging”.

Configure Cache Disk

In this example we are not configuring CloudWatch logging for this File Gateway, so we can leave the default “Disable Logging”. We can now click on “Verify VMware HA” to verify that our File Gateway is correctly protected by VMware HA. In VMC we have both VM-level and Host-level protection, and vSphere HA is configured out-of-the-box based on best practices to provide high availability to our File Gateway. Let’s click on “Verify VMware HA” to see this in action.

Gateway Logging

We now get a message asking us to confirm that we want to test VMware HA, with a reminder that this step is only needed if the File Gateway is deployed on a VMware HA-enabled cluster. Click on “Verify VMware HA”.

Verify VMware HA

This starts the HA test, simulating a failure inside the File Gateway VM that causes it to be restarted by VMware HA. We are immediately notified that the test is in progress.

HA test in progress

When the test completes, we are notified that it has completed successfully. We can now click on “Save and continue” to close the wizard.

HA test completed successfully

This brings us back to the AWS Console where we can see that our File Gateway has been successfully created.

File Gateway created successfully

We need to take an additional step before we can actually create our first file share. Until now, we have created a Storage Gateway Endpoint and connected a File Gateway VM to it. To make the Storage Gateway Endpoint capable of routing towards S3, we also have to create an S3 Endpoint in the Connected VPC. The first step to create an S3 Endpoint is to move back to the AWS Console, select VPC as the service, and finally choose “Endpoints”.

VPC Endpoints

Once in the VPC Endpoints window, we can click on “Create Endpoint”.

Create Endpoint

We can now set our Endpoint parameters: select com.amazonaws.<region_code>.s3 as the Service Name, e.g. com.amazonaws.eu-central-1.s3 (hint: filtering on “S3”, it will be the only option you get). Note how the S3 Endpoint is a “Gateway” Endpoint, for this reason it’s based on route table entries.
We can accept the proposed VPC and the main route table (these are the default options). We can then click on “Create Endpoint”.
The result of this configuration is that we now have a new entry in the VPC’s default route table, with the S3 prefix list (pl-xxxxxxx) as the destination and the S3 VPC Endpoint (vpce-id) as the target.

Create S3 VPC Endpoint

S3 VPC Endpoint
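
For reference, the equivalent AWS CLI call for this Gateway endpoint would look like the following; the IDs are placeholders, and note that a Gateway endpoint takes route table IDs rather than subnets, as it is route based:

# Create a Gateway endpoint for S3 attached to the VPC's main route table
aws ec2 create-vpc-endpoint \
  --vpc-endpoint-type Gateway \
  --vpc-id vpc-0123456789abcdef0 \
  --service-name com.amazonaws.eu-central-1.s3 \
  --route-table-ids rtb-0123456789abcdef0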

We could optionally manage access to the Endpoint by attaching a VPC Endpoint Policy to it. We’ll leave the default option in this example.

Once we have created the S3 VPC Endpoint, still in the AWS Console, we can move back to the Storage Gateway service window. Here, we can finally create our first file share. Let’s first add the File Gateway to our Active Directory, so that we can leverage AD authentication and authorization using ACLs.
To continue and add our File Gateway to our Active Directory, select “Edit SMB settings” from the “Actions” menu.

Edit SMB settings

Here, we’ll fill in the required fields for our Active Directory domain and then click on “Save”. Note how the Active Directory status in this phase is still “Detached”.

Active Directory parameters

A “Join domain request sent” message will be shown and the Active Directory status will change to “Join In Progress”.

Join AD in progress

Once the File Gateway has been joined to Active Directory, the wizard will show us a “Successfully joined domain” message, and the Active Directory status will change to “Joined”.

AD successfully joined

We can now click on the “Create file share” button to create our first share.

Create file share

In the “Configure file share settings” window, we must input the name of the S3 bucket that will host our files (note: the bucket must be already in place. Creation of the S3 bucket is out of scope for this post). Click “Next” when done.

Configure File Share settings

In the next window we can select the S3 storage class we want to use. We can safely leave the default “S3 Standard” selected for our scenario. Additionally, we can choose if we want to create a new IAM role or use an existing one, to access our S3 bucket. When done, click “Next”.

Select S3 Storage Class

In the following window, we can change SMB share settings such as authentication method, read/write access and access controls. We can accept the defaults and have Active Directory authentication, read and write access and access control managed by ACLs.
By default, all Active Directory authenticated users are granted read and write access to our share. We can edit this setting to set a more granular access control based on single users or group membership, if needed. Click “Create file share” when done.

SMB share setting

Here we are: our SMB share is ready to be used. We can now provide File Services to our workloads hosted in VMware Cloud on AWS. Let’s take note of the command line proposed in the lower part of the window, as we’ll use it momentarily to map the share to a drive letter in a Windows Virtual Machine.

File Services for VMware Cloud on AWS – Share Created
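
As a reference, the same SMB share could also be created with the AWS CLI; a sketch with placeholder ARNs, where the role is the IAM role that grants the gateway access to the bucket:

# Create an SMB file share backed by an existing S3 bucket
aws storagegateway create-smb-file-share \
  --client-token demo-share-token-01 \
  --gateway-arn arn:aws:storagegateway:eu-central-1:123456789012:gateway/sgw-12345678 \
  --location-arn arn:aws:s3:::my-file-gateway-bucket \
  --role arn:aws:iam::123456789012:role/StorageGatewayS3Access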

Let’s move to a Windows VM hosted in VMC. In my case, the VM is already joined to my Active Directory domain and I’m connected using my Domain User Account credentials.
Pasting the previously copied command into a command prompt maps our share to a drive letter on the Windows machine.

Share mapped to Windows drive
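
The proposed command has the following general form; the drive letter, gateway IP and share name below are placeholders for the values shown in your own console:

net use P: \\10.10.10.20\my-file-gateway-bucket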

The command completed successfully and we now have our SMB share, delivered by the File Gateway and backed by S3.
We can access our P: drive from the Windows File Explorer and upload a file.

File uploaded to the SMB share

We can double-check the File Gateway functionality by accessing the S3 bucket that is backing our share and verifying that our uploaded file is there. And there it is, as expected.

S3 bucket

Configure vSAN Policy for the File Gateway VM

The last step we should take, following AWS best practices, is to reserve all disk space for the File Gateway cache disk(s). AWS recommends creating cache disks in Thick Provisioned format, but as we are leveraging vSAN in VMC, we don’t have Thick Provisioning available in the traditional sense; we must use Storage Policies to reserve all disk space for the cache disk. The first step is to go into our vSphere Client and select “Policies and Profiles” from the main Menu.

Policies and Profiles

In the “Policies and Profiles” page, under “VM Storage Policies”, select “Create VM Storage Policy”.

Create VM Storage Policy

In the “Create VM Storage Policy” wizard, enter a name for the policy and click “Next”.

Storage Policy Name and description

In the “Policy Structure” window, check “Enable rules for vSAN storage”, then click “Next”.

Storage Policy Structure

In the vSAN window, under “Availability” configuration, we can leave the default settings and switch to the “Advanced Policy Rules” tab.

vSAN – Availability

Once in the “Advanced Policy Rules” tab, we can change the “Object space reservation” field to “Thick provisioning”, leaving all the other fields at their defaults. Then, click “Next”.

vSAN – Advanced Policy Rules

Select the “WorkloadDatastore” and click “Next”.

Storage Compatibility

In the next window we can review all the settings we have made and click “Finish”.

New Storage Policy – Review and Finish
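As a side note, the same policy can presumably be created with PowerCLI’s SPBM cmdlets. The sketch below assumes the “Object space reservation” rule is exposed as the VSAN.proportionalCapacity capability (100 = reserve all space) and uses an arbitrary policy name:

New-SpbmStoragePolicy -Name "FGW-Cache-Thick" -AnyOfRuleSets (
  New-SpbmRuleSet -AllOfRules @(
    New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.proportionalCapacity") -Value 100
  )
)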

We can now move to our File Gateway Virtual Machine and select “Edit Settings…” under the “ACTIONS” Menu.

Edit File Gateway VM settings

Under the “Virtual Hardware” tab, we can now select the hard disk we assigned to the File Gateway as the cache volume and apply the newly created Storage Policy to it. Once done, click “OK”. This reserves all the configured disk space for that disk up front, replacing the default thin provisioning behavior.

Assign new Storage Policy to cache disk

This concludes this post.
We have created a File Gateway in VMware Cloud on AWS, delivering an SMB (optionally NFS) share to our workloads and leveraging S3 as the backend storage.
In the next post, we’ll explore the Storage Gateway – Volume Gateway capabilities.
A volume gateway provides cloud-backed storage volumes that you can mount as Internet Small Computer System Interface (iSCSI) devices from your application servers hosted in VMware Cloud on AWS.
Stay tuned! #ESVR

VMware Cloud on AWS 101 – BlackBoard

Back to basics – VMware Cloud on AWS 101 on a blackboard

Finally back to blogging, and back to the basics.
In this video I use the good old blackboard to discuss the Hybrid Cloud challenges and how VMware Cloud on AWS can help you overcome them.
The first approach to the Cloud is often a lift and shift approach, and VMware Cloud on AWS, paired with HCX, is the solution for a real lift and shift: no downtime, no code changes, all the Cloud benefits.

VMware Cloud on AWS – Let’s create our first VMware SDDC on AWS!

VMware Cloud on AWS quick overview

– edited in January 2018 to align with some changes in the Service –

VMware Cloud on AWS was released two years ago and has received a lot of impressive, positive feedback from customers.
There are tons of official and unofficial blog posts out there explaining what the VMware Cloud on AWS service is, the advantages for customers and all the use cases, so I’ll give you just a quick overview.
VMware Cloud on AWS is a unified SDDC platform that integrates the VMware vSphere, VMware vSAN and VMware NSX virtualization technologies and provides access to the broad range of AWS services, together with the functionality, elasticity, and security customers have come to expect from the AWS Cloud.
It integrates VMware’s flagship compute, storage, and network virtualization products (vSphere, vSAN and NSX) along with vCenter management, and optimizes them to run on next-generation, elastic, bare-metal AWS infrastructure.
The result is a complete turn-key solution that works seamlessly with both on-premises vSphere-based private clouds and advanced AWS services.
The service is sold, delivered, operated and supported by VMware, and is delivered over multiple releases with increasing use cases, capabilities, and regions.

SDDC Creation Steps

The first step is to connect to the VMC on AWS console at https://vmc.vmware.com/console/.
The landing page provides an overview of the available SDDCs (if any).

Create SDDC

To create a new SDDC, we have to click on the “Create SDDC” button.

SDDC Properties

The SDDC creation wizard starts: we must choose the AWS Region that will host the SDDC, give the SDDC a unique name, and select the number of ESXi Hosts our Cluster will be made of. The minimum number of Hosts for a production Cluster is 3. You can also create a 1-node Cluster for test and demo purposes; this single-Host Cluster expires after 30 days, but can be converted to a full SDDC before expiration.
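As a side note, everything the wizard does is also exposed through the VMC REST API. The call below is an illustrative sketch only: the org ID and CSP API token are assumed to be already at hand, and the payload fields may differ across API versions:

curl -X POST "https://vmc.vmware.com/vmc/api/orgs/${ORG_ID}/sddcs" \
  -H "csp-auth-token: ${CSP_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "my-demo-sddc", "provider": "AWS", "region": "US_WEST_2", "num_hosts": 3}'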

Stretched Cluster option

When we select Multi-Host for a production deployment, we can choose to have our SDDC (vSphere Cluster) hosted in a single AWS Availability Zone (one subnet) or distributed across two AZs on two different subnets (vSphere Stretched Cluster).

Connect AWS Account

The next step in the wizard is to choose the AWS Account that will be connected to the VMware Cloud account. This enables us to choose the VPC and Availability Zone(s) where we want our SDDC to be hosted. If we use native AWS Services, these will be charged to this AWS Account.

Choose VPC and Subnet (Availability Zone)

In the next step we must choose the VPC and Subnet that will host our SDDC.

Management Subnet CIDR

The final step of the wizard is to choose a CIDR for the Management Network. This step is optional and you can leave the default, as long as the default CIDR doesn’t overlap with any network that will connect to the SDDC (e.g. an on-premises network that will connect to the SDDC through a VPN connection). We can now deploy the SDDC.

Check SDDC creation progress

The progress window is shown. As you can see, we are going to have our 4-node SDDC ready in less than 2 hours!

New SDDC deployed

Once deployed, we’ll be able to see our brand new SDDC under the SDDCs tab in the console.

SDDC details

Clicking on “VIEW DETAILS” we can access the SDDC Summary and all the available options such as adding and removing Hosts from the Cluster or accessing the network configuration.

Add a new Host

Let’s add a new Host to our SDDC. It’s as simple as clicking on “ADD HOST”. If the new Host is only needed to handle a burst in compute demand, we can simply remove it when it is no longer needed, and we’ll be charged for the additional capacity only for the time frame the additional Host existed.

Specify number of Hosts to add

We can specify how many Hosts we want to add, up to the maximum supported size of 16 Hosts per Cluster.
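Host additions can also be scripted against the VMC REST API; as with the earlier sketch, the IDs and token below are placeholders and the exact endpoint shape may vary across API versions:

curl -X POST "https://vmc.vmware.com/vmc/api/orgs/${ORG_ID}/sddcs/${SDDC_ID}/esxs" \
  -H "csp-auth-token: ${CSP_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"num_hosts": 1}'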

New Host(s) addition task progress

We’ll see a task in progress for the new Host addition to the Cluster.

Expanded SDDC

After a few minutes, we’ll have our SDDC made of 5 Hosts.

Manage SDDC Networking

Once we have our SDDC in place, we’ll need to manage it remotely and to configure firewall and NAT rules to publish services. This is managed in the Network tab. When we enter the network configuration tab, the first thing we are shown is a very nice diagram that highlights the network and security configuration of our SDDC.
Here we can see the Management and Compute Gateway configuration overview and any VPN or Firewall rules we have in place.

Management Gateway

Scrolling down, we can see the Management Gateway section, where we can create and manage IPsec VPNs and firewall rules to/from the Management Network.

Compute Gateway

Under the Compute Gateway section we can create and manage IPsec VPNs, L2VPNs, Firewall Rules, and NAT to/from the Compute Networks, where our workloads reside.

Direct Connect

The last section under the Network tab is the Direct Connect section. Here we can manage the Virtual Interfaces (VIFs) in case we have a Direct Connect in place connecting our SDDC with an on-premises or Service Provider hosted environment.

Tech Support real-time CHAT

In the bottom right corner of the console you can always find the Chat button. This is a fantastic feature that enables you to have real-time support from VMware Technical Support.

SDDC Add Ons

In the Add Ons tab we can manage the add-ons available with the VMware Cloud on AWS offering: Hybrid Cloud Extension and Site Recovery.
Hybrid Cloud Extension is included in the VMware Cloud on AWS offering and enables us to seamlessly migrate workloads from remote vCenters to the SDDC.
Site Recovery is a paid add-on that enables our SDDC as a target for Disaster Recovery from remote vCenters.

SDDC Troubleshooting

The troubleshooting tab gives us a tool to check and validate connectivity for a selected use case.

SDDC Settings

The Settings tab provides an overview of all the main settings for the SDDC.

SDDC Support

The Support tab provides all the information we should give to Technical Support when needed.

This concludes the creation of our first SDDC in VMware Cloud on AWS.
In a couple of hours we can have a powerful, full-stack VMware SDDC deployed in AWS, enabling us to quickly respond to many use cases such as Disaster Recovery, geo expansion and global scale, and bursting.
What great stuff!