vCloud Director Allocation Models Deep Dive

vCloud Director comes with three different models to allocate resources to tenants: Reservation Pool, Allocation Pool and Pay-As-You-Go (PAYG).
Here’s a brief description of the three models:

Allocation Pool
Only a percentage of the resources you allocate are committed to the organization VDC. The system administrator controls overcommitment of capacity.

When backed by a provider vDC that has multiple resource pools, compute resources are Elastic.

Pay-As-You-Go (PAYG)
Resources are committed only when vApps are created in the organization VDC. The system administrator controls overcommitment of capacity.

When backed by a provider VDC that has multiple resource pools, compute resources are Elastic.

Reservation Pool
All the resources you allocate are committed upfront to the organization vDC. Users can control the overcommitment of capacity at any time.

 

In the following pages, I want to deep dive into each of the available allocation models.

ALLOCATION POOL (Elastic vDC)

An Elastic Allocation Pool vDC can span multiple Resource Pools, which means it can draw resources from all the Resource Pools backing a Provider vDC.
Allocation Pools are elastic by default in the latest releases of vCloud Director.
You can modify this setting by entering the Administration/System Settings/General/Miscellaneous section as a System Administrator.

Note: “Make Allocation pool Org VDCs elastic” is a general setting that applies to all Allocation Pool vDCs.

IMPORTANT In an Allocation Pool vDC, the vCPU Speed setting is not a per-VM CPU Limit as it is in a PAYG vDC (details on this in the PAYG section). This setting is used to account for CPU usage at the Org vDC level: when a VM is powered on, each configured vCPU in the VM counts towards the total CPU allocation at the vCPU Speed value.

vCPU Speed in an Elastic Allocation Pool is used to control the CPU overcommitment level in the Org vDC.

vCPU Speed is also used to define CPU Reservation at Resource Pool level.

You can think about this value this way: if I have a vDC containing “n” VMs, all with one vCPU and all requiring 100% CPU at the same time, what is the minimum acceptable amount of CPU I want each VM to receive? That is the vCPU Speed value in the Elastic Allocation Pool vDC.

Example:
10 GHz CPU allocation on the vDC
10 VMs, 1 vCPU each
vCPU Speed setting = 1 GHz
If all the VMs require 100% CPU at the same time, how much CPU will each of them receive?
The answer is 1 GHz, the value of the vCPU Speed setting.

From a different point of view: how many vCPUs do you consider acceptable to deploy in an “x” GHz pool while still granting the needed performance to the VMs? All those vCPUs will compete for CPU within this space.

Example:
10 GHz CPU Allocation on vDC

  • vCPU Speed setting = 1 GHz
    • Deployable vCPUs in the vDC = 10
  • vCPU Speed setting = 0.5 GHz
    • Deployable vCPUs in the vDC = 20
  • vCPU Speed setting = 2 GHz
    • Deployable vCPUs in the vDC = 5
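
The relationship between vCPU Speed and deployable vCPUs is simple arithmetic. Below is a minimal Python sketch of that calculation (the function name and structure are mine, purely for illustration, not anything vCloud Director exposes).

```python
# Illustrative only: how many vCPUs fit in an Elastic Allocation Pool vDC,
# assuming every vCPU is counted at the configured vCPU Speed.

def deployable_vcpus(cpu_allocation_ghz: float, vcpu_speed_ghz: float) -> int:
    """Number of vCPUs that fit in the vDC CPU allocation at a given vCPU Speed."""
    return int(cpu_allocation_ghz // vcpu_speed_ghz)

for speed in (1.0, 0.5, 2.0):
    print(f"vCPU Speed {speed} GHz -> {deployable_vcpus(10, speed)} deployable vCPUs")
# vCPU Speed 1.0 GHz -> 10 deployable vCPUs
# vCPU Speed 0.5 GHz -> 20 deployable vCPUs
# vCPU Speed 2.0 GHz -> 5 deployable vCPUs
```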

 

Let's deep dive into the Elastic Allocation Pool.
I start with the creation of an Allocation Pool vDC with the following settings.
Name: Test_Elastic_2
CPU allocation: 4 GHz
CPU resources guaranteed: 50%
vCPU speed: 2 GHz
Memory allocation: 4 GB
Memory resources guaranteed: 50%

This vDC maps to a vSphere Resource Pool with these settings:

As you can see, no Reservation or Limits are set and the Reservation type is “Expandable” for RAM and “Not Expandable” for CPU.

To grant the needed elasticity to the vDC, no resource allocation is made upfront in the Resource Pool backing the vDC. Resource allocation is managed dynamically based on powered on VMs.

For a deep dive on what expandable and not expandable means on Resource Pools, I point you to this Frank Denneman post: http://frankdenneman.nl/2013/02/12/expandable-reservation-on-resource-pools-how-does-it-work/

I create now a vApp containing a single VM with the following parameters.
Name: TTY-1
CPU: 1 vCPU
RAM: 1 GB

Upon vApp creation, the resource allocation in vSphere for the VM (still powered off) has the following settings:

I power on the TTY-1 vApp.
After the VM powers on, vSphere Resource Pool Test_Elastic_2 gets the following settings.
CPU Limit: 4,000 MHz (CPU Allocation on the vDC)
CPU Reservation: 1,000 MHz (50% of the Configured vCPU Speed x total number of vCPU configured on powered on VMs)
Memory Limit: Unlimited
Memory Reservation: 512 MB (50% of Allocated Memory of 1 GB for the only powered on VM)

CPU Reservation is: 50% of the Configured vCPU Speed x total number of vCPU in use (powered on VMs)
In this case, I have 1 Powered on VM with 1 vCPU, so 2 GHz x 1 vCPU x 50% = 1 GHz

Memory Reservation is: 50% of Allocated Memory of 1 GB for the only powered on VM.
In this case, I have 50% * 1 GB = 512 MB
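
The same formulas can be written down as a short Python sketch (my own helper names, shown here only to make the arithmetic explicit; this is not vCloud Director code).

```python
# Resource Pool reservation math for an Elastic Allocation Pool vDC.
# Only powered-on VMs are counted.

def rp_cpu_reservation_mhz(powered_on_vcpus: int, vcpu_speed_ghz: float,
                           cpu_guarantee: float) -> float:
    # <sum of all powered on VMs vCPU> * <vCPU Speed> * <% CPU reserved>
    return powered_on_vcpus * vcpu_speed_ghz * 1000 * cpu_guarantee

def rp_ram_reservation_mb(powered_on_ram_gb: float, ram_guarantee: float) -> float:
    # <sum of all powered on VMs RAM> * <% RAM reserved>
    return powered_on_ram_gb * 1024 * ram_guarantee

# TTY-1 powered on: 1 vCPU, 2 GHz vCPU Speed, 50% guaranteed -> 1,000 MHz
print(rp_cpu_reservation_mhz(1, 2.0, 0.50))  # 1000.0
# 1 GB of RAM, 50% guaranteed -> 512 MB
print(rp_ram_reservation_mb(1.0, 0.50))      # 512.0
```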

When I powered on the vApp, the resource allocation at vDC level has been updated as follows.

With 1 VM powered on, CPU Allocation Used has been updated with the value of the vCPU Speed setting of 2 GHz.

To demonstrate that the vCPU Speed setting on the vDC doesn't impose a CPU usage limit on VMs, I use the following scenario.
The VM sees a full CPU core available, as shown by the command “cat /proc/cpuinfo” in TTY-1:

I create 100% CPU usage on TTY-1 using the command “dd if=/dev/zero of=/dev/null”.
The result is shown in the following screenshot of the real-time CPU usage for the VM.

As no other VMs are powered on, the only powered on VM can get a full CPU Core.

As stated before, the vCPU Speed setting on an Allocation Pool vDC in elastic mode is not a per-VM CPU Limit as it is in PAYG.

Next step:
I create two clones of TTY-1, named TTY-2 and TTY-3, with the same settings as TTY-1.
No changes are made in Resource Allocation in vSphere at this time because the new vApps are still powered off.
I power on TTY-2.
Here’s the result at vSphere Resource Pool Level.

After the second vApp has been powered on, CPU Reservation setting for the vSphere Resource Pool Test_Elastic_2 has been modified accordingly with the following formula:
<sum of all powered on VMs vCPU> * <vCPU Speed Setting in org vDC> * <%CPU Reserved in org vDC> = CPU Reservation
This means: 2 vCPU * 2 GHz * 50% = 2 GHz

For the memory, the Reservation Value is the sum of all VM level reservations, based on Org vDC setting.
This is the formula:
<sum of all powered on VMs RAM> * <% RAM Reserved in org vDC> = RAM Reservation
This means: 2 GB * 50% = 1 GB

If I now try to power on TTY-3, I receive an error message:

The error message is telling me that I've run out of CPU resources.
That's because, at the vCloud Director level, there's no more room in the Org vDC CPU allocation.

Remember: the “CPU allocation used” value is updated based on the number of vCPUs configured on powered-on VMs. In this case, I have 2 powered-on VMs and each VM has only one vCPU. The configured vCPU Speed value at the Org vDC level is 2 GHz, so I have: 2 vCPUs * 2 GHz = 4 GHz.

The error message I get is about CPU because I have 4 GHz CPU allocation used out of 4 GHz CPU allocation, so I’ve exhausted the space for additional vCPUs deployed in the vDC.
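
This admission check can be sketched in a few lines of Python (a simplified model of the behaviour described above, not actual vCloud Director logic).

```python
# vCloud Director CPU admission in an Elastic Allocation Pool vDC: a VM can
# power on only if its vCPUs, priced at the vCPU Speed, still fit in the
# Org vDC CPU allocation.

def can_power_on(vdc_cpu_allocation_ghz: float, vcpu_speed_ghz: float,
                 powered_on_vcpus: int, new_vm_vcpus: int) -> bool:
    used_ghz = powered_on_vcpus * vcpu_speed_ghz
    needed_ghz = new_vm_vcpus * vcpu_speed_ghz
    return used_ghz + needed_ghz <= vdc_cpu_allocation_ghz

# TTY-1 and TTY-2 are already on (2 vCPUs * 2 GHz = 4 GHz used of 4 GHz),
# so powering on the 1-vCPU TTY-3 is refused:
print(can_power_on(4.0, 2.0, powered_on_vcpus=2, new_vm_vcpus=1))  # False
```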

I now create 100% CPU usage on both TTY-1 and TTY-2 using the command “dd if=/dev/zero of=/dev/null”.
Here’s the result:

TTY-1

TTY-2

Depending on the startup order, each VM can peak above 2 GHz until the available headroom is exhausted; then the CPU allocated to the Resource Pool is divided among all the VMs based on Shares.

If you look at the performance graphs, TTY-1 started at a full core, as it was the only running VM at first. When TTY-2 powered on and claimed CPU (both VMs need 100% of a core), the 4 GHz of CPU allocated to the vDC was divided equally between the two VMs, 2 GHz each (remember that the two VMs have the same vCPU and RAM configuration, so identical Shares values).

I now want to be able to power on the third vApp without increasing the overall CPU allocation of the Test_Elastic_2 vDC, so I lower the vCPU Speed setting on the Organization vDC to 1.3 GHz to make room for more vCPUs.

In other words, with a 4 GHz CPU allocation and a vCPU Speed value of 2 GHz, I can power on one or two VMs for a total of 2 vCPUs. With a 4 GHz allocation and a vCPU Speed value of 1.3 GHz, I can power on one, two or three VMs for a total of 3 vCPUs.

After this change, I’m now able to power on TTY-3.

Note that the “CPU allocation used” value has changed to 3.90 GHz, which is 1.3 GHz * 3 vCPUs.

The vSphere Resource Pool is changed accordingly.

The value of CPU Reservation on the Resource Pool is 1,950 MHz.
Remember the formula:
<sum of all powered on VMs vCPU> * <vCPU Speed Setting in org vDC> * <%CPU Reserved in org vDC> = CPU Reservation
3 vCPUs * 1.3 GHz * 50% = 1,950 MHz

The value of Memory Reservation on the Resource Pool is 1,536 MB.
Remember the formula:
<sum of all powered on VMs RAM> * <% RAM Reserved in org vDC> = RAM Reservation
3 GB * 50% = 1.5 GB
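
The same arithmetic, written out as plain Python for the lowered vCPU Speed (a quick check of the numbers above, nothing more):

```python
# 3 powered-on VMs, 1 vCPU and 1 GB each, 50% guaranteed, vCPU Speed 1.3 GHz
vcpu_speed_ghz, guarantee = 1.3, 0.50
vcpus_on, ram_on_gb = 3, 3.0

print(round(vcpus_on * vcpu_speed_ghz, 2))                  # 3.9  GHz CPU allocation used
print(round(vcpus_on * vcpu_speed_ghz * 1000 * guarantee))  # 1950 MHz RP CPU Reservation
print(round(ram_on_gb * 1024 * guarantee))                  # 1536 MB  RP RAM Reservation
```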

 

Summary of Elastic Allocation Pool

The benefit of the Allocation Pool model is that a Virtual Machine can take advantage of the resources of an idle Virtual Machine in the same sub Resource Pool. This model can also take advantage of new resources added to the Provider vDC.

Setting | Value | Notes
CPU Reservation on Resource Pool | SUM (CPU Guaranteed * vCPU Speed * # of vCPUs) | Only powered-on VMs are counted
CPU Limit on Resource Pool | Org vDC Allocation |
RAM Reservation on Resource Pool | SUM (RAM Guaranteed * vRAM) | Expandable RP; only powered-on VMs are counted
RAM Limit on Resource Pool | Unlimited |
CPU Reservation on VM | 0 |
CPU Limit on VM | Unlimited |
RAM Reservation on VM | 0 |
RAM Limit on VM | Unlimited |
vCPU Speed | Defines how many vCPUs can be deployed in the vDC | Minimum value is 0.26 GHz

 

RAM Admission Control: vCloud Director

CPU Admission Control: vCloud Director

RAM Admission Control performed by vCloud Director lets the Tenant deploy VMs that consume the full RAM allocation of the vDC. Memory overhead is charged to the Service Provider.

Example: if the Tenant has 16 GB of RAM in use out of a 20 GB allocation on the vDC, he will be able to power on a VM with 4 GB of RAM, because the memory overhead is handled by the dynamic configuration made at the vSphere level under vCloud Director's control.
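
A minimal sketch of that check, assuming only the configured VM memory counts against the tenant's allocation (the overhead being handled at the vSphere level):

```python
# RAM admission by vCloud Director (Elastic Allocation Pool): overhead is not
# charged to the tenant, so a VM fits as long as its configured RAM does.

def vcd_ram_admission(vdc_ram_allocation_gb: float, ram_in_use_gb: float,
                      new_vm_ram_gb: float) -> bool:
    return ram_in_use_gb + new_vm_ram_gb <= vdc_ram_allocation_gb

# 16 GB in use out of a 20 GB allocation: a 4 GB VM can still power on.
print(vcd_ram_admission(20, 16, 4))  # True
```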

ALLOCATION POOL (non-elastic vDC)

For this test, I change the “Make Allocation pool Org VDCs elastic” setting, making all Allocation Pool vDCs non-elastic.
Non-elastic means that the vDC can draw resources only from a single Resource Pool (typically a vSphere Cluster).

I start with the creation of an Allocation Pool vDC with the following settings.
Name: Test_Allocation_noElastic
CPU allocation: 3 GHz
CPU resources guaranteed: 50%
vCPU speed: not available in non-elastic Allocation Pools
Memory allocation: 3 GB
Memory resources guaranteed: 50%

This maps to a Resource Pool in vSphere with these settings.

Given the static nature of this kind of vDC, the Resource Pool is created with Reservation and Limit settings made upfront, corresponding to the Org vDC settings for allocated and guaranteed resources. The Reservation type is not Expandable for either memory or CPU.

I create a vApp containing a single VM with the following settings.
Name: TTY-1
CPU: 1 vCPU
RAM: 1 GB

No VM-level resource allocation settings are made in vSphere as long as the VM is powered off.

I create TTY-2, TTY-3 and TTY-4 with these settings:
CPU: 1 vCPU
RAM: 2 GB

I power on TTY-1.

Results:
No changes are made at the Resource Pool level. This is because no elasticity needs to be provided and the Resource Pool already has the settings needed to meet the SLA.

Memory Reservation at 50% of the allocation is set at VM level in vSphere at power on.
50% VM Memory Reservation is carved out from Resource Pool’s Reserved Memory.

I drive the TTY-1 vCPU to 100% usage.

As there are no other VMs running, the only vCPU configured in TTY-1 VM could use a full CPU Core, as shown in the following screenshot.

If you look at the Virtual Datacenter settings, you can see that “CPU allocation used” is 0 GHz even though CPU is in use; “Memory allocation used”, instead, shows the value of TTY-1's memory allocation.

Note: in a non-elastic Allocation Pool vDC, the “CPU allocation used” value shown is always “0”. The reason? In this allocation model, vCloud Director doesn't perform CPU Admission Control.

I power on TTY-2.

As before, no Changes at vSphere Resource Pool level are made.

Memory Reservation at 50% of the configured RAM is set at VM level in vSphere.
The 50% VM Memory Reservation is carved out of the Resource Pool's Reserved Memory.

No CPU allocation used is shown on the Org vDC; “Memory allocation used” now shows the value of TTY-1's memory allocation plus TTY-2's memory allocation.

I try to Power On TTY-3 but I receive the following error message

There isn’t enough RAM Reservation!

The reason is that in a non-elastic Allocation Pool the memory overhead is charged to the Tenant, and it's vSphere that controls whether a VM can be powered on. In this case, if you count 1 GB (TTY-1) + 1 GB (TTY-2) + overhead, it's clear that the RAM needed for TTY-3 is not available.

I lower TTY-3 configured RAM to 512 MB.
I can now Power On TTY-3 with this new value.
Here’s resource allocation at Resource Pool Level

I try now the following:
Configure TTY-3 and TTY-4 with these settings:
4 vCPU
32 MB RAM

vCD lets me power on the vApps. The question is: vCD controls memory allocation and allows or denies the power-on of a vApp based on the available reserved memory, but what about CPU?
Based on my configuration, powering on TTY-3 and TTY-4 with the new settings, I have a total of 10 vCPUs configured on all the VMs.
There's no control on the number of VMs you can power on as far as CPU is concerned. But if the VMs demand more than the 3 GHz of CPU I've allocated at the Org vDC level in this example, they will fight for CPU up to the 3 GHz limit set on the vSphere Resource Pool.

You can power on new vApps as long as there is available reserved RAM.

Remember that the minimum RAM reservation is 20%; that's why unlimited VM deployment is impossible in an Allocation Pool. The smaller the amount of RAM allocated to the VMs, the more of them you can deploy, as shown in the sketch below.
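
A rough sketch of that relationship (the 60 MB per-VM overhead is an assumed figure, used only to show why the count stays finite):

```python
# How many identical VMs fit in a non-elastic Allocation Pool before the
# Resource Pool's reserved RAM is exhausted. The guarantee never drops
# below 20%, so the number of deployable VMs is always bounded.

def max_vms(rp_reserved_mb: float, vm_ram_mb: float,
            guarantee: float, overhead_mb: float = 60) -> int:
    per_vm_mb = vm_ram_mb * max(guarantee, 0.20) + overhead_mb
    return int(rp_reserved_mb // per_vm_mb)

reserved_mb = 3 * 1024 * 0.50            # this vDC: 3 GB allocation, 50% guaranteed
print(max_vms(reserved_mb, 1024, 0.50))  # 2  -> only two 1 GB VMs fit
print(max_vms(reserved_mb, 32, 0.50))    # 20 -> twenty tiny 32 MB VMs fit
```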

 

Summary of non-elastic Allocation Pool

Setting | Value | Notes
CPU Limit on Resource Pool | CPU Allocation on vDC |
CPU Reservation on Resource Pool | %Guaranteed of CPU Allocation on vDC |
RAM Limit on Resource Pool | RAM Allocation on vDC |
RAM Reservation on Resource Pool | %Guaranteed of RAM Allocation on vDC | Minimum value is 20%
CPU Limit on VM | Unlimited |
CPU Reservation on VM | 0 |
RAM Limit on VM | Unlimited |
RAM Reservation on VM | %Guaranteed of configured vRAM + overhead |
vCPU Speed | N/A |

 

RAM Admission Control: vSphere.

CPU Admission Control: none.

RAM Admission Control made by vSphere doesn’t allow the Tenant to deploy VMs that consume the full RAM allocation of the vDC. Memory overhead is charged to the Tenant.

Example: if the Tenant has 16 GB RAM in use out of 20 GB allocation on the vDC, he won’t be able to power on a VM with 4 GB RAM because 4 GB + memory overhead will exceed the Org vDC allocation.
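
The contrast with the elastic case can be sketched in the same way; here the overhead (an assumed figure) is charged to the tenant, so the same 4 GB VM no longer fits:

```python
# RAM admission by vSphere (non-elastic Allocation Pool): configured RAM plus
# memory overhead must fit within the vDC allocation.

def vsphere_ram_admission(vdc_ram_allocation_gb: float, ram_in_use_gb: float,
                          new_vm_ram_gb: float, overhead_gb: float = 0.1) -> bool:
    return ram_in_use_gb + new_vm_ram_gb + overhead_gb <= vdc_ram_allocation_gb

# 16 GB in use out of 20 GB: 16 + 4 + overhead > 20, so power-on is refused.
print(vsphere_ram_admission(20, 16, 4))  # False
```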

RESERVATION POOL

I start with the creation of a Reservation Pool vDC with the following settings.
Name: Test_Reservation_Pool
CPU allocation: 5 GHz
CPU resources guaranteed: not available in Reservation Pools
vCPU speed: not available in Reservation Pools
Memory allocation: 5 GB
Memory resources guaranteed: not available in Reservation Pools

This VDC maps to the following vSphere Resource Pool:

In a Reservation Pool, the vSphere Resource Pool is created with Reservation = Limit. Both RAM and CPU are not expandable.

This means that all resource allocation is made upfront. All the CPU and memory resources allocated in a Reservation Pool are immediately and always available to the tenant, even if no VMs are powered on in the vDC. From a vCloud Director perspective, the allocated RAM and CPU are immediately subtracted from the Provider vDC and will never be available to other tenants (no overcommitment is possible from a Service Provider perspective).

I create a new vApp named TTY-1 with:
CPU: 1 vCPU
RAM: 1 GB

The vApp contains a single VM, also named TTY-1. Note that in this specific scenario vCD allows me to change the Shares, Reservation and Limit values for both CPU and RAM of TTY-1 directly from the UI:

This is a key feature of Reservation Pool: it’s the Tenant that controls overcommitment!

By controlling the resource allocation on its VMs, the Tenant can manage the “relative importance” of each VM in its vDC in a very granular way. By assigning different Reservations, Shares and Limits, VMs with different SLA requirements can coexist in the same vDC with no issues.

If the Tenant doesn't configure Reservation or Limit settings on any of its VMs, they can potentially create an unlimited number of VMs. But pay attention here: the Tenant cannot consume more than the allocation its vDC is entitled to, so all the VMs created in the vDC without a Reservation will fight for CPU and/or memory within the limits of CPU and RAM granted (reservation = limit) to the Resource Pool.

Scenario: I set 1 GHz CPU Reservation and 512 MB Memory Reservation on the TTY-1 VM.

This maps to the following settings in vSphere:

No changes are reported on Org vDC until vApps are powered on.

I power on the TTY-1 vApp and the Org vDC allocation changes according to the VM resource allocation I've made:

I create 100% CPU usage on TTY-1; it uses a full core:

I create now four new vApps: TTY-2, TTY-3, TTY-4, TTY-5 with the following settings:
CPU: 1 vCPU (500 MHz Reserved)
RAM: 1 GB RAM (512 MB Reserved)

No changes are reported on Org vDC while vApps are powered off.

I power on all the vApps, reservation used at Org vDC level changes according to this:

Now that I've powered on all the vApps, I'm using 5 vCPUs. Those 5 vCPUs have a total of 3 GHz reserved and then fight (shares apply here) for the remaining 2 GHz, up to the 5 GHz limit of the pool.

Based on this fact, TTY-1 CPU usage goes down to 1 GHz, as you can see in the following screenshot:

The reason TTY-1 gets only 1 GHz of CPU is that it is configured with 1 vCPU and all the VMs have the same Shares value.
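
A simplified model of that division (not the exact vSphere scheduler, just the share math with reservations as a floor) reproduces the 1 GHz figure:

```python
# Each VM is entitled to the larger of its reservation and its
# share-proportional slice of the 5 GHz pool.

def entitlements_mhz(pool_mhz: float, vms: dict) -> dict:
    """vms maps name -> (reservation_mhz, shares)."""
    total_shares = sum(shares for _, shares in vms.values())
    return {name: max(reservation, pool_mhz * shares / total_shares)
            for name, (reservation, shares) in vms.items()}

vms = {"TTY-1": (1000, 1000),
       "TTY-2": (500, 1000), "TTY-3": (500, 1000),
       "TTY-4": (500, 1000), "TTY-5": (500, 1000)}
print(entitlements_mhz(5000, vms))
# Every VM ends up at roughly 1,000 MHz: equal shares split the 5 GHz pool
# five ways, and no reservation exceeds that slice.
```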

I create another vApp, TTY-6, with the following settings:
4 vCPU (I’d like to have 2.1 GHz Reserved)
1 GB RAM (512 MB Reserved)

When I try to do this setting, here’s the result:

The red rectangle is displayed by vCD to alert me that I have only 2,000 MHz available; in fact, if I lower the value, the red alert disappears and vCD lets me proceed.

The number of configured vCPUs or the amount of configured memory doesn't matter in this case, because tenants can overcommit resources, but the sum of the reserved resources of all powered-on VMs cannot exceed the Org vDC allocation.
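
This is easy to express as a sketch: the admission check in a Reservation Pool looks only at the sum of reservations, never at configured vCPUs or configured RAM.

```python
# Reservation Pool admission: the sum of reservations of powered-on VMs must
# stay within the Org vDC allocation; configured capacity is not checked.

def can_reserve(vdc_allocation_mhz: float, reserved_in_use_mhz: float,
                requested_reservation_mhz: float) -> bool:
    return reserved_in_use_mhz + requested_reservation_mhz <= vdc_allocation_mhz

# 5 GHz allocation, 3 GHz already reserved by TTY-1..TTY-5:
print(can_reserve(5000, 3000, 2100))  # False: the 2.1 GHz request is rejected
print(can_reserve(5000, 3000, 2000))  # True:  2 GHz still fits
```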

To demonstrate this, I change the TTY-6 configuration to:
4 vCPU (2,000 MHz Reserved)
4 GB RAM (512 MB Reserved)

I set these values and Power On TTY-6:
CPU Reservation = 2,000 MHz
RAM Reservation = 512 MB

Then I create a 100% CPU Usage on TTY-6 (all other VMs are still running at 100% CPU usage)

The total number of vCPUs in use becomes 9; they fight above their reserved values to reach their allocation, because they all need a full core now that they are all running at 100% CPU usage.

TTY-6 has 4 vCPUs, all other VMs have 1 vCPU each, so shares apply this way:

So, TTY-6 gets this CPU value:

All other VMs get this value (all the same, because they have the same Shares value and are all running at 100% usage):

 

Summary of Reservation Pool

All the resources you allocate are immediately committed to the Org vDC.
Users in the organization can control overcommitment by specifying reservation, limit, and priority settings for individual virtual machines.

Setting | Value | Notes
CPU Limit on Resource Pool | CPU Allocation on vDC |
CPU Reservation on Resource Pool | CPU Allocation on vDC |
RAM Limit on Resource Pool | RAM Allocation on vDC |
RAM Reservation on Resource Pool | RAM Allocation on vDC |
CPU Limit on VM | User defined |
CPU Reservation on VM | User defined |
RAM Limit on VM | User defined |
RAM Reservation on VM | User defined |
vCPU Speed | N/A |

 

RAM Admission Control: vSphere

CPU Admission Control: vSphere

PAYG (Pay As You Go)

I start with the creation of a PAYG vDC with the following settings.
Name: Test_PAYG
CPU allocation: 3 GHz
CPU resources guaranteed: 20%
vCPU speed: 1 GHz
Memory allocation: 3 GB
Memory resources guaranteed: 20%

IMPORTANT In a PAYG vDC, the vCPU speed setting is used to set a VM CPU Limit. Each vCPU powered on in a PAYG vDC will have a CPU Limit enforced at the value configured in the vDC settings.

The new vDC maps to a vSphere Resource Pool with these settings:

No settings are made at the Resource Pool level in vSphere.
This means that no resources are taken upfront until VMs are powered on.

I create a vApp containing a single VM with the following parameters.
Name: TTY-1
CPU: 1 vCPU
RAM: 1 GB

Upon vApp creation, the resource allocation in vSphere for the VM (still powered off) has the following settings:

As you can see, no settings are made at the VM level in vSphere until the VM is powered on.
I power on the TTY-1 vApp.
After the VM powers on, its values are updated in vSphere as follows.

CPU Limit: 1,000 MHz (the vCPU Speed value set in the vDC configuration)
CPU Reservation: 200 MHz (20% of the vCPU Speed, the guaranteed CPU percentage set in the vDC configuration)
Memory Limit: 1,024 MB (the value of the RAM allocated to the VM)
Memory Reservation: 204 MB (20% of the allocated RAM, the guaranteed memory percentage set in the vDC configuration)

As you can see, in the PAYG Allocation Model all the values set in the Org vDC configuration are proportionally applied to each powered on VM in the vDC.
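
A minimal sketch of that per-VM mapping (my own helper, ignoring the small memory overhead added to the reservation):

```python
# PAYG: Org vDC settings are applied proportionally to every powered-on VM.

def payg_vm_settings(vcpus: int, ram_mb: int,
                     vcpu_speed_mhz: float = 1000,
                     cpu_guarantee: float = 0.20,
                     ram_guarantee: float = 0.20) -> dict:
    return {
        "cpu_limit_mhz": vcpus * vcpu_speed_mhz,
        "cpu_reservation_mhz": vcpus * vcpu_speed_mhz * cpu_guarantee,
        "ram_limit_mb": ram_mb,
        "ram_reservation_mb": ram_mb * ram_guarantee,
    }

print(payg_vm_settings(1, 1024))
# cpu_limit_mhz: 1000, cpu_reservation_mhz: 200.0,
# ram_limit_mb: 1024, ram_reservation_mb: 204.8 (shown as 204 MB in the UI)
```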

The following are the changes made at the vSphere Resource Pool level. The Resource Pool is expandable for both CPU and RAM.

Values at Org vDC level have changed according to TTY-1 being powered on and using resources.

 

Org vDC calculates “CPU allocation used” as:
Total number of vCPUs of all powered on VMs * vCPU Speed
In this case: 1 VM * 1 vCPU * 1 GHz = 1 GHz

Org vDC calculates “Memory allocation used” as: sum of the configured memory of all powered-on VMs
In this case: 1 VM * 1 GB = 1 GB
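
The same bookkeeping as plain Python, using this vDC's settings (it also shows why three 1-vCPU / 1 GB VMs fill the vDC later on):

```python
# "Allocation used" at the Org vDC level in PAYG.
vcpu_speed_ghz = 1.0
cpu_allocation_ghz, ram_allocation_gb = 3.0, 3.0

powered_on = [(1, 1.0)]  # (vCPUs, RAM in GB) per powered-on VM: TTY-1 only

cpu_used_ghz = sum(vcpus for vcpus, _ in powered_on) * vcpu_speed_ghz
ram_used_gb = sum(ram for _, ram in powered_on)
print(cpu_used_ghz, ram_used_gb)  # 1.0 1.0

# Three such VMs would use 3 GHz of 3 GHz and 3 GB of 3 GB: the vDC is full.
```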

I create a 100% CPU usage on TTY-1.

CPU usage at 100% is 1,000 MHz, the configured value of vCPU Speed:

I create and power on two new vApps, named TTY-2 and TTY-3, both containing one VM each and with the same settings as TTY-1:
CPU: 1 vCPU
RAM: 1 GB

Resulting settings at vSphere Resource Pool level are:

Resulting settings at the Org vDC level are:

Given these settings, the vDC is full and I cannot power on additional vApps.

 

Summary of PAYG

Resources committed to the organization are applied at the virtual machine level.
The benefit of the pay-as-you-go model is that it can take advantage of new resources added to the Provider vDC.
In the Pay-As-You-Go model, no resources are reserved ahead of time, so a virtual machine might fail to power on if there aren’t enough resources. Virtual Machines operating under this model cannot take advantage of the resources of idle Virtual Machines on the same sub Resource Pool, because resources are set at the virtual machine level.

Setting | Value | Notes
CPU Limit on Resource Pool | Unlimited |
CPU Reservation on Resource Pool | None | Expandable RP
RAM Limit on Resource Pool | Unlimited |
RAM Reservation on Resource Pool | None | Expandable RP
CPU Limit on VM | vCPU Speed * # of vCPUs |
CPU Reservation on VM | %Guaranteed of vCPU Speed * # of vCPUs |
RAM Limit on VM | vRAM |
RAM Reservation on VM | %Guaranteed of vRAM + overhead |
vCPU Speed | User defined | Minimum value is 0.26 GHz

 

RAM Admission Control: vCloud Director

CPU Admission Control: vCloud Director

 
