EHC 4.1.1 Scalability and Maximums

Architecting large-scale cloud solutions with VMware products means dealing with several maximums and limits across the scalability of the different components. People tend to look only at the vSphere limits, but the cloud also has several other systems with different kinds of limits to consider. In addition to vSphere, we have limits with NSX, vRO, vRA, vROps and the underlying storage. Even some of the management packs for vROps have limitations that can affect large-scale clouds. Taking everything into consideration requires quite a lot of tech manual surfing to pull all the limitations together. Let’s inspect a maxed-out EHC 4.1.1 configuration and see where the limitations are.

[Figure: EHC_scalability_larger]

The design above contains pretty much everything you can throw at a VMware-based cloud design. It is based on EHC, so some limitations come from internal design choices, but almost all of the limitations are relevant to any VMware cloud.

Let’s start from the top. vRA 6.x and 7.1 can handle 50 000 VMs. vRA 7.3 goes up to 75 000 VMs, but EHC 4.1.1 uses vRA 7.1. Enough for you? Things are not that rosy, I’m afraid. Yes, you can add 50 000 VMs under vRA management and it will work. It’s the underlying infrastructure that is going to cause you some grey hair. Even vRealize Orchestrator cannot support 50 000 VMs. Instead, it can handle either 35 000 VMs in a standalone install, or 30 000 VMs in a two-node cluster. Cluster mode is the standard with EHC, so for our design, 30 000 VMs is the limitation. This limit only applies to VMs under vRO’s management, for example those utilizing EHC Backup-as-a-Service. You could of course have VMs outside of vRO, so in theory you could still reach the maximum VM count. An additional vRO server is an option, but for EHC we use only one instance for our orchestration needs; anything beyond that is outside the scope of EHC.

Next, let’s look at the vCenter blocks of our design. A single vCenter can go up to 10 000 powered-on VMs, 15 000 in total. So just slap 5 of those under vRA and you’re good to go, right? Wrong! There are plenty of other limiting factors, like 2048 powered-on VMs per datastore with VMware HA turned on, but also things like 1024 VMs per ESXi host and 64 hosts per cluster. These usually won’t be a problem. With EHC, you can have a maximum of 4 vCenters with full EHC Services and 6 vCenters outside of EHC Services. You can max out vRA, but you can only have EHC Services for 40 000 VMs. When we take vRO into account, the limit drops to 30 000 VMs. You can still have 20 000 VMs outside of these services on other vCenters, no problem.
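To make the interplay of these ceilings concrete, here is a minimal Python sketch that derives the effective EHC-managed VM ceiling as the smallest of the individual limits quoted above. The variable names are my own and the numbers come from this post, not from an official sizing tool.

# Minimal sketch: the effective EHC-managed VM ceiling is the smallest of the
# individual limits quoted in this post (not an official sizing tool).
VRA_MAX_VMS = 50_000             # vRA 7.1
VRO_CLUSTER_MAX_VMS = 30_000     # two-node vRO cluster
VCENTER_MAX_POWERED_ON = 10_000  # powered-on VMs per vCenter
EHC_FULL_SERVICE_VCENTERS = 4    # vCenters with full EHC Services

ehc_ceiling = min(VRA_MAX_VMS,
                  VRO_CLUSTER_MAX_VMS,
                  EHC_FULL_SERVICE_VCENTERS * VCENTER_MAX_POWERED_ON)
outside_ehc = VRA_MAX_VMS - ehc_ceiling

print(f"VMs with full EHC Services: {ehc_ceiling}")           # 30000
print(f"VMs manageable outside EHC Services: {outside_ehc}")  # 20000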

[Figure: vCenter_block]

Inside the vCenter block we have other components besides just vCenter. NSX follows the vSphere 6 limits, so it doesn’t cause any issues. NSX Manager is mapped 1:1 with vCenter, so single-vCenter limits apply. You can add multiple vCenters to vRA, so the overall limit will not be lowered by NSX. In addition to NSX, we have two collectors for monitoring: a Log Insight Forwarder and a vROps Remote Collector. Both have some limitations, but they don’t affect the 10 000 VM limit for the block.

As always, storage is a big part of infrastructure design. Depending on your underlying array and replication method, you might not achieve the full 10 000 VMs from a vCenter. For example, vSAN can only have one datastore per cluster. As said before, in combination with HA, the limit per cluster is 2048 powered-on VMs with older vSAN versions. However, this limit doesn’t apply to vSAN 6.x anymore: the maximum for a vSAN cluster is now 6400 VMs, and all of them can be powered on. You can also have only 200 VMs per host with vSAN-based solutions, whereas on a normal cluster the limit is 1024. If you use a vSAN-based appliance such as Dell EMC VxRail, the vCenter limit drops to 6400 VMs, since you can only have one cluster and one datastore.

[Figure: vCenter_with_VxRail]

You most likely want to protect your VMs across sites. There are two methods for this with EHC: Continuous Availability (aka VPLEX/vMSC) and Disaster Recovery (aka RP4VM). The first option, EHC CA, doesn’t limit your vCenter maximum; VPLEX follows vCenter limits the same way NSX does. EHC supports 4 vCenters with VPLEX, which brings the total of CA-protected VMs to 40 000. Again, vRO limits your options a bit, to 30 000 VMs, and yes, you can have VMs outside of VPLEX protection in a separate cluster and separate vCenters. You could have 4 vCenters with 30 000 protected VMs in total with VPLEX, and on top of that 20 000 VMs outside of EHC.

[Figure: vCenter_with_VPLEX]

For EHC DR, the go-to option is RecoverPoint for VMs. RP4VM does not use VMware SRM, but it has its own limits: the maximum for a vCenter pair is 2048 VMs with RP4VM 4.3. These limits will grow with the upcoming RP4VM 5.1 release later this year. You can have two vCenter pairs in EHC with RP4VM, so the total of protected VMs would be 4096. You can have both replicated and non-replicated VMs in the same cluster, so the overall limit is not affected beyond vRO. We do support physical RecoverPoint appliances with VMware SRM as well. SRM can support up to 5000 VMs, and you can use SRM in 1 vCenter pair only. You can mix non-replicated clusters with replicated ones, so the overall limit can still be high. With the combination of RP4VM and SRM, you could have up to 7048 protected VMs between 2 vCenters and 2048 protected VMs between 2 other vCenters, so 9096 DR-protected VMs in total in the system.
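For clarity, the protected-VM arithmetic above can be written out; a quick sketch using only the limits quoted in this paragraph (the variable names are mine):

# Sketch of the DR arithmetic above, using the limits quoted in this post.
SRM_MAX_VMS = 5_000          # physical RecoverPoint + SRM, one vCenter pair
RP4VM_MAX_PER_PAIR = 2_048   # RP4VM 4.3, per vCenter pair

pair_one = SRM_MAX_VMS + RP4VM_MAX_PER_PAIR  # SRM + RP4VM on the same pair: 7048
pair_two = RP4VM_MAX_PER_PAIR                # RP4VM only on the second pair: 2048
print(pair_one + pair_two)                   # 9096 DR-protected VMs in total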

[Figure: vCenter_with_DR]

In addition to replication, backup is crucial as well, and backup design can have interesting side effects. Avamar doesn’t have a fixed VM limit, since ingesting backup data doesn’t have much to do with VM count; the data change rate is what matters. The backup system limit has to be calculated from the backup window, the number of backup proxies and the data change rate. You can have up to 48 proxies associated with an Avamar grid. Each proxy can back up or restore 8 VMs simultaneously, so the total is 384 VMs. This limit is not fixed, but changing it is not recommended. So at any given moment, you can back up 384 VMs. If your backup window is 8 hours and 1 VM takes 10 minutes to back up, your maximum is 18 432 VMs inside the backup window (assuming each batch of 384 VMs starts and finishes within its 10 minutes). There are a lot of assumptions in these calculations, so be careful when designing the backup infrastructure. You can obviously have multiple Avamar grids if needed.
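As a sanity check for backup sizing, here is a rough sketch of the calculation described above. The proxy count, window length and per-VM backup time are the example values from this paragraph; replace them with your own measured figures before drawing conclusions.

# Rough backup-window sizing sketch using the example figures from this post.
proxies = 48                # maximum proxies per Avamar grid
streams_per_proxy = 8       # concurrent backups per proxy (not recommended to change)
backup_window_min = 8 * 60  # 8-hour backup window, in minutes
avg_backup_min = 10         # average time to back up one VM, in minutes

concurrent_vms = proxies * streams_per_proxy   # 384 VMs at any given moment
batches = backup_window_min // avg_backup_min  # 48 back-to-back batches
max_vms_in_window = concurrent_vms * batches   # 18 432 VMs per window
print(concurrent_vms, max_vms_in_window)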

[Figure: Avamar]

If you thought that was complex, wait until we get to the monitoring block. You wouldn’t think that monitoring is a limiting factor, but you would be wrong. There are some interesting caveats that should at least be known and taken into consideration. Obviously the platform limits are what really count, but monitoring is a huge part of a working cloud environment. Log Insight doesn’t really have VM limitations; it only cares about incoming events (Events Per Second, EPS). There is a calculator out there to help with the sizing. You can connect up to 10 vCenters, 10 Forwarders, 1 vROps and 1 AD, among other things, to a single Log Insight instance. Our design uses Log Insight Forwarders to gather data from the vCenters and ship it to a main cluster.

[Figure: Monitoring]

vROps is another matter. Whereas the vROps cluster can ingest a huge number of objects (120 000 with the maximum configuration), the Management Packs can become a bottleneck. The vRealize Automation Management Pack can handle 8000 VMs when using vRA 7.x, and 1000 VMs with vRA 6.2. That’s quite a lot less than the 50 000 VMs vRA can support. It would be nice to have all those VMs monitored, right? The NSX Management Pack also has a limitation of just 2000 NSX objects, but the documentation also says this is the tested limit and that it will work beyond 2000 VMs and 300 edges. This is probably true for the vRA Management Pack as well, but it is not stated in the docs.

Finally, vRealize Business for Cloud adds another limit to the mix. It can handle up to 20 000 VMs across 4 vCenters. Again, this limits the overall number of VMs in the system if all of them need to be monitored. Unfortunately there is no way to exclude some of the VMs in vRA; all of them are monitored by vRB. You can, however, leave some vCenters outside of vRB monitoring. Combining this limit with the others in this post, the total limit comes down to 20 000 VMs, and even lower if you want them monitored by vROps. There are ways to go beyond the limits by simply not monitoring all of the vCenters, or by adding more VMs than is supported and taking a risk. The last part is not recommended, of course.
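To pull the numbers from the last few paragraphs together, here is a small sketch of how the effective ceiling shrinks once every VM also has to be covered by vRB and the vRA Management Pack. The figures are the ones quoted in this post, not an official sizing tool.

# How the ceiling shrinks when every VM must also be costed and monitored.
# Figures are the ones quoted in this post, not an official sizing tool.
limits = {
    "vRA 7.1": 50_000,
    "vRO two-node cluster": 30_000,
    "vRB for Cloud": 20_000,
    "vRA Management Pack (vROps)": 8_000,
}

for name, limit in sorted(limits.items(), key=lambda kv: kv[1]):
    print(f"{name:30s} {limit:>7,}")

print("Ceiling if every limit applies:", min(limits.values()))  # 8000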

As you can see, the limitations are all around us. You are golden up to 2000 VMs, but after that you really need to think about what you want to accomplish and do some serious sizing. Well, maybe a bit before that…

EHC 4.1.1

| Component | VM Limitation | vCenter Limitation | Other | Source |
| --- | --- | --- | --- | --- |
| vCenter 6.0 U2 | 10 000 VMs (powered on); 15 000 VMs (registered); 8000 VMs per cluster; 2048 powered-on VMs on a single datastore with HA | – | 64 ESXi hosts per cluster; 500 ESXi hosts per datacenter; 1000 ESXi hosts per vCenter | vSphere 6 Configuration Maximums |
| vRA 7.1 | 50 000 VMs; 75 000 VMs (vRA 7.3) | – | 1 vRO instance per tenant (XaaS limitation); EHC: 1 tenant allowed with EHC Services | vRealize Automation Reference Architecture |
| vRO 7.1 | 35 000 VMs (standalone); 15 000 VMs per vRO node in cluster mode | 30 vCenters | Single SSO domain | vSphere 6 Configuration Maximums |
| NSX 6.2.6 | vCenter limits | 1 vCenter per NSX Manager | – | vSphere 6 Configuration Maximums |
| vROps 6.2.1 | 120 000 objects (fully loaded vROps, 16 large nodes) | 50 vCenter Adapter instances | 50 Remote Collectors | VMware KB 2130551 |
| Log Insight 3.6 | No VM limitations, only Events Per Second matter | 10 vCenters | 10 Forwarders, 1 AD, 2 DNS servers, 1 vROps | Log Insight Administration Guide; Log Insight Calculator |
| vSAN 6.2 | 200 VMs per host; 6400 VMs per cluster; 6400 powered-on VMs | – | 64 hosts per cluster; 1 datastore per cluster; 1 cluster per VxRail system | vSphere 6 Configuration Maximums; vSAN Configuration Limits |
| vRB for Cloud 7.1 | 20 000 VMs | 4 vCenters | – | vRealize Automation Administration Guide |
| Avamar 7.3 | No fixed limit; depends on data change rate, backup window and number of proxies | 15 vCenters | Maximum of 48 proxies; 8 concurrent backups per proxy | Avamar 7.3 for VMware User Guide; EMC KB 411536 |
| VPLEX / vMSC 5.5 SP1 P2 | 10 000 powered-on VMs; 15 000 registered VMs | – | Follows vCenter limits | vSphere 6 Configuration Maximums |
| RecoverPoint 4.4 SP1 P1 / SRM 6.1.1 | 5000 VMs | 1 vCenter pair allowed in EHC | Can recover a maximum of 2000 VMs simultaneously | VMware KB 2105500 |
| RecoverPoint for VMs 4.3 SP1 P4 | 1024 individually protected VMs; 2048 VMs per vCenter pair; 4096 VMs across EHC | 2 vCenter pairs in EHC | 32 ESXi hosts per cluster; recommended max 512 VMs per vSphere cluster with 4 vRPA clusters; if the EHC Auto Pod is protected with RP4VM, 896 CGs are left for tenant workloads | RP4VM Scale and Performance Guide |
| vRA Mgmt Pack 2.2 | 8000 VMs (with vRA 7 / EHC 4.1.x) | – | Mgmt Pack v2.0+ | vRA Mgmt Pack Release Notes |
| NSX Mgmt Pack 3.5 | 2000 VMs; 300 Edges (will scale beyond) | – | Mgmt Pack v3.5+ | NSX Mgmt Pack Release Notes |

Latency Rules and Restrictions for EHC/vRA Multi-Site

EHC 4.1.1 can support up to 4 sites with 4 vCenters across those sites with full EHC capabilities. On top of that, we can connect up to 6 more external vCenters without EHC capabilities (called vRA IaaS-Only endpoints). There are many things to consider when designing a multi-site solution, but one aspect is often omitted: latency. If you have two sites near each other, latency is usually not a problem. When it comes to multiple sites across continents, we need to consider the round-trip times (RTT) between the main EHC Cloud Management Platform and the remote sites very carefully. There are many components that connect over the WAN to the main instance of EHC and vice versa, and some of them are sensitive to high latency. It’s also difficult to find exact information on what kind of latencies are tolerated; often the manuals just state something like “can be deployed in high latency environments”. Let’s try to find some common factors for designing multi-site environments. For a quick glance at the latencies involved, scroll down to the summary table at the end of this post. For a bit more explanation, read on!

There are several different scenarios for connecting remote sites to EHC:

  1. EHC protected between 2 sites with Disaster Recovery connected with up to 2 remote sites/vCenters
  2. EHC protected between 2 sites with Continuous Availability connected with up to 3 remote sites/vCenters
  3. Single Site EHC connected with up to 3 remote sites/vCenters
  4. Single Site EHC connected with up to 3 remote sites/vCenters and up to 6 vRA IaaS-Only Sites

It’s also possible to have a mix of different protection scenarios (e.g. DR+CA+Single Site), but from a latency perspective, these 4 scenarios cover all the limitations. The concept of a “site” is intentionally vague, since it can mean many things in different environments. Often 1 site = 1 vCenter, but we don’t limit EHC like that. For simplicity’s sake, let’s assume that for latencies we have 1 site with 1 vCenter. Within a site you would have a local area network, and between sites a wide area network. If you have several vCenters within a site, latency is normally not an issue, since the local network is fast and has low latency.

For the first two scenarios, storage latency comes into play. The EHC components are almost identical between the different scenarios, so the differences in latency come from the storage layer. Depending on the replication technology, latency requirements can be very strict. In a pure Disaster Recovery deployment, the storage latencies can be up to 200 ms when using RecoverPoint with asynchronous replication. However, if Continuous Availability is used, then the requirement drops to under 10 ms! With Continuous Availability, we utilise vSphere Metro Storage Cluster (vMSC) for an active-active implementation of EHC. The underlying storage technology is VPLEX and depending on the setup (cross-connect or non-cross-connect), the latency needs to be under 5 ms or 10 ms.

The last two scenarios seem simple: you just hook another vCenter to EHC as a vRA endpoint and you’re done, right? Unfortunately it goes a bit deeper than that. The diagram below shows the different components needed for a fully EHC-capable remote endpoint/site. The things we have to worry about when it comes to latency are the VMware Platform Services Controller (PSC), the vRealize Automation Agents, the SMI-S Provider, the Log Insight Forwarders and the vRealize Operations Manager Remote Collectors. All of those components connect back to the main site, and all of them have latency requirements. If NSX is part of the solution, then the NSX Manager in the remote site will also connect to the primary NSX Manager on the main site. Backup has some limitations as well, but backup replication is usually not the limiting factor.

[Figure: EHC_multisite_basic]

PSC is perhaps the most sensitive component of them all. There are no official hard requirements for PSC, but according to VMware engineering working with PSC, a comfortable limit is under 100 ms within the same SSO domain. If you go over that, the risk of conflicting changes increases too much. This is a very important point, because EHC requires that all the remote vCenters with full EHC capabilities are part of a single EHC SSO domain. It all comes down to vRealize Orchestrator, which provides orchestration services for the whole EHC. The SRM plugin for vRO requires that all the vCenters connected to it use the same SSO domain for authentication. We also want to keep all of our EHC SSO architectures the same across different implementations, so that future upgrades are easier. Since we rely on vRO for all of our orchestration needs, this becomes a limitation for multi-site. Therefore the latency needs to be under 100 ms when connecting remote sites or vCenters to EHC. Note that this applies to DR scenarios as well: although RecoverPoint can tolerate latencies up to 200 ms, PSC cannot. Since PSC is a crucial part of the solution, it will also define the maximum latency, unless some other component requires a smaller RTT.

The Log Insight Forwarders do not have a published latency requirement, but if you deploy them across a very high latency WAN, the delay can be compensated for by increasing the Worker count. For vROps Remote Collectors, the latency needs to be under 200 ms. The vRA Agents have a vague description of their latency requirements; all that is said about them is that they “may be distributed to the geography of the endpoint”. I take it that latency is not an issue in any setup. The next component is the SMI-S Provider. It is used with Dell EMC VNX and VMAX to control the storage arrays with Dell EMC ViPR: the SMI-S Provider automates storage provisioning tasks and ViPR orchestrates them. There is a requirement of less than 150 ms latency between ViPR and the SMI-S Provider.

The connection between NSX Managers does not have a published latency requirement, but the maximum is set to the Cross-vCenter vMotion latency of under 150 ms in the NSX Cross-vCenter documentation. This makes sense, since you should be able to do a vMotion between sites, and that feature requires the latency to be under 150 ms. The same latency limit applies to the NSX Controllers: in a Cross-vCenter setup, the NSX Controllers need to communicate with the remote hosts and the secondary NSX Manager.

You can also use vRA IaaS-Only endpoints with EHC. These endpoints are vCenters without any EHC services available to them (e.g. Backup-as-a-Service). You can either add them to the same EHC SSO domain as the rest of the endpoints, or create a new one. If you decide to go with a disjointed SSO domain, then obviously the PSC latency limit does not apply. In this case the tolerated latency depends purely on what other components are used with the remote endpoint. At minimum, vRA Agents, a vROps Remote Collector and a Log Insight Forwarder should be there, so the maximum latency would be 200 ms.
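A handy way to reason about the tolerated RTT for any given endpoint is to take the strictest requirement among the components actually deployed there. Below is a small Python sketch using the latency figures from this post; the component sets and names are just examples, not a complete list.

# Tolerated RTT for an endpoint = the strictest (lowest) latency requirement
# among the components deployed there. Figures are the ones quoted in this post.
latency_limits_ms = {
    "PSC (same SSO domain)": 100,
    "SMI-S Provider <-> ViPR": 150,
    "NSX Manager (secondary)": 150,
    "vROps Remote Collector": 200,
    "RP4VM vRPA cluster": 200,
}

def max_rtt(deployed_components):
    """Return the strictest latency requirement for a set of components."""
    return min(latency_limits_ms[c] for c in deployed_components)

# IaaS-Only endpoint in its own SSO domain: collectors and agents only.
print(max_rtt(["vROps Remote Collector"]))                           # 200 ms
# Full EHC endpoint in the shared EHC SSO domain:
print(max_rtt(["PSC (same SSO domain)", "vROps Remote Collector"]))  # 100 ms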

Lastly, we need to consider backup replication in all of the scenarios. If the EHC solution has Backup-as-a-Service functionality included, then we need to replicate the backup data between sites. This can be done with either Avamar replication or Data Domain replication (Avamar is the frontend for both replication methods). There is no fixed latency requirement for backup replication. It should be under 100 ms, but the products can be configured to allow higher latencies. Anything under 100 ms can be handled with the default replication settings; anything higher, and the implementation team needs to tweak the settings.

To make this even more complex, we have to look at the primary site as well and see which of its components connect to the remote sites. And on top of that, there are some external services, mainly Active Directory, that can cause headaches. The primary site has two components that collect information from the remote sites: VMware vRealize Business for Cloud and Dell EMC ViPR SRM. vRB is not an issue, but ViPR SRM requires a separate Collector deployed in the remote site. The latency between the ViPR SRM Backend and the Collector can be up to 300 ms, but between the Collector and the vCenter/storage, only 20 ms is acceptable.

The final thing to look at are the external services. Active Directory can cause significant delay in login times if there is a high latency between the domain controller and the remote component. EHC uses Active Directory authentication across the solution for user authentication and component integration, so it is a crucial service. It is recommended to have a local domain controller at the remote site to ensure fast login times if there is significant latency in the WAN connection.

You might also have configuration management tools in use, like Puppet. There are no latency limits available for Puppet, but there are customers out there running a multi-Master implementation with a Master of Masters in high latency environments without issues. You will most likely face issues with other components in the environment before Puppet becomes a problem.

The summary of all the latencies:

| Component | Communicates with | Latency Requirement | Source |
| --- | --- | --- | --- |
| VPLEX Cluster (Remote Site) | VPLEX Cluster (Primary Site) | < 5 ms (cross-connect); < 10 ms (non-cross-connect) | VPLEX 5.5.x Support Matrix |
| PSC (Remote Site) | PSC (Primary Site) | < 100 ms | VMware KB 2113115 |
| Avamar Server (Remote Site) | Avamar Server (Primary Site) | < ~100 ms | Avamar 7.3 and Data Domain System Integration Guide |
| Data Domain (Remote Site) | Data Domain (Primary Site) | < ~100 ms | Avamar 7.3 and Data Domain System Integration Guide |
| SMI-S Provider | ViPR | < 150 ms | ViPR Support Matrix |
| NSX Manager (Secondary) | NSX Manager (Primary) | < 150 ms | NSX-V Multi-site Options and Cross-VC NSX Design Guide |
| vCenter (Remote Site) | vCenter (Primary Site) | < 150 ms (vMotion) | VMware KB 2106949 |
| vROps Remote Collector | vROps Cluster Master Node | < 200 ms | VMware KB 2130551 |
| vRPA Cluster (Remote Site) | vRPA Cluster (Primary Site) | < 200 ms | RecoverPoint for VMs Scale and Performance Guide |
| RPA Cluster (Remote Site) | RPA Cluster (Primary Site) | < 200 ms | RecoverPoint 4.4 Release Notes |
| ViPR SRM Collector | ViPR SRM Backend | < 300 ms | ViPR SRM 3.7 Performance and Scalability Guidelines |
| vCenter (Remote Site) | vRealize Business for Cloud | Not specified, but latency sensitive | Architecting a VMware vRealize Business Solution |
| vCenter (Remote Site) | vRealize Orchestrator | Not specified, but latency sensitive | Install and Configure VMware vRealize Orchestrator |
| vRA Agent | vRA Manager/Web | Not specified, high latency OK | VMware KB 2134842 |
| Log Insight Forwarder | Log Insight Cluster | Not specified, high latency OK | Log Insight 3.6 Documentation |

vRA 7.0 Reinitiate Installation Wizard

EDIT:

Well, there’s actually a CLI command to do the steps below. Just run vcac-vami installation-wizard activate, and it does everything for you. Sounds like a clean approach to me.

[Figure: vra7_re_enable_wizard_4]

/EDIT

vRA 7.0 comes with a nice Installation Wizard to ease the process of getting vRA and the IaaS components running. However, if you butterfinger the installation process by clicking Cancel without really reading what vRA is trying to tell you (I did that), you cannot access the Installation Wizard again. It’s a manual installation after that, and I’m not going to do that anymore. So, let’s fix it.

[Figure: vra7_re_enable_wizard_cancel]

Log into the vRA appliance using the SSH client of your choice and navigate to the /etc/vcac folder. There’s a nice little file called vami.ini, and the only thing it contains is this setting:

[Figure: vra7_re_enable_wizard]

Jackpot! Edit the file with vi, change false to true, save the file and restart the VAMI service:
service vami-lighttp restart

Log back into the VAMI at https://fqdn_of_vra:5480, and the Installation Wizard is reinitiated. If you need to close the Wizard and don’t want to go through this hassle again, click Logout in the upper right corner.

Bypass Traverse Checking in vRealize Automation 6.2

This week we ran into an interesting problem during a Federation Enterprise Hybrid Cloud implementation. We had the solution implemented with VMware vRealize Automation 6.2, and everything was running smoothly. The vRA implementation was done as a distributed install, so after configuration we moved on to some vRA component failover testing. We succeeded in failing over the primary component to the secondary component on all of the different VMs (vRA appliance, IaaS Web, IaaS Model Manager + IaaS DEM-O, IaaS DEM-Workers and IaaS DEM-Agents), but failback was not successful. After diving into the component logs, we found a distinctive error on almost all of them:

System.Configuration.ConfigurationErrorsException: Error creating the Web Proxy specified in the 'system.net/defaultProxy' configuration section

[Figure: bypass_traverse_checking]
This error appeared on the IaaS Model Manager, DEM-O and DEM-Agents; the rest of the components failed back just fine. The symptom was that the VMware vCloud Automation Center Service and the DEM-Orchestrator Service would not start on reboot. We also could not restart them manually, because they would fail and the same error would appear in the logs. The error points to a .NET call that sets a default proxy according to the web.config file found on the Windows host (Windows\Microsoft.NET\Framework\v4.0.30319\Config). These files were not modified by us, so the error did not make a lot of sense. The web.config file also exists in some of the vRA folders, so the origin of this error was unclear. It was clear, however, that the vRA code was calling a .NET function during service start, and that call failed due to a proxy error. This led us on a wild goose chase with VMware support for a couple of days. It became clear that the security settings or the Windows image were blocking the services from starting. Since the issue only occurred after rebooting the Windows VMs, GPO seemed the prime suspect. After engaging the customer’s Windows/Security SME, we found the root of the problem.

Our customer runs a high security environment, so their GPO settings are very strict. The vRA manuals tell you to give these rights to the IaaS service user:

"Log on as a batch job" and "Log on as a service"

We verified these settings, and everything was according to the vRA requirements. However, the customer SME found out by using Process Explorer (https://technet.microsoft.com/en-gb/sysinternals/bb896653.aspx) that the service user needs an extra local privilege called Bypass Traverse Checking. Process Explorer actually shows that the user needs a privilege called SeChangeNotifyPrivilege, but that privilege also grants the user Bypass Traverse Checking. More info on that here: http://blogs.technet.com/b/markrussinovich/archive/2005/10/19/the-bypass-traverse-checking-or-is-it-the-change-notify-privilege.aspx. After giving the user the new rights, all of the services restarted!