Migrate vRA VMs Between Clusters in EHC

Moving vRA VMs around can sometimes be tricky. vRA itself doesn’t provide many tools for VM migrations, so some operations have to happen in the vSphere layer before they can be reflected in vRA. Although vRA Data Collection works marvellously, some changes still require manual intervention in vRA to make sure things stay in order. Simple resource changes are easy for vRA, but migrating a VM from one cluster to another is a different story: vRA Reservations are mapped to underlying clusters, and changing clusters ultimately means changing Reservations.

[Screenshot: before migration]

There are three aspects to look at when migrating VMs: vSphere, vRA, and 3rd party integration (in this case EHC). We are running EHC 4.1.2 with vRA 7.3 using Backup-as-a-Service, so the question is: what do we need to do to migrate the VMs?

First things first. Let’s assume we have a new cluster configured. It needs to be added to a vRA Fabric Group, and on top of that it needs to be onboarded to EHC using the existing catalog items. Next we need storage. You can either create new datastores with Storage-as-a-Service or mount the existing datastores to the new cluster. If you are using Storage Reservation Policies (SRPs), you need to be careful: if an SRP is attached to your VM, you shouldn’t change the mapped SRP. You can change the datastore as long as it uses the same SRP. It turns out that the SRP is not updated during data collection, and you cannot change it in the portal either. You can Reconfigure a VM, but the dropdown for SRP doesn’t give you any options beyond the one you already have. Changing the SRP would require either reimporting the VM or modifying the values through the vRA APIs.
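
If you go the route of mounting existing datastores, a storage rescan on the new cluster’s hosts makes them visible there. Below is a minimal pyVmomi sketch of that step; the vCenter address, credentials and the cluster name are placeholders, and it assumes the shared LUNs are already zoned/presented to the new hosts.

```python
# A minimal pyVmomi sketch (placeholder names throughout): rescan the hosts in
# the new cluster so existing shared datastores become visible there.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Find the target cluster by name
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "NewCluster")
    view.DestroyView()

    for host in cluster.host:
        storage = host.configManager.storageSystem
        storage.RescanAllHba()   # pick up LUNs already zoned/presented to the host
        storage.RescanVmfs()     # detect the existing VMFS datastores on them
        print(f"Rescanned {host.name}")
finally:
    Disconnect(si)
```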

[Screenshot: SRP change]

When using EHC BaaS, we also need new Avamar Proxies in the new cluster. Additionally, we need a new Reservation that maps to the new cluster. To make things easier, it is best to use the same Business Group for both Reservations/clusters. Although you can change the Business Group as well, the VM owner needs to be a member of both the original and the target Business Group, and if a Reservation Policy is assigned to the VM, VMware doesn’t support changing the Business Group at all. Once all this is done, we can actually start moving VMs.
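
Before moving anything, it can be handy to verify the new Reservation from the API side as well. Here is a hedged sketch against the vRA 7.x REST API that grabs a bearer token and lists the Reservations; the hostname, tenant and credentials are placeholders.

```python
# Request a vRA bearer token and list the Reservations to confirm the new one
# exists. Hostname, tenant and credentials below are placeholders.
import requests

VRA = "https://vra.example.com"
auth = {"username": "ehc-admin@example.com", "password": "********", "tenant": "vsphere.local"}

token = requests.post(f"{VRA}/identity/api/tokens", json=auth, verify=False).json()["id"]
headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# List reservations from the reservation-service and print their names
resp = requests.get(f"{VRA}/reservation-service/api/reservations",
                    params={"limit": 100}, headers=headers, verify=False).json()
for res in resp.get("content", []):
    print(res["name"])
```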

Now we can migrate. In the vSphere Web Client, simply migrate the VMs to the new cluster and optionally to a new datastore. All of this happens online, provided of course that the VM has network connectivity, so make sure your VLANs are available on both clusters. Once the migration is done, vRA Data Collection needs to be initiated on both clusters, old and new.
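
If you prefer scripting over the Web Client, the same migration can be done with pyVmomi. This is only a sketch with placeholder names (VM, cluster, datastore), and it targets the cluster’s root resource pool so DRS picks the host.

```python
# The vSphere-layer migration scripted with pyVmomi instead of the Web Client.
# All names are placeholders; on a non-DRS cluster you would also set spec.host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find(content, vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    vm = find(content, vim.VirtualMachine, "bg1-vm01")
    cluster = find(content, vim.ClusterComputeResource, "NewCluster")
    datastore = find(content, vim.Datastore, "NewCluster-DS01")

    spec = vim.vm.RelocateSpec()
    spec.pool = cluster.resourcePool      # move compute to the new cluster
    spec.datastore = datastore            # optional: Storage vMotion as well

    task = vm.RelocateVM_Task(spec)
    print(f"Started relocation of {vm.name}: {task.info.key}")
finally:
    Disconnect(si)
```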

[Screenshot: migrate in vSphere]

[Screenshot: run data collection]

If the data collection happens a bit out of sync, the VM might enter a “Missing” state in vRA. Don’t worry, just run the data collection again on the target cluster and it should be fine. This happens when data collection runs against the original cluster after the VM has already been moved away.

[Screenshot: missing VM]

After data collection is completed, almost everything is adjusted automatically. The cluster is updated in vRA, and if you changed the datastore, the storage path is updated as well. The crucial part, the Reservation, keeps its original value though, in our case Res2-BG1. We need to fix that.

[Screenshot: wrong reservation]

Although the VM works fine in vRA, it is consuming quota from the old Reservation, and we don’t want that. To change it, go to Infrastructure -> Managed Machines, hover over the VM and select Change Reservation. From the dropdown, select your new Reservation and off you go! In a few seconds, the VM shows up in the Managed Machines list under the correct Reservation. There are other ways to do this: vRO ships with a default VMware workflow called “Change Reservation of an IaaS Virtual Machine”, which you could use to loop over VMs and automate the whole process, and vRealize CloudClient has the command “vra provisioneditem change reservation”, which can be scripted as well.
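
As a rough illustration of the scripted route, the sketch below loops over a few machines and calls CloudClient for each one. The exact parameter names (--id, --reservationname) are assumptions on my part, so check the built-in help for “vra provisioneditem change reservation” in your CloudClient version; the example also assumes you have already stored your vRA login so CloudClient runs non-interactively.

```python
# Loop over machines and call CloudClient for each one. The flag names below
# are assumptions, not verified CloudClient syntax; a stored vRA login is
# assumed so the commands run without prompting.
import subprocess

CLOUDCLIENT = "/opt/cloudclient/bin/cloudclient.sh"   # placeholder path
NEW_RESERVATION = "Res1-BG1-NewCluster"               # placeholder Reservation name
MACHINES = ["bg1-vm01", "bg1-vm02", "bg1-vm03"]       # placeholder machine names

for vm in MACHINES:
    cmd = [CLOUDCLIENT, "vra", "provisioneditem", "change", "reservation",
           "--id", vm, "--reservationname", NEW_RESERVATION]   # flags are assumptions
    subprocess.run(cmd, check=True)
```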

[Screenshot: change reservation]

[Screenshot: end state]

If you are dealing with regular VMs, that’s it. vRA is now in sync with vSphere. There may be some extra steps ahead, depending on the 3rd party integration you use. VM Custom Properties are not updated by data collection unless those properties were inserted by vRA. Properties like VirtualMachine.Storage.Name and VirtualMachine.Admin.Hostname are updated automatically since vRA created them during provisioning, but any other 3rd party property fields need to be updated manually after the migration if needed. Usually these are not a problem, but they should be checked anyway to keep things in sync. The property values can be modified with the Reconfigure action in the portal, or through the vRA API. In our case we are using EHC with BaaS, and there are some custom properties related to that. Fortunately, as long as the Avamar Proxies are in place, the VMs remain in their original VM Folder, and the datastores are configured in Avamar, the custom properties do not need to be updated when moving VMs.
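
If you want to double-check the properties without clicking through Reconfigure, something like the following read-only sketch against the vRA 7.x catalog API will dump them for a machine. The hostnames, tenant, credentials and VM name are placeholders, and the resourceData layout may differ slightly between versions.

```python
# A read-only sketch: look up the migrated VM in the vRA catalog and dump its
# resource data entries (which include properties such as
# VirtualMachine.Storage.Name). Names and credentials are placeholders.
import requests

VRA = "https://vra.example.com"
auth = {"username": "ehc-admin@example.com", "password": "********", "tenant": "vsphere.local"}
token = requests.post(f"{VRA}/identity/api/tokens", json=auth, verify=False).json()["id"]
headers = {"Authorization": f"Bearer {token}", "Accept": "application/json"}

# Find the catalog resource for the VM by name
resource = requests.get(f"{VRA}/catalog-service/api/consumer/resources",
                        params={"$filter": "name eq 'bg1-vm01'"},
                        headers=headers, verify=False).json()["content"][0]

# Fetch the full resource and print its data entries
detail = requests.get(f"{VRA}/catalog-service/api/consumer/resources/{resource['id']}",
                      headers=headers, verify=False).json()
for entry in detail.get("resourceData", {}).get("entries", []):
    print(entry["key"], "=", entry.get("value", {}).get("value"))
```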

[Screenshot: custom properties]
