vRA 7.0 Reinitiate Installation Wizard

EDIT:

Well, there’s actually a CLI command to do the steps below. Just run vcac-vami installation-wizard activate, and it does everything for you. Sounds like a clean approach to me.
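For reference, the command is run on the vRA appliance itself (via SSH or the appliance console):

vcac-vami installation-wizard activate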


/EDIT

vRA 7.0 comes with a nice Installation Wizard to ease the process of getting the vRA and IaaS components running. However, if you butterfinger the installation by clicking Cancel without really reading what vRA is trying to tell you (I did that), you cannot access the Installation Wizard again. After that you're stuck with a manual installation, and I'm not doing that anymore. So, let's fix it.



ESXi 5.5 U3 with new E1000 drivers for Intel NUC

I’ve been running a home lab for a while on a couple of Intel NUCs. They have been absolutely brilliant, but they do have a slight problem with network card drivers. ESXi 5.5 didn’t support the Intel 82579LM Ethernet Controller inside the NUCs, so you had to create a custom ISO image with the correct drivers. Today I wanted to upgrade my old ESXi 5.5 image to the latest one (I’m prepping to give vRA 7.0 a go). To my happy surprise, VMware has included the necessary E1000 drivers in the ESXi 5.5 U3 (and newer) package! Oh joy, no more custom images!
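If you want to verify the result after upgrading, a quick check on the host (via SSH) will show whether the onboard NIC was picked up without a custom driver VIB:

esxcli network nic list

The 82579LM should now show up as a vmnic straight from the stock image.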


Add or Upgrade Plugins in vCenter Orchestrator Cluster Mode

Configuring vCenter / vRealize Orchestrator in cluster mode can be tricky. There are several sources of information on how to do it, including the official VMware documentation (vCenter Orchestrator 5.5.2 Documentation), so it’s not that big of a problem. Upgrading the plugins in cluster mode, however, can be challenging. There’s a procedure you have to follow if you don’t want to end up in a situation where plugins keep disappearing for no good reason.


Bypass Traverse Checking in vRealize Automation 6.2

This week we ran into an interesting problem during a Federation Enterprise Hybrid Cloud implementation. We had the solution implemented with VMware vRealize Automation 6.2, and everything was running smoothly. The vRA implementation was done as a distributed install, so after configuration we moved on to some vRA component failover testing. We succeeded in failing over the primary component to the secondary component on all of the different VMs (vRA appliance, IaaS Web, IaaS Model Manager + IaaS DEM-O, IaaS DEM-Workers and IaaS DEM-Agents), but failback was not successful. After diving into the component logs, we found a distinctive error in almost all of them:

System.Configuration.ConfigurationErrorsException: Error creating the Web Proxy specified in the 'system.net/defaultProxy' configuration section


OpenStack Juno Lab installation on top of VMware Workstation – Problems

O-oh, here we go, the problems start pouring in. After installing nova in my lab, I noticed that a few services (nova-cert, nova-conductor, nova-consoleauth, nova-scheduler) failed to start after reboot. In fact, if you check them immediately after reboot, they are started, but they fail after a while. From the logs I found this line:

Can't connect to MySQL server on 'controller'

So it seems that our DB is not up and running. Let’s check the db state:

service mysql status

Hmm, it’s up! If you restart the services now, everything works. The problem is that OpenStack doesn’t check whether the database is actually accepting connections; it just issues the start commands and moves on. If the database isn’t ready when the nova services start, they won’t function. Closer inspection of the logs showed a roughly two-second gap between MariaDB still coming up and the nova services trying to connect. If the database simply failed to start, this would be easy to handle by adjusting the connection retries and delays (http://docs.openstack.org/juno/config-reference/content/list-of-compute-config-options.html#config_table_nova_database). Instead, we need to figure out how to delay the nova services by a couple of seconds or make the operating system retry when they fail. A post-boot script that checks the services and restarts any that aren’t running would work, but it’s not neat. After banging my head against the OpenStack wall, I couldn’t find a way to delay the service start or make it retry; none of the delay options from the manual work. Well, a simple script will do the job for now.
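Something like this rough sketch is what I mean (assuming Ubuntu’s service commands and that MariaDB listens on port 3306 on the controller node; adjust the service list to your setup):

#!/bin/bash
# Post-boot workaround: wait until the database accepts TCP connections,
# then restart any nova service that is not running.
SERVICES="nova-cert nova-conductor nova-consoleauth nova-scheduler"

# Wait for MariaDB on the controller to open its port
until nc -z controller 3306; do
    sleep 2
done

for svc in $SERVICES; do
    if ! service "$svc" status 2>/dev/null | grep -q running; then
        service "$svc" restart
    fi
done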

Another thing. DO NOT MESS UP THE HOSTNAMES! I’ve done that twice now, stupid me, and this is what you get:

(screenshot: nova service-list output showing the old hostname’s services duplicated)

Nova still does not understand that a host might change its hostname. If an existing host gets a new hostname, it is treated as a brand new host with new services. I had to clean up my services list with nova service-delete ID. MAC address check, anyone?
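The cleanup itself is just two commands: list the services to find the stale IDs, then delete them one by one.

nova service-list
nova service-delete <ID>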

OpenStack Juno Lab installation on top of VMware Workstation – Prep + Nova + Compute1

I had a busy fall changing jobs, so my OpenStack installation project was put aside. I joined EMC’s Enterprise Hybrid Cloud team as a Senior Solutions Architect to participate in the development of the product. Currently we have the Federation version of EHC GA’d (using EMC and VMware products to deliver a solid foundation for our customers to build their cloud on), but later on there will be an OpenStack version as well. Because of that, OpenStack is even more relevant to me, although my time right now is committed to VMware products. I can’t go into the details of the upcoming OpenStack version, but any hands-on knowledge is important. There will be a lot of automation (we are talking about a cloud, after all!), but that does not remove the need to know how to do things manually.

Since summer, a new OpenStack release, Juno, has come out, so I decided to ditch Icehouse. OpenStack is being developed at the speed of light, so many of the installation issues from previous versions have been fixed. My previous post is still relevant for prepping the VMs if you decide to run your OpenStack lab deployment as VMs on ESXi; follow it to create a template you can reuse for the different OpenStack components. Have a look at the Juno installation manual: fewer steps are required for the base machine now. Also decide at this point whether you are going with Neutron or nova-network (aka legacy networking), as this affects the network settings for the nodes.

The requirements for a minimal installation with CirrOS are quite low, so we can use a base machine with 2 GB of RAM for all the components (the networking node only needs 512 MB). I also noticed that the current Juno installation manual takes note of running OpenStack inside VMs, like we are doing here: the need for promiscuous mode support and disabled MAC address filtering is now documented (hurray!). Note that you only need promiscuous mode enabled and MAC address filtering disabled for the external network! You can follow my previous post on how to do this on ESXi, where promiscuous mode is disabled by default and needs to be changed; MAC address forging detection and filtering are already disabled there, so we can leave those be. For this build, I’m actually using VMware Workstation 9. How you enable promiscuous mode differs depending on whether your underlying OS is Linux or Windows. I’m running Windows 7, so all I need to do is enable promiscuous mode in the vmx files of my VMs. When using Workstation on Windows, promiscuous mode should be enabled by default, but just to make sure and to avoid issues later, let’s edit the vmx file and add this line:

ethernet0.noPromisc = "false"

This enables promiscuous mode for the VM’s first NIC (ethernet0). More vmx tweaking can be found here (http://sanbarrow.com/vmx/vmx-network-advanced.html). If you want to be exact, you should only do this for the NICs that are used for external networks. I had so many issues with this on Icehouse that I’m being paranoid and enabling it for all of my NICs; since this is a lab environment, it doesn’t matter that much.
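In practice that means one line per virtual NIC; assuming three NICs per node, the vmx entries look like this:

ethernet0.noPromisc = "false"
ethernet1.noPromisc = "false"
ethernet2.noPromisc = "false"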

If you are running Linux, take a look here:
https://pubs.vmware.com/workstation-9/index.jsp?topic=%2Fcom.vmware.ws.using.doc%2FGUID-089D2595-26C5-433B-9DA4-D2A94C63B7B5.html

After these steps you can continue installing the OpenStack components using the official Juno installation manual for Ubuntu. I won’t go into every command, because the manual is quite good. There are a few notes, however, that I would like to share. First of all, OpenStack uses MariaDB nowadays. It won’t affect anything, but it was a nice surprise. PostgreSQL is also supported, by the way.

The manual notes that you can enable verbose mode for all of the components. As a learning experience, I strongly recommend that you do so: something WILL go wrong, and chatty logs help when it does. On that note, one major issue I had with the compute node was the hypervisor. KVM requires hardware-assisted virtualization to work. We can enable this in our VM environment (https://communities.vmware.com/docs/DOC-8970), but that won’t save you. I had huge issues with KVM on Icehouse, and switching to QEMU helped a lot. Things might have progressed since, but for now I’m going with QEMU. After I get my setup to work, I will definitely give KVM another go. If you try KVM, make this change to your vmx file:

vcpu.hotadd = "FALSE"
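For reference, the verbose logging and QEMU switches are small config edits; a sketch, assuming the standard Juno file layout on the compute node:

# /etc/nova/nova.conf – chatty logs help when something goes wrong
[DEFAULT]
verbose = True

# /etc/nova/nova-compute.conf – use QEMU instead of KVM
[libvirt]
virt_type = qemu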

That’s it, let’s start typing some commands!

Automation of RecoverPoint Virtual Appliance installation Part 2: Expect

As I mentioned in my previous post, automatic installation of virtual appliances is not a trivial task. In automation projects, we tend to concentrate on basic operational tasks, like automating the creation of a multi-tier vApp. But sometimes we need to bring new kinds of functionality into the environment, like data replication. My previous post showed how we can deploy a virtual appliance from an OVF file using vCenter Orchestrator and ovftool. Now we need to actually implement the appliance. Usually there is a wizard for implementing these appliances, and EMC RecoverPoint is no different. If we can do these configurations from the CLI, then automation is possible. Even better, if there is a set of commands that can be used from Bash, configuration is easy. RecoverPoint, however, does have a CLI, but it is strictly wizard based and you don’t get access to Bash. One workaround for this problem, short of some serious hacking, is Expect. With this tool, we can emulate a user going through the wizard and making choices during the installation. You cannot install Expect on the RecoverPoint appliance itself, so you need a Linux box to use as a configuration server and jump box. I used the same CentOS Linux VM that has my ovftool installed. The installation of Expect is straightforward:

yum install expect

You can use SSH and Expect together: form an SSH session to the RecoverPoint appliance and run Expect from a remote server. The actual Expect code is easy. You simply wait until a particular string appears on the console, for instance “Enter IP address”, and when Expect finds that string, it sends the answer to that question, e.g. “192.168.0.1”. We need to step through the configuration process, record the answers we would give, and turn that into Expect language. Unfortunately I don’t have a VNX system with iSCSI ports in my lab, so I couldn’t finish my code, but the principle of the solution works; you just need the IQNs to integrate. After that we can use the CLI to configure LUNs for RP and start the actual replication of data. When the LUN is protected, we can use vCenter Orchestrator to migrate the selected VMs to protected LUNs and we are done! The necessary files can be found at the end of this post. Have fun!

The Expect script that I called from a vCO workflow looks like this:

#!/usr/bin/expect -f
# vRPA login information
set USER "boxmgmt"
set PASSWORD "boxmgmt"
set IP "192.168.0.179"
#VNX settings
set VNXSN "CKM00112233444"
set VNXNAME "VNX5500"
set SPA "192.168.0.124"
set SPB "192.168.0.125"
set CS "192.168.0.123"
set ISCSI1VNX "192.168.0.156"
set ISCSI2VNX "192.168.0.157"
set ISCSI3VNX "192.168.0.158"
set ISCSI4VNX "192.168.0.159"
set VNXUSER "sysadmin"
set VNXPASSWORD "sysadmin"
#vRPA LAN/MGMT settings
set LANMASK "255.255.255.0"
set LANGW "192.168.0.1"
set LANVIP "192.168.0.180"
set RPA1LAN "192.168.0.181"
set RPA2LAN "192.168.0.182"
#vRPA WAN settings
set WANMASK "255.255.255.0"
set WANGW "192.168.0.1"
set RPA1WAN "192.168.0.183"
set RPA2WAN "192.168.0.184"
#vRPA iSCSI settings
set ISCSIMASK "255.255.255.0"
set ISCSIGW "192.168.0.1"
set RPA1ISCSI1 "192.168.0.185"
set RPA1ISCSI2 "192.168.0.186"
set RPA2ISCSI1 "192.168.0.187"
set RPA2ISCSI2 "192.168.0.188"
#vRPA General settings
set DNS1 "192.168.0.4"
set DNS2 ""
set NTP "192.168.0.4"
set DOMAINNAME "demo.lab"
set CLUSTERNAME "RP"
set NUMBEROFRPAS "1"
set TIMEZONE "+2:00"
set CITY "26"

# SSH to RecoverPoint Appliance and start the Configuration Wizard
spawn ssh $USER@$IP
expect "Password:"
send "$PASSWORD\r"
expect "Do you want to configure a temporary IP address?"
send "n\r"
expect "Enter your selection"
send "1\r"
expect "Enter your selection"
send "1\r"
expect "Are you installing the first RPA in the cluster"
send "y\r"

# Cluster settings
expect "Press ENTER to move to next page"
send "\r"
expect "Enter cluster name"
send "$CLUSTERNAME\r"
expect "Enter the number of RPAs in the cluster"
send "$NUMBEROFRPAS\r"
expect "Enter time zone"
send "$TIMEZONE\r"
expect "Enter your selection"
send "$CITY\r"
expect "Enter primary DNS"
send "$DNS1\r"
expect "Enter secondary DNS"
send "$DNS2\r"
expect "Enter domain name"
send "$DOMAINNAME\r"
expect "Enter addresses of host names of NTP servers"
send "$NTP\r"
expect "Press ENTER to move to next page"
send "\r"

# LAN
expect "Select network interface IP version"
send "1\r"
expect "Enter default IPv4 gateway"
send "$LANGW\r"
expect "Enter interface mask"
send "$LANMASK\r"
expect "Enter RPA 1 IP address"
send "$RPA1LAN\r"
expect "Press ENTER to move to next page"
send "\r"

#WAN
expect "Select network interface IP version"
send "1\r"
expect "Enter interface mask"
send "$WANMASK\r"
expect "Enter RPA 1 IP address"
send "$RPA1WAN\r"
expect "Press ENTER to move to next page"
send "\r"

#iSCSI port 1, eth2
expect "Do you want the RPA to require CHAP"
send "n\r"
expect "Select network interface IP version"
send "1\r"
expect "Enter interface mask"
send "$ISCSIMASK\r"
expect "Enter RPA 1 IP address"
send "$RPA1ISCSI1\r"

#iSCSI port 2, eth3
expect "Select network interface IP version"
send "1\r"
expect "Enter interface mask"
send "$ISCSIMASK\r"
expect "Enter RPA 1 IP address"
send "$RPA1ISCSI2\r"
expect "Press ENTER to move to next page"
send "\r"

#VNX
expect "Enter a name for the storage array"
send "$VNXNAME\r"
expect "Does the storage array require CHAP"
send "n\r"

# VNX iSCSI port 1
expect "Select network interface IP version"
send "1\r"
expect "Enter IP address"
send "$ISCSI1VNX\r"
expect "Enter the iSCSI port number"
send "3260\r"

# VNX iSCSI port 2
expect "Select network interface IP version"
send "1\r"
expect "Enter IP address"
send "$ISCSI2VNX\r"
expect "Enter the iSCSI port number"
send "3260\r"

expect "Do you want to add another iSCSI storage port"
send "n\r"
expect "Do you want to add another storage iSCSI configuration"
send "n\r"
expect "Press ENTER to move to next page"
send "\r"

expect "Do you want to add a gateway"
send "n\r"
expect "Press ENTER to move to next page"
send "\r"
expect "Press enter to continue"
send "\r"

expect "Do you want to apply these configuration settings now"
send "y\r"
interact
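To run it from the jump box, make the script executable and launch it (the filename here is hypothetical):

chmod +x vrpa_install.exp
./vrpa_install.exp

The shebang on the first line takes care of invoking Expect, and interact at the end hands the session back to you once all the answers have been fed in.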

Here are the workflows for ovftool and Expect:

  • Deploy vRPA with ovftool
  • Helper workflow for using ovftool
  • Run Expect on remote host

Prerequisites for the workflows:

  • a Linux VM with SSH enabled
  • ovftool installed
  • expect installed
  • expect script copied to the VM