Confused? I know I am. The container world is moving fast, and VMware is pushing new products left and right. Well, most of them have been around for some time, but not necessarily as finished products. The VMware container strategy is shaping up, so let’s dive into the deep end and see what is going on.
As always, the heart of the VMware world is vSphere. The idea is that you can run both your traditional VM and container workloads on the same infrastructure, utilizing the same server, storage, and network capacity. There are pros and cons to this approach, but the major benefit is that you manage a single platform and can use NSX-v and all the SDN goodies with both VMs and containers without modifications. NSX-T might change this in the future, but right now you are somewhat limited in what you can do with SDN at the container network level.
There is a plethora of different products that tap into the container world. VMware has products for both managing and running Docker containers on top of vSphere. You have several options for building the container infrastructure; you don't necessarily have to use any of these products even when using vSphere as your platform. You can easily just run a Docker host such as CoreOS on vSphere and start deploying containers. Frankly, if you are not going to use the advanced VMware features such as NSX, there might not be enough benefit in running containers inside VMs. The biggest gains come from the integration that VMware has built, so you don't have to manage (and pay for) two separate infrastructures for the two use cases.
The base for containers on vSphere is Photon OS. It's a lightweight Linux OS designed to run Docker containers on vSphere. Most of the other container-related VMware products utilize Photon OS under the hood. It has Docker built in (but it needs to be enabled), and it is a fully VMware-supported Linux distro. You can run Photon as standalone VMs, but there are better ways to manage them. Photon is deployed with an OVA package.
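Enabling Docker on a fresh Photon OS VM is just a matter of starting the bundled daemon; a quick sketch (run as root on the Photon guest):

```shell
# Photon OS ships with Docker installed but the daemon is off by default.
# Start it now and enable it across reboots:
systemctl start docker
systemctl enable docker

# Sanity check that the daemon is up and the client can reach it:
docker info
```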
VMware Integrated Containers (VIC) is the container engine that manages the Docker hosts. This is where the vSphere integration really begins. VIC is used to deploy VMware Container Hosts (VCH) using either the vic-machine CLI tool or the VIC plugin in the HTML5 vSphere Web Client. A VCH is the equivalent of a Docker host, but made for the vSphere environment. The VCH is a vApp (pre v1.3) or a Resource Pool that contains the VCH API endpoint VM (based on Photon OS). It runs the Docker Remote API, so you can use it as a remote host for Docker CLI commands. The supported ways to deploy containers are the Docker client and VMware's Project Admiral. The containers are deployed inside the Resource Pool: you specify the VCH host during deployment, and the VCH handles the necessary Resource Pool configuration to deploy a container. It should be noted that you don't deploy containers inside VMs, you deploy them as VMs. One container, one VM. Since the containers are VMs, they are spread over the hosts in the vSphere cluster, and they benefit from all the normal vSphere features like DRS and HA (if they are persistent). Of course the networks span hosts, so you won't have issues communicating between containers on different hosts. The same goes for isolation: you can use standard NSX functionality for that if you need to go beyond regular vSphere networking.
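Because the VCH endpoint exposes the Docker Remote API, a plain Docker client can target it directly. A sketch, assuming a placeholder endpoint address and the default TLS port (your port and certificate options depend on how the VCH was deployed):

```shell
# Point the standard Docker client at the VCH API endpoint VM.
# 192.0.2.10 and port 2376 are example values for your VCH endpoint.
export DOCKER_HOST=192.0.2.10:2376

# Query the endpoint; with VIC this reports the VCH, not a normal daemon.
docker --tls info

# Deploy a container -- on a VCH this creates a container VM
# in the backing Resource Pool rather than a process on a host.
docker --tls run -d --name web nginx
```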
VIC itself is a standalone appliance VM that is used for VCH deployment; it handles all the configuration needed for the Resource Pool that gets deployed, driven by the vic-machine CLI tool. VIC has a GUI as well, but it can't deploy production-grade VCHs; it's only suitable for testing purposes. With the CLI tool, you can modify the deployment as necessary: insert certificates, apply network configurations, and so on. VIC also has the Project Admiral management interface built in, called the vSphere Integrated Containers Management Portal (catchy name). If you are planning to use vRA as your end-user portal, you can't use the VIC Management Portal, because a VCH should only be attached to a single Project Admiral instance. vRA also has Admiral built in, so you will have to choose which one to utilize. In case you are running an external PSC server with your vCenter, note that you need at least VIC version 1.2.1. This is especially important if you are an EHC customer, since EHC only supports an external PSC.
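To make the vic-machine workflow concrete, here is a sketch of a VCH deployment. All names (vCenter address, cluster, port groups, CNAME) are placeholders; the flags shown are the common ones for wiring up the separate networks and TLS:

```shell
# Deploy a VCH against vCenter. Every value below is an example --
# substitute your own target, credentials, networks, and thumbprint.
./vic-machine-linux create \
    --target vcenter.example.com \
    --user 'administrator@vsphere.local' \
    --name demo-vch \
    --compute-resource Cluster01 \
    --bridge-network vic-bridge \
    --public-network VM-Network \
    --management-network Mgmt-Network \
    --tls-cname vch.example.com \
    --thumbprint 'AA:BB:...'   # vCenter certificate thumbprint
```

The separate `--bridge-network`, `--public-network`, and `--management-network` flags are exactly where the network separation discussed below is configured.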
A VCH has some benefits and also some drawbacks. The greatest benefit, in my opinion, is the network design. You can easily separate the client, public, container, and management networks using vSphere networking for the VCH. While this is all doable in a regular Docker environment, with a VCH you can also use all the NSX features, such as microsegmentation, at the container level instead of the container host level! The integration with Docker networks is not 100% complete, however. For example, you can't create NSX isolated networks directly using the "docker network" command. Containers can use either a bridge network or a separate container network for communication. The VMware approach allows a high level of isolation using the existing tools and products. The isolation also extends to the kernel level, since container VMs, unlike regular containers, do not share a kernel.
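Container networks that were attached to the VCH at creation time appear as regular Docker networks to the client, so attaching a container to one uses the standard `--net` flag. A sketch with placeholder names:

```shell
# Endpoint address and network name below are examples.
export DOCKER_HOST=192.0.2.10:2376

# Container networks exposed by the VCH show up alongside bridge:
docker --tls network ls

# Attach a container VM directly to a routable container network
# (the vNIC lands on the backing port group, bypassing the bridge):
docker --tls run -d --net routable-net nginx
```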
The biggest drawback is that a VCH supports only a subset of the docker/docker-compose commands. Most notably, "docker build" and "docker push" are missing. If you need to use these commands manually during development, you have to use a separate Docker host for that. With the new 1.3 version of VIC, VMware provides a default container image called dch-photon, which can be used to deploy a standard Docker host that supports all the docker commands. The other issue is that containers are deployed as VMs. Although this makes sense from the networking point of view, they are inherently slower to deploy and not as resource-efficient as a Docker host on bare-metal servers would be. Then again, since they are VMs, all the vSphere resource isolation features apply as well.
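The dch-photon workaround can be sketched roughly like this: run the image as a container VM on the VCH, expose its Docker port, and point a client at it for the commands the VCH itself lacks. Addresses, ports, and tags below are examples:

```shell
# Run a full Docker host as a container VM on the VCH (VIC 1.3+).
# The endpoint address and published port are placeholders.
docker -H 192.0.2.10:2376 --tls run -d \
    -p 12375:2375 --name build-host vmware/dch-photon

# Point the client at the dch-photon host for the missing commands:
docker -H 192.0.2.10:12375 build -t myapp:dev .
```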
Project Harbor is another VMware open-source project that can be utilized in a container infrastructure. It is an enterprise-grade private Docker registry for storing and distributing container images. You can install it on basically any Linux host running Docker, but I chose Photon just to keep everything on the same platform. An OVA is also available. Harbor runs 7 different containers to provide the registry service with additional features like replication and vulnerability scanning with Clair. Based on the GitHub stats, Project Harbor seems to be the most popular VMware open-source project.
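From the client side, Harbor behaves like any private Docker registry: log in, tag the image with the registry hostname and project, and push. Hostname and project names below are examples:

```shell
# harbor.example.com and the "library" project are placeholders.
docker login harbor.example.com

# The registry hostname and project path become part of the image tag:
docker tag myapp:dev harbor.example.com/library/myapp:dev
docker push harbor.example.com/library/myapp:dev
```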
CodeStream is VMware's answer to continuous delivery and CI/CD pipelines. It can integrate with Jenkins and vRA, among others, to bring code development pipelines into the VMware realm. You don't have to use just containers; traditional VMs work just as well. You can also call vRA blueprints from the pipeline, which makes this very appealing for vRA customers. CodeStream is baked into the vRA appliance. While this sounds very convenient, CodeStream does not support HA mode in vRA. In practice this means that you cannot enable CodeStream on your production vRA. Instead you need a separate appliance (or two in production), which means a separate GUI for managing CodeStream outside of vRA. You can, however, use your existing vRA and vRO environments as part of the pipeline and deploy VMs or containers into the production environment, or run vRO code.
This barely scratches the surface of these products and their capabilities. You have plenty of options for introducing containers into your environment, but VIC is a simple and fast approach if your main workloads already run on vSphere.