VMware VCP 2020 vSphere 7 Edition Exam Prep Guide (pt.2)

Picking up where we left off, here is Section 2. Once again, this version has been shaken up quite a bit from previous VCP objectives; this section is a bit lighter than Section 1. Let’s dig in.

Section 2 – VMware Products and Solutions

Objective 2.1 – Describe the role of vSphere in the software-defined data center (SDDC)

While I think most are acquainted with what VMware is referring to when they say SDDC or Software-Defined Data Center, let us do a quick refresh for anyone that may not be aware.

VMware’s vision is a data center that is fully virtualized and completely automated. The end goal is for all these different pieces to be delivered as a service. vSphere is one of the main cornerstones and is what makes the rest of this vision possible. What does this look like? Here is a picture (credit to VMware):

The bottom layer is hardware. From there, the next layer is vSphere, which provides software-defined compute and memory. Next is vSAN, which provides software-defined storage, and then NSX, which provides software-defined networking. Cloud Management is the next layer up.

Becoming cloud-like is the goal. Why? Cloud services are mobile, easy to move around as needed, and easy to start up and scale, both up and down. With a self-service portal and cloud-like services, requests that previously took weeks or months to fulfill now take hours or even minutes. Using automation to deliver these services ensures it's done the same way, every time. Automation also makes it easy to track requestors and do appropriate charge-backs. vRealize Operations ensures that you quickly see and are notified when you are low on resources and when to plan for more. Site Recovery Manager and vSphere Replication enable you to continue offering those services even in the case of disaster. But it all begins with vSphere.

Objective 2.2 – Identify use cases for vCloud Foundation

vCloud Foundation is a large portion of that SDDC picture above, but instead of needing to install each piece manually, it gives you an easy install button. This easy button comes in two forms. The first is an appliance called VMware Cloud Builder. This appliance was initially a way to help VMware Professional Services implement VMware Validated Designs. It was released to the general public in January of 2019. The appliance itself can deploy the full SDDC stack, including:

        • VMware ESXi
        • VMware vCenter Server
        • VMware NSX for vSphere
        • VMware vRealize Suite Lifecycle Manager
        • VMware vRealize Operations Manager
        • VMware vRealize Log Insight
        • Content Packs for Log Insight
        • VMware vRealize Automation
        • VMware vRealize Business for Cloud
        • VMware Site Recovery Manager
        • vSphere Replication

The second easy button is an appliance installed in vCloud Foundation called SDDC Manager. This tool automates the entire lifecycle, from bring-up through configuration and provisioning to updates and patching, not only for the initial management cluster but for infrastructure and workload clusters as well. It also makes deploying Kubernetes on vSphere much easier. For VMware vCloud Foundation, the Cloud Builder appliance installs only the following:

        • SDDC Manager
        • VMware vSphere
        • VMware vSAN
        • NSX for vSphere
        • vRealize Suite

Now that we have a better understanding of what vCloud Foundation is, let's talk use cases. VMware has highlighted the main ones here. Those use cases are:

        • Private and Hybrid Cloud
        • Modern Apps (Development)
        • VDI (Virtual Desktop Infrastructure)

It’s an exciting product, and VMware says that it simplifies management and deployment and reduces operational time. If you want to take a look at it, there are free Hands-On Labs VMware has made available here.

Objective 2.3 – Identify migration options

One of the coolest features of vSphere, in my opinion, is the ability to migrate VMs. The first iteration of this arrived in VMware VirtualCenter 1.0 in 2003. Specifically, this was a live migration: a running virtual machine, application and all, could be moved to another host with no interruption. This was amazing for the time, and it's still a fantastic feature today. There are several different types of migrations. They are:

        • Cold Migration – This migration is moving a powered-off or suspended VM to another host.
        • Hot Migration – This migration involves moving a powered-on VM to another host.

Additionally, different sub-types exist depending on what resource you want to migrate. Those are:

        • Compute only – This migrates a VM's compute and memory, but not its storage, to another host.
        • Storage only – This migrates a VM's storage, but not its compute and memory, to another datastore.
        • Both compute and storage – This is just how it sounds: it moves the compute, memory, and storage to a different location.

Previously these migrations were known as a vMotion (compute only), svMotion (storage only), and xvMotion or Enhanced vMotion (both compute and storage). To enable hosts to use this feature, hosts on both sides of the migration must have a VMkernel network adapter enabled for vMotion. Other requirements include:

        • For a compute-only migration, both hosts must be able to access the datastore where the VM's data resides.
        • At least a 1 Gb Ethernet connection
        • Compatible CPUs (or Enhanced vMotion Compatibility mode enabled on the cluster.)

Another type of migration is a cross vCenter migration. This migrates a VM between vCenter Server systems that are connected via Enhanced Linked Mode. The vCenter Servers' times must be synchronized with each other, and both must be at vSphere 6.0 or later. Using cross vCenter Server migration, you can also perform a Long-Distance vSphere vMotion Migration. This type of migration is a vMotion to another geographical area within 150 milliseconds of latency, and there must be a connection speed of at least 250 Mbps per migration.

Now that we have identified the types of migrations, what exactly is vSphere doing to work this magic? When the administrator initiates a compute migration, the following happens (a small conceptual sketch of the pre-copy loop follows the list):

        • A VM is created on the destination host called a “shadow VM.”
        • The source VM's memory is copied over the vMotion network to the destination host's VM. The source VM is still running and being accessed by users during this, potentially updating memory pages.
        • Another copy pass starts to capture those updated memory pages.
        • When almost all the memory has been copied, the source VM is stunned or paused for the final copy and transfer of the device state.
        • A Gratuitous ARP or GARP is sent on the subnet updating the VM’s location, and users begin using the new VM.
        • The source VM’s memory pages are cleaned up.
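
To make the iterative pre-copy idea above more concrete, here is a minimal, purely conceptual Python sketch. The page counts, dirty rate, and stun threshold are invented for illustration; they are not VMware's actual numbers or code.

```python
# Conceptual model of vMotion's iterative memory pre-copy (not VMware code).
# Assumptions: page counts, dirty rate, and stun threshold are made up.

TOTAL_PAGES = 1_000_000   # memory pages of the source VM
DIRTY_RATE = 0.05         # fraction of copied pages the running VM re-dirties per pass
STUN_THRESHOLD = 1_000    # few enough pages left to finish during the brief stun

def precopy(total_pages, dirty_rate, stun_threshold):
    remaining = total_pages
    passes = 0
    while remaining > stun_threshold:
        passes += 1
        copied = remaining
        # While this pass runs, the still-active guest dirties some pages again.
        remaining = int(copied * dirty_rate)
        print(f"pass {passes}: copied {copied} pages, {remaining} re-dirtied")
    # Final pass: VM is stunned, last pages plus device state move, then the GARP goes out.
    print(f"stun: copying final {remaining} pages and device state, switching over")

precopy(TOTAL_PAGES, DIRTY_RATE, STUN_THRESHOLD)
```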

What about a storage vMotion?

        • Initiate the svMotion in the UI.
        • vSphere copies the data using the VMkernel data mover, or it offloads the copy to the array if the array supports vSphere Storage APIs – Array Integration (VAAI).
        • A new VM process is started.
        • While the copy runs, ongoing I/O is split using a "mirror driver" so writes are sent to both the old and new virtual disks.
        • vSphere cuts over to the new VM files.

Migrations are useful for many reasons. Being able to relocate a VM off one host or datastore to another enables sysadmins to perform hardware maintenance, upgrade or update software, and redistribute load for better performance. You can also enable encryption for migrations to be more secure. It's a massive tool in your toolbox.

Objective 2.4 – Identify DR use cases

Many types of disasters can happen in the datacenter. From something small, such as the power outage of a single host, to large-scale natural disasters, VMware tries to cover you with several types of DR protection.

High Availability (HA):

HA works by pooling hosts and VMs into a single resource group. Hosts are monitored, and in the event of a failure, VMs are restarted on another host. When you create an HA cluster, an election is held, and one of the hosts is elected master. All others are subordinates. The master host has the job of keeping track of all the VMs that are protected and of communicating with vCenter Server. It also needs to determine when a host fails and distinguish that from when a host simply loses network access. Hosts communicate with each other over the management network. There are a few requirements for HA to work.

        • All hosts must have a static IP or persistent DHCP reservation
        • All hosts must be able to communicate with each other, sharing a management network

HA has several essential jobs. One is determining the priority and order in which VMs are restarted when an event occurs. HA also has VM and Application Monitoring. The VM Monitoring feature directs HA to restart a VM if it doesn't detect a heartbeat received from VMware Tools. Application Monitoring does the same task with heartbeats from an application. VM Component Protection, or VMCP, allows vSphere to detect datastore accessibility issues and restart the VM if a datastore is unavailable. For exam takers: VMware has been known to refer to HA on exams by the name of its agent, FDM or Fault Domain Manager.

There are several options in HA you can configure. Most defaults will work fine and don’t need to be changed unless you have a specific use case. They are:

  • Proactive HA – This feature receives messages from a provider like Dell's OpenManage Integration plug-in. Based on those messages, HA migrates VMs to a different host due to the possible impending doom of a host. It makes recommendations in Manual mode or moves them automatically in Automatic mode. After the VMs are off the host, you can choose how to remediate the sick host. You can place it in Maintenance mode, which prevents running any future workloads on it. Or you could put it in Quarantine mode, which allows it to run some workloads if performance is affected. Or a mix of those with…. Mixed Mode.
  • Failure Conditions and responses – This is a list of possible host failure scenarios and how you want vSphere to respond to them. This is better and gives you way more control than in past versions (5.x).
  • Admission Control – What good is a feature to restart VMs if you don't have enough resources to do so? Admission Control is the gatekeeper that makes sure you have enough resources to restart your VMs in case of a host failure. You can ensure resource availability in several ways: dedicated failover hosts, cluster resource percentage, slot policy, or you can disable it (not useful unless you have a specific reason). Dedicated failover hosts are dedicated hot spares. They do no work and run no VMs unless there is a host failure, which makes this the most expensive option (other than failure itself). Slot policy takes the largest VM's CPU and the largest VM's memory (these can come from two different VMs) and makes that into a "slot," then determines how many slots your cluster can satisfy. It then looks at how many hosts can fail while still keeping all VMs powered on based on that slot size (a small worked example follows this list). Cluster resource percentage looks at the total resources needed and the total available and tries to keep enough resources free to permit you to lose the number of hosts you specify (subtracting those hosts' resources). You can also override this and set aside a specific percentage. For any of these policies, if the cluster can't guarantee enough resources to restart the existing VMs after a failure, it prevents new VMs from powering on.
  • Heartbeat Datastores – Used to monitor hosts and VMs when the HA network has failed. It determines if the host is still running or if a VM is still running by looking for lock files. This automatically uses at least 2 datastores that all the hosts are connected to. You can specify more or specific datastores to use.
  • Advanced Options – You can use this to set advanced options for the HA Cluster. One might be setting a second gateway to determine host isolation. To use this, you need to set two options.
    1) das.usedefaultisolationaddress and
    2) das.isolationaddress[…]

    The first specifies not to use the default gateway, and the second sets additional addresses.
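
As a small worked example of the slot-policy math described above, here is a rough Python sketch. The host and VM sizes are made up, and real HA slot sizing uses CPU and memory reservations (with defaults) rather than configured VM sizes, so treat this purely as an illustration of the idea.

```python
# A minimal sketch of HA slot-policy math. Numbers are illustrative only.

hosts = [  # per-host available resources (MHz, MB)
    {"cpu": 20000, "mem": 131072},
    {"cpu": 20000, "mem": 131072},
    {"cpu": 20000, "mem": 131072},
]
vms = [  # powered-on VMs (MHz, MB)
    {"cpu": 2000, "mem": 8192},
    {"cpu": 4000, "mem": 4096},
    {"cpu": 1000, "mem": 16384},
]

# Slot size: largest CPU need and largest memory need (can come from different VMs).
slot_cpu = max(vm["cpu"] for vm in vms)
slot_mem = max(vm["mem"] for vm in vms)

# How many slots each host can hold, limited by whichever resource runs out first.
slots_per_host = [min(h["cpu"] // slot_cpu, h["mem"] // slot_mem) for h in hosts]
total_slots = sum(slots_per_host)

# Worst case: remove the biggest hosts first and see if the rest still fit every VM.
tolerated = 0
remaining = sorted(slots_per_host)
while len(remaining) > 1 and sum(remaining[:-1]) >= len(vms):
    remaining = remaining[:-1]   # drop the host contributing the most slots
    tolerated += 1

print(f"slot size: {slot_cpu} MHz / {slot_mem} MB")
print(f"slots per host: {slots_per_host}, total: {total_slots}")
print(f"host failures tolerated: {tolerated}")
```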

There are a few other solutions that touch more on Disaster Recovery.

Fault Tolerance

While HA keeps downtime to a minimum, the VM still needs to power back on, on a different host. If you have a higher-priority VM that can't withstand even a brief outage, Fault Tolerance is the feature you need to enable.

Fault Tolerance or FT creates a second running “shadow” copy of a VM. In the event the primary VM fails, the secondary VM takes over, and vSphere creates a new shadow VM. This feature makes sure there is always a backup VM running on a second, separate host in case of failure. Fault Tolerance has a higher resource cost due to higher resilience; you are running two exact copies of the same VM, after all. There are a few requirements for FT.

        • Each host supports up to 4 FT-protected VMs with no more than 8 FT vCPUs between them
        • VMs can have a maximum of 8 vCPUs and 128 GB of RAM
        • HA is required
        • There needs to be a VMkernel with the Fault Tolerance Logging role enabled
        • If using DRS, EVC mode must be enabled.

Fault Tolerance works essentially by being a vMotion that never ends. It uses a technology called Fast Checkpointing to take checkpoints of the source VM every 10 milliseconds or so and send that data to the shadow VM. This data is sent using a VMkernel port with Fault Tolerance logging enabled. There are two important files behind the scenes: shared.vmft and .ft-generation. The first makes sure the UUID, or identifier, for the VM's disk stays the same. The second is used if you lose connectivity between the two VMs; it determines which VM has the latest data, and that VM is designated the primary when both are back online.

vSphere Replication

Remote-site Disaster Recovery options include vSphere Replication and Site Recovery Manager. You can use vSphere Replication alone, or both in conjunction, to replicate a site or individual VMs in case of failure or disaster. While I'm not going to delve deep into vSphere Replication or SRM, you should know their capabilities and, at a high level, how they work.

vSphere Replication is configured on a per-VM basis. Replication can happen from a primary to a secondary site or from multiple sites to a single target site. It uses a server-client model with appliances on both sides. A VMkernel adapter with the vSphere Replication and vSphere Replication NFC (Network File Copy) roles can be created to isolate replication traffic on its own network.

Once you have your appliances set up and have chosen which VMs you want replicated, you need to figure out what RPO to use. RPO is short for Recovery Point Objective. The RPO is how often you want vSphere Replication to replicate the VM, and it can be as short as 5 minutes or as long as every 24 hours.

Site Recovery Manager uses vSphere Replication but is much more complex and detailed. You can specify runbooks (recovery plans), how to bring the other side up, test your failovers, and more.

The above tools are in addition to VMware's ability to integrate with many third-party backup products.

Objective 2.5 – Describe vSphere integration with VMware Skyline

VMware Skyline is a product available to VMware customers with a current Production or Premier Support contract. What is it? A proactive support service integrated with vSphere, allowing VMware Support to view your environment's configurations and the logs needed to speed up the resolution of a problem.

Skyline does this in a couple of ways. Skyline has a Collector appliance and a Log Assist feature that can upload log files directly to VMware (with the customer's permission). Products supported by Skyline include vSphere, NSX for vSphere, vRealize Operations, and VMware Horizon. If you want to learn even more, visit the datasheet here.

That covers the second section. The next post is coming soon.

VMware VCP 2020 vSphere 7 Edition Exam Prep Guide

Introduction

Hello again. My 2019 VCP Study Guide was well received, so, to help the community further, I decided to embark on another exam study guide with vSphere 7. This guide is exciting for me to write due to the many new things I’ll get to learn myself, and I look forward to learning with everyone.

I am writing this guide pretty much how I talk and teach in real life, with a bit of Grammarly on the back end to make sure I don't go completely off the rails. You may also find the formatting a little weird. This is because I plan on taking this guide and binding it into a single guide at the end of this blog series. I will try to finish a full section per blog post unless it gets too large. I don't have a large attention span for reading huge technical blogs in one sitting, and I find most people learn better with smaller chunks of information at a time. (I wrote this before I saw the first section.)

In these endeavors, I personally always start with the Exam Prep guide. That can be found on VMware's website here. The official code for this exam is 2V0-21.20, and the cost of the exam is $250.00. There are a total of 70 questions with a duration of 130 minutes. The passing score, as always, is 300 on a scale of 100-500. The exam questions are presented in single- and multiple-choice format. You can now take these exams online, in the comfort of your own home. A webcam is required; you need to pan your webcam around the room at the beginning of the session, and it needs to stay on the whole time.

The exam itself focuses on the following topics:

  • Section 1 – Architectures and Technologies
  • Section 2 – Products and Solutions
  • Section 3 – Planning and Designing
  • Section 4 – Installing, Configuring, and Setup
  • Section 5 – Performance-tuning, Optimization, and Upgrades
  • Section 6 – Troubleshooting and Repairing
  • Section 7 – Administrative and Operational Tasks

Each of these topics can be found in the class materials for the Install, Configure, and Manage or Optimize and Scale classes, or in supplemental papers published by VMware on the web. Let's begin with the first topic.

Section 1 – Architectures and Technologies

Objective 1.1 – Identify the pre-requisites and components for a vSphere Implementation

A vSphere implementation or deployment has two main parts: the ESXi server and vCenter Server.

ESXi Server

The first is the virtualization server itself, the ESXi server. The ESXi host is the piece of the solution that allows you to run virtual machines and other components of the solution (such as NSX kernel modules). It provides the compute, memory, and in some cases, storage resources for a company to run on. There are requirements the server needs to meet for ESXi. They are:

  • A supported hardware platform. VMware has a compatibility guide they make available here. If running a production environment, your server should be checked against that.
  • ESXi requires a minimum of two CPU cores.
  • ESXi requires the NX/XD or No Execute bit enabled for the CPU. The NX/XD setting is in the BIOS of a server.
  • ESXi requires a minimum of 4 GB of RAM. You will need more than that to run the workloads a business requires, however.
  • The Intel VT-x or AMD RVI setting in the BIOS must be enabled. Most of the time, this is already enabled on servers, and you won’t need to worry about it.
  • 1+ Gigabit network controller is a requirement. Using the compatibility guide above, make sure your controller is supported.
  • SCSI disk or RAID LUN. Because of their higher reliability, ESXi considers them "local" drives, and you can use them as a "scratch" volume. A scratch partition is a disk partition used by ESXi to host logs, updates, or other temporary files.
  • SATA drives. You can use these, but they are labeled "remote" drives. Because they are labeled "remote," you can't use them for a scratch partition.

vSphere 7.0 can be installed using UEFI BIOS mode or regular old BIOS mode. If using UEFI, you have a wider variety of drives you can boot from. Once you install using one of those modes (UEFI or legacy), it is not advisable to change it afterward. If you do, you may be required to reinstall. The error you might receive is "Not a VMware boot bank."

One significant change in vSphere 7.0 is the system storage requirements. ESXi 7.0 system storage volumes can now occupy up to 138 GB of space. A VMFS datastore is only created if there is an additional 4 GB of space. If one of the "local" disks isn't found, ESXi operates in a degraded mode where the scratch disk is placed on a RAMDISK, that is, entirely in RAM. This is not persistent across reboots of the physical machine, and ESXi displays an unhappy message until you specify a location for the scratch disk.

That being said, you CAN install vSphere 7 on a USB device as small as 8 GB. You should, if at all possible, use a larger flash device. Why? ESXi uses the additional space for an expanded core dump file, and it uses the extra memory cells to prolong the life of the media. So try to use a 32 GB or larger flash device.

With the increased usage of flash media, VMware saw fit to talk about it in the install guide. In this case, it specifically called out using M.2 and other Non-USB low-end flash media. There are many types of flash media available on the market that have different purposes. Mixed-use case, high performance, and more. The use case for the drive should determine the type bought. VMware recommends you don’t use low-end flash media for datastores due to VMs causing a high level of wear quickly, possibly causing the drives to fail prematurely.

While the exam guide doesn't call this out, I thought it would be good to show a picture of how the OS disk layout differs from the previous version of ESXi. You should know that once you upgrade the drive layout from the previous version, you can't roll back.

vCenter Server

The ESXi host has the resources and runs the virtual machines. In anything larger than a few hosts, management becomes an issue. vCenter Server allows you to manage and aggregate all your server hardware and resources. But vCenter Server allows you to do so much more. Using vCenter Server, you can also keep tabs on performance and licensing, and update software. You can also do advanced tasks such as moving virtual machines around your environment. Now that you realize you MUST have one, let's talk about what it is and what you need.

vCenter Server is deployed on an ESXi host, so you have to have one of those running first. It is deployed using its included installer to the ESXi host, not the way you would deploy an OVA. The appliance itself has been updated from previous versions. It now contains the following:

  • Photon OS 3.0 – This is the Linux variant used by VMware
  • vSphere authentication services
  • PostgreSQL (v11.0) – Database software used
  • VMware vSphere Lifecycle Manager Extension
  • VMware vSphere Lifecycle Manager

But wait… didn't there used to be a separate vCenter Server and Platform Services Controller? You are correct. Going forward, for simplicity and a cleaner design, VMware has combined all services into a single VM. So what services are actually on this machine now? I'm glad you asked.

  • Authentication Services – which includes
    • vCenter Single Sign-On
    • vSphere License Service
    • VMware Certificate Authority
  • PostgreSQL
  • vSphere Client – HTML5 client that replaces the previous Flex version (thank God)
  • vSphere ESXi Dump Collector – Support tool that saves active memory of a host to a network server if the host crashes
  • vSphere Auto Deploy – Support tool that can provision ESXi hosts automagically once setup for it is completed
  • VMware vSphere Lifecycle Manager Extension – tool for patch and version management
  • VMware vCenter Lifecycle Manager – a tool to automate the process of provisioning virtual machines and removing them

Now that we have covered the components, let's talk deployment. You can install vCenter Server using either the GUI or the CLI. If using the GUI installer, there are two stages. The first stage deploys the appliance files to the ESXi host. The second stage configures it with the parameters you feed into it. The hardware requirements have changed from the previous version as well. Here is a table showing the changes in green.

Objective 1.2 – Describe vCenter Topology

Topology is a lot simpler to talk about going forward because there is a flat topology. There are no separate vCenter Server and Platform Services Controller roles anymore. Everything is consolidated into one machine. If you are running a previous version and have broken vCenter Server out into those roles, don't despair! VMware has created tools that allow you to consolidate them back. There are a few things to add to that.

First, Enhanced Linked Mode. This is where you can log into one vCenter Server and manage up to 15 total vCenter instances in a single Single Sign-On domain. This is where the flat topology comes in. Enhanced Linked Mode is set up during the installation of vCenter Server. Once you exceed the limits of a vCenter Server, you install a new one and link it. There is also vCenter Server High Availability. Later on in this guide, we cover how it's configured. For now, here is a quick overview of what it is.

vCenter High Availability is a mechanism that protects your vCenter Server against host and hardware failures. It also helps reduce downtime associated with patching your vCenter Server. It does this by using 3 VMs: two full VCSA nodes and a witness node. One VCSA node is active and one passive. They are connected by a vCenter HA network, which is created when you set this up. This network is used to replicate data between the nodes and to provide connectivity to the witness node.

For a quick look at vCenter limits compared to the previous version:

Objective 1.3 – Identify and differentiate storage access protocols for vSphere (NFS, iSCSI, SAN, etc.)

The section I wrote in the previous guide still covers this well, so I am using that.

Local Storage
Local storage is storage connected directly to the server. This includes a Direct Attached Storage (DAS) enclosure that connects to an external SAS card, or storage in the server itself. ESXi supports SCSI, IDE, SATA, USB, SAS, flash, and NVMe devices. You cannot use IDE/ATA or USB devices to store virtual machines; any of the other types can host VMs. The problem with local storage is that the server is a single point of failure or SPOF. If the server fails, no other server can access the VM. There is a unique configuration that allows sharing local storage, however, and that is vSAN. vSAN requires flash drives for cache and either flash or regular spinning disks for capacity. These are aggregated across servers and collected into a single datastore or drive. VMs are duplicated across servers, so if one goes down, access is still retained, and the VM can still be started and accessed.
Network Storage
Network Storage consists of dedicated enclosures that have controllers running a specialized OS. There are several types, but they share some things in common. They use a high-speed network to share the storage, and they allow multiple hosts to read and write to the storage concurrently. You connect to a single LUN through only one protocol, but you can use multiple protocols on a host for different LUNs.

Fibre Channel or FC is a specialized type of network storage. FC uses specific adapters that allow your server to access it, known as Fibre Channel Host Bus Adapters or HBAs. Fibre Channel typically uses fiber-optic cables to transport its signal but occasionally uses copper. Another type of Fibre Channel can connect using a regular LAN; it is known as Fibre Channel over Ethernet or FCoE.

iSCSI is another storage type supported by vSphere. It uses regular Ethernet to transport data. Several types of adapters are available to communicate with the storage device. You can use a hardware iSCSI adapter or a software one. If you use a hardware adapter, the server offloads the SCSI and possibly the network processing. There are dependent hardware and independent hardware adapters. The former still needs to use the ESXi host's networking. Independent hardware adapters can offload both the iSCSI and the network processing. A software iSCSI adapter uses a standard Ethernet adapter, and all the processing takes place in the CPU of the host.

VMware supports a new type of adapter known as iSER or iSCSI Extensions for RDMA. This allows ESXi to use the RDMA protocol instead of TCP/IP to transport iSCSI commands, and it is much faster.

Finally, vSphere also supports the NFS 3 and 4.1 protocols for file-based storage. This type of storage is presented as a share to the host instead of as block-level raw disks. Here is a small table on networked storage for more leisurely perusal.

Technology | Protocol | Transfer | Interface
Fibre Channel | FC / SCSI | Block access | FC HBA
Fibre Channel over Ethernet (FCoE) | FCoE / SCSI | Block access | Converged Network Adapter, or NIC with FCoE support
iSCSI | iSCSI | Block access | iSCSI adapter (dependent or independent), or NIC (software adapter)
NAS | IP / NFS | File level | Network adapter

Objective 1.3.1 – Describe datastore types for vSphere

vSphere supports several different types of datastores. Some of them have features tied to particular versions, which you should know. Here are the types:

  • VMFS – VMFS can be either version 5 or 6. VMFS is the file system installed on a block storage device such as an iSCSI LUN or local storage. You cannot upgrade a datastore from VMFS 5 to 6; you have to create a new datastore and migrate VMs to it. On VMFS, vSphere handles all the locking of files and controls access to them. It is a clustered file system that allows more than one host at a time to access files.
  • NFS – Version 3 and 4.1 are supported. NFS is a NAS file system accessed over a TCP/IP network. You can’t access the same volume using both versions at the same time. Unlike VMFS, the NAS device controls access to the files.
  • vSAN – vSAN aggregates local storage drives on a server into a single datastore accessible by the nodes in the vSAN cluster.
  • vVol – A vVol datastore is a storage container presented by a storage array.

Objective 1.3.2 – Explain the importance of advanced storage configuration (VASA, VAAI, etc.)

This is the first time I’ve seen this covered in an objective. I like that some of the objectives are covering more in-depth material. It’s hard to legitimize the importance of them without describing them and what they do a bit. I will explain what they are and then explain why they are essential.

  • VASA – VASA stands for vSphere APIs for Storage Awareness. VASA is extremely important because hardware storage vendors use it to inform vCenter Server about their capabilities, health, and configurations. VASA is essential for vVols, vSAN, and Storage Policies. Using Storage Policies and VASA, you can specify that VMs need a specific performance profile or configuration, such as RAID type.
  • VAAI – VAAI stands for vSphere APIs for Array Integration. There are two APIs or Application Programming Interfaces, which are:
    • Hardware Acceleration APIs – These let hosts offload some storage operations directly to the array. In turn, this reduces the CPU cycles needed for specific tasks.
    • Array Thin Provisioning APIs – These help monitor space usage on thin-provisioned storage arrays to prevent out-of-space conditions, and they perform space reclamation when data is deleted.
  • PSA – PSA stands for Pluggable Storage Architecture. These APIs allow storage vendors to create and deliver specific multipathing and load-balancing plug-ins that are best optimized for specific storage arrays.

Especially with some of the technology VMware offers (vSAN), these APIs are undoubtedly helpful for sysadmins and your infrastructure. Being able to determine health and properly apply a customer's requirements to a VM is essential for the business.

Objective 1.3.3 – Describe Storage Policies

Storage Policies are a mechanism by which you can assign storage characteristics to a specific VM. Let me explain. Say you have a critical VM, and you want to make sure it sits on a datastore that is backed up every 4 hours. Using Storage Policies, you can assign that requirement to the VM and ensure that the only datastores it can use are ones that satisfy it. Or say you need to limit a VM to a specific performance level; you can do that via Storage Policies too. You can create policies based on the capabilities of your storage array, or you can even create ones using tags (a small sketch of the tag-matching idea follows). To learn even more, you can read about it in VMware's documentation here.
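
To illustrate the tag-matching idea, here is a tiny, hypothetical Python sketch. The tag names and datastores are invented, and real policy evaluation is done by vSphere's Storage Policy Based Management, not code like this.

```python
# Hypothetical sketch: match a VM's required capabilities against datastore tags.

datastores = {
    "gold-ds01":   {"backup-4h", "raid-1", "flash"},
    "silver-ds01": {"backup-24h", "raid-5"},
    "bronze-ds01": {"raid-5"},
}

def compatible_datastores(required_tags, datastores):
    """Return datastores whose tags satisfy every capability the policy requires."""
    return [name for name, tags in datastores.items() if required_tags <= tags]

# Policy for a critical VM: must land on flash storage that is backed up every 4 hours.
critical_policy = {"backup-4h", "flash"}
print(compatible_datastores(critical_policy, datastores))   # ['gold-ds01']
```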

Objective 1.3.4 – Describe basic storage concepts in K8s, vSAN, and vSphere Virtual Volumes (vVols)

K8s
I couldn’t find this in the materials listed, so I went hunting. For anyone wanting to read more about it, I found the info HERE.

vSphere with Kubernetes supports three types of storage.

  • Ephemeral virtual disks – As the name signifies, this storage is very much temporary. This type of virtual disk stores objects such as logs or other temporary data. The disk persists across restarts, but once the pod ceases to exist, so does the disk. Each pod has only one ephemeral disk.
  • Container Image virtual disks – This disk contains the software that is to be run. When the pod is deleted, the virtual disks are detached.
  • Persistent volume virtual disks – Certain K8s workloads require persistent storage to save data independent of the pod. Persistent volume objects are backed by First Class Disks, also called Improved Virtual Disks. A First Class Disk is identified by a UUID, which remains valid even if the disk is relocated or snapshotted.

vSAN
vSAN is converged, software-defined storage that uses local storage on all nodes and aggregates it into a single datastore. This datastore is usable by all machines in the vSAN cluster.

A minimum of 3 hosts is required to form a vSphere cluster enabled for vSAN. Each ESXi host has a minimum of 1 flash cache disk and 1 spinning or flash capacity disk. A maximum of 7 capacity disks can be in a single disk group, and up to 5 disk groups can exist per host.

vSAN is object-based, uses a proprietary VMware protocol to communicate over the network, and uses policies to enable the features needed by VMs. You can use policies to require multiple copies of data, performance limits, or stripe requirements.

vVols
vVols shakes storage up a bit. How so? Typically you would carve storage out into LUNs, and then you would create datastores on them. The storage administrator would be drawn into architectural meetings with the virtualization administrators to decide on storage schemas and layouts. This had to be done in advance, and it was difficult to change later if something different was needed.

Another problem was that management of things such as speed or functionality was controlled at the datastore level. Multiple VMs are stored on the same datastore, and if they required different things, it would be challenging to meet their needs. vVols helps change that. It improves granular control, allowing you to cater storage functionality to the needs of individual VMs.

vVols map virtual disks and their different pieces, such as clones, snapshots, and replicas, directly to objects (virtual volumes) on a storage array. Doing this allows vSphere to offload tasks such as cloning and snapshots to the storage array, freeing up resources on the host. Because you are creating individual volumes for each virtual disk, you can apply policies at a much more granular level, controlling aspects such as performance better.

vVols creates a minimum of three virtual volumes per VM: the data-vVol (virtual disk), the config-vVol (configuration, log, and descriptor files), and the swap-vVol (the swap file created for VM memory pages). It may create more if other features are used, such as snapshots or read cache.

vVols starts with creating a Storage Container on the storage array. The storage container is a pool of raw storage that the array makes available to vSphere. Then you register the storage provider with vSphere. You then create datastores in vCenter and create storage policies for them. Next, you deploy VMs to the vVols, and they send data by way of Protocol Endpoints. The best picture I've seen comes from the Fast Track v7 course by VMware, so I'm going to lift it and use it here.

Objective 1.4 – Differentiate between vSphere Network I/O Control (NIOC) and vSphere Storage I/O Control (SIOC)

NIOC = Network I/O Control
SIOC = Storage I/O Control

Network I/O Control allows you to determine and shape bandwidth for your vSphere networks. It works in conjunction with Network Resource Pools to allow you to determine the bandwidth for specific types of traffic. You enable NIOC on a vSphere Distributed Switch and then set shares according to needs in the configuration of the VDS. This is a feature requiring Enterprise Plus licensing. Here is what it looks like in the UI.

Storage I/O Control allows cluster-wide storage I/O prioritization. You can control the amount of storage I/O allocated to virtual machines so that critical virtual machines get preference over less critical ones. This is accomplished by enabling SIOC on the datastore and setting shares and an upper IOPS limit per VM. SIOC is enabled by default on SDRS clusters. Here is what the screen looks like to enable it.

Objective 1.5 – Describe instant clone architecture and use cases

Instant Clone technology is not new. It was around in the vSphere 6.0 days, but it was initially called VMFork. But what is it? It allows you to create powered-on virtual machines from the running state of another. How? The source VM is stunned for a short period. During this time, a new delta disk is created for each virtual disk, and a checkpoint is created and transferred to the destination virtual machine. Everything is identical to the original VM. So identical, in fact, that you need to customize the virtual hardware to prevent MAC address conflicts, and you must manually edit the guest OS. Instant clones are created using API calls.

Going a little further in-depth, using William Lam's and Duncan Epping's blog posts here and here, we learn that as of vSphere 6.7, we can use vMotion, DRS, and other features on these instant clones. Transparent Page Sharing is used between the source and destination VMs. There are two ways instant clones are created. One is the Running Source VM Workflow, where a delta disk is created on the source VM for each of the destination VMs created. This workflow can cause issues as more clones are created due to the excessive number of delta disks on the source VM. The second is the Frozen Source VM Workflow. This workflow uses a single delta disk on the source VM and a single delta disk on each of the destination VMs, which is much more efficient. If you visit their blogs linked above, you can see diagrams depicting the two workflows.

Use cases (per Duncan) are VDI, Container hosts, Hadoop workers, Dev/Test, and DevOps.

Objective 1.6 – Describe Cluster Concepts

A vSphere cluster is a group of ESXi host machines. When grouped, vSphere aggregates all of the resources of each host and treats it as a single pool. There are several features and capabilities you can only do with clusters.

Objective 1.6.1 – Describe Distributed Resource Scheduler

vSphere’s Distributed Resource Scheduler is a tool used to keep VMs running smoothly. It does this, at a high level, by monitoring the VMs and migrating them to the hosts that allow them to run best. In vSphere 6.x, DRS ran every 5 minutes and concentrated on making sure the hosts were happy and had plenty of free resources. In vSphere 7, DRS runs every 60 seconds and is much more concentrated on VMs and their “happiness.” DRS scores each VM and, based on that, migrates or makes recommendations depending on what DRS is set to do. A bit more in-depth in objective 1.6.3.

Objective 1.6.2 – Describe vSphere Enhanced vMotion Compatibility (EVC)

EVC or Enhanced vMotion Compatibility allows you to take hosts with different processor generations and still combine them and their resources in a cluster. Different generations of processors have different feature sets and options. EVC masks the newer ones so that there is a level feature set. Setting EVC means you might not receive all the benefits of newer processors. Why? A lot of newer processors are more efficient and therefore run at lower clock speeds. If you mask off their newer feature sets (which in some cases are how they are faster), you are left with just the lower clock speeds. Starting with vSphere 6.7, you can enable EVC on a per-VM basis, allowing for migration to different clusters or across clouds. EVC becomes part of the VM itself. To enable per-VM EVC, the VM must be powered off. If cloned, the VM retains the EVC attributes.

Objective 1.6.3 – Describe how Distributed Resource Scheduler (DRS) scores virtual machines

VM "happiness" is the concept that VMs have an ideal or best-case throughput for a resource as well as an actual throughput. If there is no contention or competition on a host for a resource, those two should match, which makes the VM's "happiness" 100%. DRS looks at the hosts in the cluster to determine if another host can provide a better score for the VM; if so, it takes steps to migrate the VM or recommends another host. Several costs are evaluated to see if it makes sense to move it: CPU cost, memory cost, networking cost, and even the migration cost itself. A lower score does not necessarily mean that the VM is running poorly. Why? Some costs taken into account include whether the host can accommodate a burst in that resource. The actual equations (thanks, Niels Hagoort) are below, followed by a small worked example:

  • Goodness (actual throughput) = Demand (ideal throughput) – Cost (loss of throughput)
  • Efficiency = Goodness (actual throughput) / Demand (ideal throughput)
  • Total efficiency = EfficiencyCPU * EfficiencyMemory * EfficiencyNetwork
  • Total efficiency on host = VM DRS score

Keep in mind that the score is not indicative of a health score but an indicator of resource contention. A higher number indicates less resource contention, and the VM is receiving the resources it needs to perform.
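
Here is a small worked example of those formulas in Python. The demand and cost numbers are invented purely to show how the per-resource efficiencies multiply into a VM DRS score.

```python
# Worked example of the VM DRS score formulas above; all numbers are made up.

def efficiency(demand, cost):
    """Goodness = demand - cost; efficiency = goodness / demand."""
    goodness = demand - cost
    return goodness / demand

# Hypothetical per-resource demand (ideal throughput) and cost (lost throughput)
# for one VM on one host.
eff_cpu = efficiency(demand=2000, cost=100)   # MHz
eff_mem = efficiency(demand=8192, cost=256)   # MB
eff_net = efficiency(demand=1000, cost=0)     # Mbps

vm_drs_score = eff_cpu * eff_mem * eff_net
print(f"VM DRS score on this host: {vm_drs_score:.0%}")   # roughly 92%
```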

Objective 1.6.4 – Describe vSphere High Availability

vSphere HA, or High Availability, is a feature designed for VM resilience. Hosts and VMs are monitored, and in the event of a failure, VMs restart on another host.

There are several options to configure. Most defaults work well unless you have a specific use case. Let's go through them:

  • Proactive HA – This feature receives messages from a provider like Dell's OpenManage Integration plug-in and, based on those messages, migrates VMs to a different host due to the impending doom of the original host. It makes recommendations in Manual mode or moves VMs automatically in Automatic mode. After all VMs are off the host, you can choose how to remediate the sick host. You can place it in Maintenance mode, which prevents running any workloads on it. You can also put it in Quarantine mode, which allows it to run some workloads if performance is affected. Or a mix of those with…. Mixed Mode.
  • Failure Conditions and responses – This is a list of possible host failure scenarios and how you want vSphere to respond to them. This is expanded and gives you way more control than in the past.
  • Admission Control – What good is a feature to restart VMs if you don't have enough resources to do so? Not very. Admission Control is the gatekeeper that makes sure you have enough resources to restart your VMs in the case of a host failure. You can ensure this in a few ways: dedicated failover hosts, cluster resource percentage, slot policy, or you can disable it. Dedicated hosts are like a dedicated hot spare in a RAID. They do no work and run no VMs until there is a host failure, making this the most expensive option (other than failure itself). Slot policy takes the largest VM's CPU and the largest VM's memory (these can come from two different VMs) and makes that into a "slot." It then determines how many slots your cluster can satisfy. Next, it looks at how many hosts can fail while keeping all VMs powered on. Cluster resource percentage looks at the total resources needed and the total available and tries to keep enough free to allow you to lose the number of hosts you specify (a small percentage-based sketch follows this list). You can also override this and set a specific percentage to reserve. For any of these policies, if the cluster can't guarantee enough resources to restart the existing VMs, it prevents new VMs from powering on.
  • Datastore for Heartbeating – This is used to monitor hosts and VMs when the HA network has failed. Using datastore heartbeats, HA can determine whether a host or a VM is still running by looking at the lock files. This setting automatically tries to make sure that at least 2 datastores are connected to all the hosts. You can specify more, or specify which datastores to use.
  • Advanced Options – This is where you set advanced options for the HA cluster. One such setting might be a second gateway to determine host isolation. To enable this, you need to set two options: 1) das.usedefaultisolationaddress and 2) das.isolationaddress[…] The first specifies not to use the default gateway, and the second sets additional addresses.
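
To complement the slot-policy example earlier, here is a rough sketch of the cluster resource percentage idea: reserve enough capacity to cover the host failures you want to tolerate. The host sizes are invented for illustration, and HA's actual calculation differs in its details, but the intuition is the same.

```python
# Simplified sketch of percentage-based admission control; numbers are illustrative.

hosts_mhz = [20000, 20000, 20000, 20000]      # per-host CPU capacity
hosts_mb  = [131072, 131072, 131072, 131072]  # per-host memory capacity
failures_to_tolerate = 1

def reserved_percentage(capacities, failures):
    """Percent of total capacity to keep free: the share held by the largest hosts."""
    total = sum(capacities)
    lost = sum(sorted(capacities, reverse=True)[:failures])  # worst case: biggest hosts fail
    return 100 * lost / total

print(f"CPU to reserve:    {reserved_percentage(hosts_mhz, failures_to_tolerate):.0f}%")
print(f"Memory to reserve: {reserved_percentage(hosts_mb, failures_to_tolerate):.0f}%")
# With 4 equal hosts and 1 failure to tolerate, roughly 25% of each resource stays free.
```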

Objective 1.7 – Identify vSphere distributed switch and vSphere standard switch capabilities

VDS and VSS are networking objects in vSphere. VDS stands for Virtual Distributed Switch, and VSS is Virtual Standard Switch.

The Virtual Standard Switch is the default switch. It is what the installer creates when you deploy ESXi. It has only a few features and requires you to configure a switch on every host manually. As you can imagine, this is tedious, and it is difficult to configure them identically every time, which is what you need for VMs to move across hosts seamlessly. (You could create a host profile template to make sure they are the same.)

Standard Switches create a link between physical NICs and virtual NICs. You can name them essentially whatever you want, and you can assign VLAN IDs. You can shape traffic, but only outbound. Here is a picture I lifted from the official documentation for a pictorial representation of a VSS.

VDSs, on the other hand, add a management plane to your networking. Why is this important? It allows you to control all host networking through one UI. Distributed switches require a vCenter Server and a certain level of licensing, Enterprise Plus, unless you buy vSAN licensing. Essentially you are still adding a switch to every host, just a fancier one that can do more things and that you only have to change once to change all hosts.

There are different versions of VDS you can create, based on the vSphere version in which they were introduced. Each newer version adds features; a higher version retains all the features of the lower one and adds to it. Some features include Network I/O Control (NIOC), which allows you to shape your bandwidth, both incoming and outgoing. VDS also includes a rollback ability, so if you make a change and the switch loses connectivity, it reverts the change automatically.

Here is a screenshot of me making a new VDS and some of the features that each version adds:

Here is a small table showing the differences between the switches.

Feature | vSphere Standard Switch | vSphere Distributed Switch
VLAN Segmentation | Yes | Yes
802.1q tagging | Yes | Yes
NIC Teaming | Yes | Yes
Outbound traffic shaping | Yes | Yes
Inbound traffic shaping | No | Yes
VM port blocking | No | Yes
Private VLANs | No | Yes (3 types – Promiscuous, Community, Isolated)
Load Based Teaming | No | Yes
Network vMotion | No | Yes
NetFlow | No | Yes
Port Mirroring | No | Yes
LACP support | No | Yes
Backup and restore network configuration | No | Yes
Link Layer Discovery Protocol | No | Yes
NIOC | No | Yes

Objective 1.7.1 – Describe VMkernel Networking

VMkernel adapters are set up on the host so the host itself can interact with the network. Management and other host functions are handled by VMkernel adapters. Specifically, the roles are:

  • Management traffic – Enabling this on a VMkernel adapter (by selecting the checkbox) carries configuration and management communication for the host, vCenter Server, and HA traffic. When ESXi is first installed, a VMkernel adapter is created with the management role selected. You should have more than one VMkernel adapter carrying management traffic for redundancy.
  • vMotion traffic – Selecting this enables you to migrate VMs from one host to another. Both hosts must have vMotion enabled. You can use multiple physical NICs for faster migrations. Be aware that vMotion traffic is not encrypted – separate this network for greater security.
  • Provisioning traffic – This is used to separate VM cold migration, cloning, and snapshot migration traffic. A use case could be VDI, or simply using a slower network to keep live vMotions separate and not slowed down by migrations that don't need the performance.
  • IP Storage and discovery – This is not a selection box when you create a VMkernel, but still an important role. This role allows you to connect to ISCSI and NFS storage. You can use multiple physical NICs and “bind” each to a single VMkernel. This enables multipathing for additional throughput and redundancy.
  • Fault Tolerance traffic – One of the features you can enable, Fault Tolerance, allows you to create a second mirror copy of a VM. To keep both machines precisely the same requires a lot of network traffic. This role must be enabled and is used for that traffic.
  • vSphere Replication traffic – As it sounds like, this role handles the replication traffic sent to a vSphere Replication server.
  • vSAN traffic – If you have a vSAN cluster, every host that participates must have a vSAN VMkernel to handle and separate the large amount of traffic needed for vSAN. Movement of objects and retrieval requires a large amount of network bandwidth, so it would be best to have this on as fast of a connection as you can. vSAN does support multiple VMkernels for vSAN but not on the same subnet.

Objective 1.7.2 – Manage networking on multiple hosts with vSphere distributed switch

You should have a decent idea now of what a vSphere distributed switch is and what it can do. The next part is to show you what the pieces are and describe how to use them.

First, you need to create the vSphere distributed switch. Go to the networking tab by clicking on the globe in the HTML5 client. Then right-click on the datacenter and select Distributed Switch > New Distributed Switch

You must now give the switch a name – you should make it descriptive, so it’s easy to know what it does

Choose the version corresponding to the features you want to use.

You need to tell VMware how many uplinks per host you want to use. This is the number of physical NICs that are used by this switch. Also, select if you want to enable Network I/O Control and if you want vSphere to create a default port group for you – if so, give it a name.

Finish the wizard.

You can now look at a quick topology of the switch by clicking on the switch, then Configure and Topology.

After creating the vSphere distributed switch, hosts must be associated with it to use it. To do that, you can right-click on the vSphere distributed switch and click on Add and Manage Hosts.

You now have a screen that has the following options: Add Hosts, Manage host networking, and Remove hosts.

Since your switch is new, you need to Add hosts. Select that and on the next screen, click on New Hosts.

Select the hosts that you want to be attached to this switch and click OK and then Next again.

Now assign the physical NICs to an uplink and click Next

You can now move any VMkernel adapters over to this vSphere distributed switch if desired.

Same with VM networking

You can now complete the wizard. And of course, you'll notice you can make changes to all the hosts during the same process. This is one part of what makes vSphere distributed switches great. If you prefer automation, a rough pyVmomi sketch of the same task follows.
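
If you would rather script this than click through the wizard, here is a rough pyVmomi sketch of creating a distributed switch. The vCenter address, credentials, datacenter name, switch name, and uplink names are all placeholders, and I have kept it minimal (no task waiting or error handling), so treat it as a starting point rather than a verified script.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Lab-only connection; use proper certificates in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Find the datacenter whose network folder will hold the new switch (name is a placeholder).
datacenter = next(dc for dc in content.rootFolder.childEntity
                  if isinstance(dc, vim.Datacenter) and dc.name == "Datacenter")

# Build the create spec: switch name and uplink port names.
config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
config.name = "vds01"
config.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["uplink1", "uplink2"])
spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=config)

# Kick off the creation task. A real script would wait on the task, then add
# hosts to the switch and migrate VMkernel and VM networking, as in the wizard.
task = datacenter.networkFolder.CreateDVS_Task(spec)
Disconnect(si)
```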

Objective 1.7.3 – Describe Networking Policies

Networking policies are rules for how you want your virtual switches, both standard and distributed, to work. Several policies can be configured on your switches. They apply at the switch level. If needed, however, you CAN override them at the port group level. Here is a bit of information on them:

Virtual Standard Switch Policies:

vSphere Distributed Switch Policies:

  • Traffic Shaping – This is different depending on which switch you are using. Standard switches can only do Egress (outgoing), and vSphere distributed switches can do ingress as well. You can establish an average bandwidth over time, peak bandwidth in bursts, and burst size.
  • Teaming and Failover – This setting enables you to use more than one physical NIC to create a team. You then select load-balancing algorithms and what should happen in the case of a NIC failure.
  • Security – Most home-labbers know this setting due to needing to set Promiscuous Mode to allow nested VMs to talk externally. Promiscuous Mode rejects or allows network frames to the VM. MAC Address Changes will either reject or allow MAC addresses different from the one assigned to the VM. Forged Transmits drops outbound frames from a VM with a MAC address different from the one specified for the VM in the .vmx configuration file.
  • VLAN – Enables you to specify a VLAN type (VLAN, VLAN trunking, or Private VLAN) and assign a value.
  • Monitoring – Using this, you can turn on NetFlow monitoring.
  • Traffic Filtering and marking – This policy lets you protect the network from unwanted traffic and apply tags to delineate types of traffic.
  • Port Blocking – This allows you to block ports from sending or receiving data selectively.

Objective 1.7.4 – Manage Network I/O Control on a vSphere distributed switch

One of the features you can take advantage of on a vSphere distributed switch is NIOC, or Network I/O Control. Why is this important? Using NIOC, you control your network traffic. You set shares or priorities for specific types of traffic, and you can also set reservations and hard limits. To get to it, select the vSphere distributed switch and then, in the center pane, Configure, then Resource Allocation. Here is a picture of NIOC:

If you edit one of the data types, this is the box for that.

There are several settings to go through here. Let’s discuss them.

  • Shares – This is the weight you assign to a type of network traffic, used when there is congestion. You can assign Low, Normal, High, or Custom. Low = 25, Normal = 50, and High = 100 shares. Custom can be any number you want from 1-100. Shares do not equal a percentage; in other words, the total doesn't add up to 100%. If you have one traffic type with Normal shares of 50 and another with 100, the one with 100 will receive twice as much bandwidth as the one with 50. Again, this only comes into play when there is network congestion (see the small sketch after this list).
  • Reservation – This is bandwidth that vSphere guarantees to this type of traffic. If not needed, this bandwidth becomes available to other types of system traffic (not VM traffic). A maximum of 75% of the total bandwidth can be reserved.
  • Limit – The maximum bandwidth allowed for that type of traffic. Even if the system has plenty of spare bandwidth, the limit will not be exceeded.
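
Here is a small sketch of how share values divide an uplink under congestion. The traffic types and the 10 Gbit/s uplink are just illustrative numbers.

```python
# Illustration of NIOC-style shares splitting bandwidth during congestion.

uplink_gbps = 10.0
shares = {           # share values as described above
    "management": 50,
    "vmotion":    50,
    "vsan":      100,
    "vm":        100,
}

total_shares = sum(shares.values())
# Under congestion, each traffic type gets bandwidth proportional to its shares.
for traffic, share in shares.items():
    portion = uplink_gbps * share / total_shares
    print(f"{traffic:>10}: {portion:.2f} Gbit/s")
# vSAN and VM traffic (100 shares each) get twice what management and vMotion (50) get.
```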

You can also set up a custom type of traffic with the Network Resource Pool.

Objective 1.8 – Describe vSphere Lifecycle Manager concepts (baselines, cluster images, etc.)

Managing a large number of servers gets difficult and cumbersome quickly. In previous versions of vSphere, there was a tool called VUM, or vSphere Update Manager. VUM was able to do a limited number of things for us. It could upgrade and patch hosts, install and update third-party software on hosts, and upgrade virtual machine hardware and VMware Tools. This was useful but left a few important things out, such as hardware firmware and maintaining a baseline image for cluster hosts. Well, fret no more! Starting with vSphere 7, a new tool called Lifecycle Manager was introduced. Here are some of the things you can do:

  • Check the hardware of hosts against the VMware Compatibility Guide and the vSAN Hardware Compatibility List
  • Install a single ESXi image on all hosts in a cluster
  • Update the firmware of all ESXi hosts in a cluster
  • Update and Upgrade all ESXi hosts in a cluster together

Just as with VUM, you can download updates and patches from the internet, or you can download them manually for dark sites. Keep in mind that to use some of these features, you need to be running vSphere 7 on your hosts. Here is a primer for those who are new to this or those needing a refresher.

Baseline – this is a group of patches, extensions, or an upgrade. There are 3 default baselines in Lifecycle Manager: Host Security Patches, Critical Host Patches, and Non-Critical Host Patches. You cannot edit or delete these. You can create your own.

Baseline Group – a collection of non-conflicting baselines. For example, you can combine Host Security Patches, Critical Host Patches, and Non-Critical Host Patches into a single baseline group. You then attach this to an inventory object, such as a cluster or a host, and check the object for compliance. If it isn't in compliance, remediation installs the updates. If the host can't be rebooted right away, staging loads the software onto it first and waits to install until a time of your choosing.
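
If you like to script the attach/scan/check cycle, the Update Manager PowerCLI cmdlets cover it. A hedged sketch, assuming a cluster named Cluster01 and the predefined patch baselines:

# Grab the predefined patch baselines and the target cluster (names are examples)
$baselines = Get-Baseline -Name 'Critical Host Patches (Predefined)', 'Non-Critical Host Patches (Predefined)'
$cluster   = Get-Cluster -Name 'Cluster01'
# Attach the baselines and scan the cluster against them
Attach-Baseline -Baseline $baselines -Entity $cluster
Test-Compliance -Entity $cluster
# Review what is and is not compliant
Get-Compliance -Entity $cluster -Detailed

When you are ready, remediation itself can be kicked off with the Update-Entity cmdlet.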

In vSphere 7, there are now Cluster baseline images. You set up an image and use that as the baseline for all ESXi 7.0 hosts in a cluster. Here is what that looks like:

In the image, you can see that you load an image of ESXi (the .zip depot file, not the ISO), and you can add a vendor add-on as well as firmware and drivers. Components allow you to load individual VIBs (vSphere Installation Bundles) for hardware or features.

From the above, you can deduce that the new Lifecycle Manager will be a great help in managing the host’s software and hardware.

Objective 1.9 – Describe the basics of vSAN as primary storage

vSAN is VMware’s in-kernel software-defined storage solution that aggregates the local storage of cluster hosts into a single distributed datastore used by the cluster nodes. vSAN requires a cluster and hardware that has been approved and is on the vSAN hardware compatibility guide. vSAN is object-based; when you provision a VM, its pieces are broken down into specific objects. They are:

  • VM Home namespace – stores configuration files such as the .vmx file.
  • VMDK – virtual disk
  • VM Swap – this is the swap file created when the VM is powered on
  • VM memory – this is the VM’s memory state if the VM is suspended or has snapshots taken with preserve memory option
  • Snapshot Delta – Created if a snapshot is taken

VMs are assigned storage policies that are rules applied to the VM. Policies can be availability, performance, or other storage characteristics that need to be assigned to the VM.

A vSAN cluster can be a “Hybrid” or “All-Flash” cluster. A hybrid cluster is made up of flash drives and rotational disks, whereas an all-flash cluster consists of just flash drives. Each host, or node, contributes at least one disk group to storage. Each disk group consists of 1 flash cache drive, and 1-7 capacity drives, rotational or flash. A total of 5 disk groups can reside on a node for a total of 40 disks. The cache disk on a hybrid cluster is used for read caching and write buffering (70% read, 30% write.) On an all-flash cluster, the cache disk is just for write buffering (up to 600GB.)

vSAN clusters are limited by the vSphere maximum of 64 nodes per cluster but typically use a maximum of 32. You can scale up, out, or back, and vSAN supports RAID 1, 5, and 6. Different VMs can have different policies and different storage characteristics while using the same datastore.

Objective 1.9.1 – Identify basic vSAN requirements (networking, disk count, type)

We went over a few of them above but let’s list vSAN’s requirements entirely.

  • 1 flash drive for cache per disk group – can be SAS, SATA, or PCIe
  • 1-7 capacity drives per disk group – can be SAS, SATA, or PCIe flash (or rotational disks in a hybrid cluster)
  • 1 GbE NIC for hybrid, or 10 GbE or faster for all-flash clusters, with a VMkernel port tagged for vSAN
  • SAS / SATA / NVMe controller – must be able to work in pass-through or RAID 0 mode (per disk) to allow vSAN to control it
  • IPv4 or IPv6, with unicast supported
  • Minimum of 32 GB RAM per host to accommodate the maximum of 5 disk groups and 7 disks per disk group

Although you typically need a minimum of 3 nodes for a vSAN cluster, 4 is better for N+1 and to account for maintenance. 2-node clusters also exist for smaller Remote Office/Branch Office (ROBO) installations.
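
As a quick check, you can confirm from PowerCLI that every host in the cluster has a vSAN-tagged VMkernel port, and tag a new one if needed. A rough sketch, with made-up host, switch, port group, and address values:

# Which VMkernel adapters are carrying vSAN traffic?
Get-VMHost -Location 'Cluster01' | Get-VMHostNetworkAdapter -VMKernel | Select-Object VMHost, Name, IP, VsanTrafficEnabled
# Create a new VMkernel port tagged for vSAN on one host
New-VMHostNetworkAdapter -VMHost 'esx01.lab.local' -VirtualSwitch 'vSwitch0' -PortGroup 'vsan-vmk' -IP '192.168.50.11' -SubnetMask '255.255.255.0' -VsanTrafficEnabled:$true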

Objective 1.10 – Describe vSphere Trust Authority Architecture

Starting with vSphere 6.7, VMware introduced support for the Trusted Platform Module, or TPM 2.0, and the host attestation model. A TPM is a small device that can be installed in servers to serve as a cryptographic processor and generate keys. It can also store material such as keys, certificates, and signatures. TPMs are tied to specific hardware (hence the security part), so you can’t buy a used one off eBay to install in your server. The final feature of TPMs is the one we use here: determining whether a system’s integrity is intact. It does this through an act called attestation. Using UEFI and the TPM, it can determine whether a server booted with authentic software.

Well, that’s all great, but vSphere 6.7 was view-only; there were no penalties or repercussions if the software wasn’t authentic. What’s changed?

Now, introduced in vSphere 7, we have vSphere Trust Authority. This reminds me of Microsoft’s version of the idea, Hyper-V Shielded VMs. Essentially, you create a hyper-secure cluster running the Host Guardian Service, and then you have one or more guarded hosts running shielded VMs. vSphere Trust Authority is essentially the same concept.

You create a vSphere Trust Authority cluster, which is best run as a completely separate management cluster apart from your regular hosts; to get started, though, it can use an existing management cluster. These hosts won’t be running any normal workload VMs, so they can be small machines. Once established, it has two tasks to perform:

  • Distribution of encryption keys from the KMS (taking over this task for the vCenter server)
  • Attestation of other hosts

If a host now fails attestation, the vTA will withhold keys from it, preventing secure VMs from running on that host until it passes attestation. Thanks to Bob Plankers’ blog here for explaining it.

Objective 1.11 – Explain Software Guard Extensions (SGX)

Intel’s Software Guard Extensions or SGX were created to meet the needs of the trusted computing industry. How so? SGX is a security extension on some modern CPUs. SGX allows software to create private memory regions called enclaves. The data in enclaves is only able to be accessed by the intended program and is isolated from everything else. Typically this is used for blockchain and secure remote computing.

vSphere 7 now has a feature called vSGX or virtual SGX. This feature allows the VMs to access Intel’s technology if it’s available. You can enable it for a VM through the HTML5 web client. For obvious reasons (can’t access the memory), you can’t use this feature with some of vSphere’s other features such as vMotion, suspend and resume, or snapshots (unless you don’t snapshot memory).

That ends the first section. Next up, we will go over VMware Products and Solutions, which is a lot lighter than this one was. Seriously my fingers hurt.

Creating a VI (Virtual Infrastructure) Cluster in VCF 4.0.1.1

I originally wanted to learn more about VMware Cloud Foundations but never had the time to. I recently (ahem COVID) found extra time to try new things and learn with my home lab. For the setup, I used the VMware Lab Constructor (downloaded here) to create VCF. After deployment, I then updated it to the latest version (currently 4.0.1.1). This is all running on a single Dell server (PowerEdge XR2) in a nested environment. I don’t believe that has too much bearing on how things are run overall, though it does make it simpler not having to deal with “real” hardware. While not being officially supported by VMware, the VMware Lab Constructor is slick enough to where maybe it should be.

VMware positions VCF as making your environment operationally simpler. While still not point and click, VCF does help immensely with setting up a VMware Validated Design for a datacenter. Using VLC made creating my VCF environment even simpler. Normally you would architect how you want it to be set up (networks, VLANs, NSX, etc.) and then use the Cloud Builder appliance to “bring up” or deploy the infrastructure. Pre-work includes adding all parameters to an XLS spreadsheet, generating a JSON file, and then using that to fuel the Cloud Builder appliance. VLC simplifies this by already assigning most of these parameters. Your work isn’t much more than inputting license keys into the JSON file and pointing the PowerShell script to where the packages and JSON files are.

I had a few hiccups deploying my environment. Specifically, it was choosy about the storage used. Either way, I got it running, and running well considering it is a nested environment. The hardware backing it is 2x Xeon Gold 6138 20c/40t processors, 384GB RAM, a 2TB NVMe drive, 5TB of Synology SSD storage, and 10Gbe networking. I found myself wanting to create additional clusters for infrastructure and more. To do so, apparently, I needed a cluster image. I tried to just set one up from the mgmt. cluster but received an error I couldn’t seem to decipher. The error on the SDDC Manager side was the following:

And if I wanted to import one, I needed all the following:

Seems like a lot to do. Extracting a cluster image would be a lot simpler, but it appears you can’t do that unless one has already been set up previously. This guide assumes you are on VCF 4.0+ and running VMware ESXi 7.0 (since this is when the feature became available).

First, you need to download the .zip deployment image of ESXi you want to use. The one I am using is VMware-ESXi-7.0b-16324942-depot.zip. You cannot use ISO images for a baseline image.

Go to the Lifecycle Manager and click on Actions. Select either Sync Updates or if you are using the .zip file, Import Updates, and select the .zip you have downloaded.

You should now see the image show up under ESXi Versions. You can now add any vendor add-ons or extra components to it.

Create a blank cluster in vCenter. Name it something such as NewCluster, and at the bottom there will be a checkbox to Manage all hosts in the cluster with a single image. Click on that and select the image you wish to use.

Next, you need to export the cluster image specifications and components. To do this, select the new cluster, and click on the updates tab. Then click the ellipsis under Image and choose Export.

You need to perform these steps three times: once for the JSON file, once for the ISO file, and once for the .zip file format. Do not rename these files.

Now we need to download the cluster settings JSON file. From the Menu in the HTML5 Web Client, select Developer Center and then select API Explorer tab

Expand the cluster section, then expand the GET /rest/vcenter/cluster and scroll down a bit to click Execute

There will be a Response that appears. Click on the vcenter.cluster.list_resp and then click on the cluster you created.

Copy that cluster ID (in this case, domain-c6003), then go to the Select API drop-down and change it to esx.

Scroll down to /settings/cluster/software and then expand the GET for /api/esx/settings/cluster/{cluster}/software and enter in the value of the cluster from before.

Click on Execute and then Download. The response-body.json file is downloaded to your local computer. These are all the files you need.
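
If clicking through the API Explorer gets old, the same two calls described above can be scripted. A hedged PowerShell sketch (PowerShell 7 for -SkipCertificateCheck; the vCenter name is made up and the cluster ID is the one from my lab, so yours will differ):

$vc   = 'vcsa.lab.local'                                   # hypothetical vCenter FQDN
$cred = Get-Credential
$pair = '{0}:{1}' -f $cred.UserName, $cred.GetNetworkCredential().Password
$auth = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair)) }
# Create an API session and reuse the returned token
$token = (Invoke-RestMethod -Method Post -Uri "https://$vc/rest/com/vmware/cis/session" -Headers $auth -SkipCertificateCheck).value
$hdr   = @{ 'vmware-api-session-id' = $token }
# List clusters to find the cluster ID (e.g. domain-c6003)
Invoke-RestMethod -Uri "https://$vc/rest/vcenter/cluster" -Headers $hdr -SkipCertificateCheck
# Pull the cluster software spec and save it as response-body.json
Invoke-RestMethod -Uri "https://$vc/api/esx/settings/cluster/domain-c6003/software" -Headers $hdr -SkipCertificateCheck | ConvertTo-Json -Depth 10 | Out-File 'response-body.json'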

Return to the SDDC-Manager and return to Image Management

Input the correct file with the correct prompt.

Cluster Settings = the response-body.json file you downloaded from the API Explorer
Software Spec = the SOFTWARE_SPEC_xxx.json file you exported
Zip = the Offline_Bundle_xxxx.zip file you exported
ISO = the ISO file you exported

Click Upload Image Components. Grab a beverage of choice as this might take a while. When done you will get a message on the bottom telling you the files are uploaded.

You will also notice an entry under Available Images now.

This time when you create the VI cluster you see something a little different.

And there you go…

VMware Cloud Foundations 4.0.1: Problems with SDDC Manager refreshing

I’ve been doing some studying on VMware Cloud Foundations 4.0.1 and have it running in my lab. I’ve noticed it seems a bit finicky at times. One of the issues I’ve run into so far: after I added 3 more hosts, everything seemed to be fine. I then wanted to add a third NIC to the hosts in order to access iSCSI storage on them. When I created the NIC, though (while the nested host was powered on), it locked up my physical host, and I ended up needing to reboot it. Not nice…

Anyways I got that sorted. The next issue I ended up with was SDDC manager didn’t want to refresh or connect to vCenter since it wasn’t shut down properly. I started doing some research and HUGE shoutout to vSAM.pro for figuring out what was going on. I ended up having to do a bit more though. So here is what I ended up doing (and his blogs made much more sense after I figured it out weirdly enough 🙂 )

The Issue:

First, I checked the logs. These logs are located at

/var/log/vmware/vcf/

Underneath, there are a bunch of service folders with logs in them. In this particular case, I checked through most of the logs and found the following issue:

root@sddc-manager [ /var/log/vmware/vcf/operationsmanager ]# tail operationsmanager.log

2020-08-03T22:57:50.268+0000 INFO [0000000000000000,0000] [liquibase.executor.jvm.JdbcExecutor,main] SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-08-03T22:57:50.270+0000 INFO [0000000000000000,0000] [l.lockservice.StandardLockService,main] Waiting for changelog lock….
2020-08-03T22:58:00.270+0000 INFO [0000000000000000,0000] [liquibase.executor.jvm.JdbcExecutor,main] SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-08-03T22:58:00.273+0000 INFO [0000000000000000,0000] [l.lockservice.StandardLockService,main] Waiting for changelog lock….
2020-08-03T22:58:10.273+0000 INFO [0000000000000000,0000] [liquibase.executor.jvm.JdbcExecutor,main] SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-08-03T22:58:10.276+0000 INFO [0000000000000000,0000] [l.lockservice.StandardLockService,main] Waiting for changelog lock….
2020-08-03T22:58:20.277+0000 INFO [0000000000000000,0000] [liquibase.executor.jvm.JdbcExecutor,main] SELECT LOCKED FROM public.databasechangeloglock WHERE ID=
2020-08-03T22:58:20.278+0000 INFO [0000000000000000,0000] [l.lockservice.StandardLockService,main] Waiting for changelog lock….
2020-08-03T22:58:30.279+0000 INFO [0000000000000000,0000] [liquibase.executor.jvm.JdbcExecutor,main] SELECT LOCKED FROM public.databasechangeloglock WHERE ID=1
2020-08-03T22:58:30.281+0000 INFO [0000000000000000,0000] [l.lockservice.StandardLockService,main] Waiting for changelog lock….

It looked like there was an issue with a locked file somewhere.

In order to check the database, you need to go to the DB command line

psql --host=localhost -U postgres

Then you need to change to the database that is locked. You can list the DBs by typing

\l (That is an L)

To change databases – type the following

\c [database_name] without the brackets

This will allow you to run the command

\dt

Which shows the tables for that DB. In this case, for the Operations Manager database, it looked like this:


Next you need to see if there is something in that table so type the following:

select * from databasechangeloglock;

And it will return the following:


You can now clear the lock by typing the following:

delete from databasechangeloglock;

That will clear the lock. Give SDDC Manager a few minutes, and it should start working again.


VCP 2019 Study Guide Section 7 (Final Section)

Section 7 – Administrative and Operational Tasks in a VMware vSphere Solution

Objective 7.1 – Manage virtual networking

I’ve gone over virtual networking a bit already, but there are two basic types of switches to manage in vSphere: Virtual Standard Switches and Virtual Distributed Switches. They both have the same components: VM port groups, VMkernel ports, and uplink ports. Here is a diagram depicting how it might look on a host.

VMkernel ports are used for host management and other infrastructure traffic. When you set one up, you can choose to use it for the following purposes:

  • vMotion – this is used to migrate VMs
  • Provisioning – used for VM cold migration, cloning, and snapshot migration.
  • Fault Tolerance logging – enables Fault Tolerance logging on the host (you can only have one per host)
  • Management – management communication between hosts (should have minimum of two for redundancy)
  • vSphere Replication – Handles outgoing replication data sent to the vSphere Replication Server
  • vSphere Replication NFC – Handles incoming replication data on the target replication site.
  • vSAN – allows for vSAN traffic, every host that is part of a vSAN cluster must have one.

VM port groups are for VM network traffic. Each VM has a virtual NIC that is part of a VM port group.

Uplink ports are connected to physical NICs. A Virtual Distributed Switch has an uplink port group that contains physical NICs from multiple hosts.

You can manage your networking from a few locations. In the vSphere HTML5 client, you manage a host’s networking from Host > Configure > Networking, shown here. You can also manage a single host directly from its own host client.

You can then manage the components as needed. If you need to manage a Virtual Distributed Switch, you can do that there as well, or you can create a VDS from the Networking tab in the navigation pane.
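
For reference, creating a distributed switch and a port group can also be scripted. A quick PowerCLI sketch with made-up datacenter, cluster, switch, and port group names:

# Create a new VDS in the datacenter with two uplinks and jumbo frames
$dc  = Get-Datacenter -Name 'Datacenter'
$vds = New-VDSwitch -Name 'DSwitch01' -Location $dc -NumUplinkPorts 2 -Mtu 9000
# Add every host in the cluster to the new switch, then create a VM port group on VLAN 100
Add-VDSwitchVMHost -VDSwitch $vds -VMHost (Get-VMHost -Location 'Cluster01')
New-VDPortgroup -VDSwitch $vds -Name 'VM-Network-100' -VlanId 100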

You can configure shares and other settings here as well as you can see. You can find more info here if needed.

You can also manage the virtual networking of an individual VM. Right-click on the VM and select Edit Settings; from there you can change the network adapter type and which virtual network the VM is connected to.

You can also migrate multiple VMs to another network if you go to the network tab in the navigation pane. Clicking the following will pop up a wizard.

In the wizard you select the destination network.

Then you select all the VMs you want to migrate.

Then you complete it.

Objective 7.2 – Manage datastores

Datastores are logical storage units that can use disk space on one disk or span several. There are multiple types of datastores:

  • VMFS
  • NFS
  • vSAN
  • vVOLs

To manage them, you can navigate to the Datastores tab on the navigation pane and select the datastore you want to manage. Then click on Configure on the object pane in the middle.

From this screen, you can increase the capacity, enable SIOC, and edit the Space Reclamation priority. Using Connectivity and Multipathing, you can edit which hosts have access to this datastore. You can also see what files and VMs are on the datastore, and perform basic file functions here as well.

To dig a little deeper, though: how did we get here? How do we see the original device? To do that, we have to go back to the host configuration. There we look at two main things: Storage Adapters and Storage Devices.


This will show us what our host is able to get to. If we don’t have access to something, we may need to either add it (if it’s iSCSI or NFS) or add a Protocol Endpoint (if it’s a vVol). Once we can see the raw device, or we have finished setting up the share or protocol endpoint, we can right-click on a host and select Storage > New Datastore. This pops up a wizard that looks like this:

The next screen allows us to give the datastore a name and choose what device we want to use for it. Then we choose a VMFS version: we would choose 5 if we still had older hosts running older vSphere, or 6 if everything is on 6.5 or 6.7. Why would you want one over the other? Look here for a nice table. You can then partition it if desired and finish.
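
The same wizard maps to a couple of PowerCLI lines if you already know the device’s canonical name. A hedged sketch with hypothetical host, datastore, and device names:

# Find the canonical name of the raw device on the host
Get-ScsiLun -VMHost 'esx01.lab.local' | Select-Object CanonicalName, CapacityGB
# Create a VMFS 6 datastore on that device
New-Datastore -VMHost 'esx01.lab.local' -Name 'Datastore-01' -Path 'naa.xxxxxxxxxxxxxxxx' -Vmfs -FileSystemVersion 6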

Objective 7.3 – Configure a storage policy

  1. To create a storage policy, click on the Menu drop down at the top of your HTML5 client and choose Policies and Profiles

  2. Click on VM Storage Policies

  3. Select Create VM Storage Policy and on the popup wizard, give it a name.

  4. This screen allows you to choose between Host Based Services or Datastore Specific rules. Host-based rules are specific services that a particular host may provide, such as caching or encryption. These can be used in conjunction with datastore-specific rules, which are directed at specific datastores. For example, I can tag a specific datastore as "Gold" storage and create a storage policy that requires a VM to use "Gold" storage. I am going to use the tag-based placement option.

  5. I have already created a Tag category called Storage Type and I am going to tell it to Use storage tagged with the “Gold” tag. I could tell it to not use that tag as well. Multiple Rules can be used at the same time.

  6. I have one Datastore tagged as “Gold” Storage.

  7. That’s it. Click Finish and you have created a storage policy. Just to show you what host-based services might look like, here is a screenshot:
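
The same tag-based policy can be built with the storage policy cmdlets. A hedged PowerCLI sketch that mirrors the wizard steps above (the category, tag, datastore, and policy names are the ones I used and are examples only):

$cat = New-TagCategory -Name 'Storage Type' -Cardinality Single -EntityType Datastore
$tag = New-Tag -Name 'Gold' -Category $cat
New-TagAssignment -Tag $tag -Entity (Get-Datastore 'Datastore-01')
# Build a rule set that requires the Gold tag and wrap it in a policy
$rule    = New-SpbmRule -AnyOfTags $tag
$ruleSet = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name 'Gold Storage' -AnyOfRuleSets $ruleSet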

Objective 7.4 – Configure host security

There are several built-in features that can secure a host. Let’s go over them

  • Lockdown Mode – When enabled, this prevents users from logging in to the host directly. The host will only be accessible through vCenter, or through the local console if you are on the exception user list. You can also turn off the Direct Console UI completely. This can be found under Configure > Security Profile.

  • Host Image Profile Acceptance Level – This is like driver signing on a Microsoft Windows machine. This will only allow bundles or drivers with an acceptance level you set.
  • Host Encryption Mode – This setting encrypts any core dumps from the host.
  • Firewall – There is a stateless firewall included in ESXi. Most ports are blocked by default. If you want to open a port that isn’t already in the rule list, you will need to do it at the command line; toggling the predefined rules can also be scripted, as shown below.
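
As a small example of working with the built-in firewall from PowerCLI (this only toggles an existing, predefined rule; the host name is made up):

# List the firewall exceptions on a host
Get-VMHost 'esx01.lab.local' | Get-VMHostFirewallException | Select-Object Name, Enabled
# Enable the predefined NTP Client rule
Get-VMHost 'esx01.lab.local' | Get-VMHostFirewallException -Name 'NTP Client' | Set-VMHostFirewallException -Enabled:$true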

Objective 7.5 – Configure role-based user management

Role-based management allows you to assign a set of permissions to a user or group. This makes it easier to assign only the permissions a user needs and no more, which is great for security. VMware provides a number of preconfigured roles. These can’t be changed, but you can clone them and change the clones, or create your own custom roles. To do this, click on the Menu and go to Administration.

You can see the predefined roles when you select Roles under Access Control

To clone you select one and then click the Clone icon

You need to name it and click OK on the window that pops up. To edit the clone you just made, select the new role and click on the pencil icon. Then select the privileges you want to allow or disallow by clicking on the checkboxes.

You can see the privileges already assigned to a role by clicking on the Privileges button on the side.

You then assign the roles under the Global Permissions item. You can use one of the built-in users or groups, or you can add a new user/group. You can add the group from any of the identity sources you have already set up.

When you add or edit the permissions you set the role.

There is a special role called No Access as well that you can assign to a user to keep them from accessing specific objects or privileges.
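
Creating and assigning roles can also be scripted, which is handy if you manage several vCenters. A rough PowerCLI sketch; the role name, privilege set, cluster, and principal are invented for illustration:

# Create a custom role with a couple of VM power privileges
$priv = Get-VIPrivilege -Id 'VirtualMachine.Interact.PowerOn', 'VirtualMachine.Interact.PowerOff'
New-VIRole -Name 'VM Power Operator' -Privilege $priv
# Grant that role to a group on a cluster, propagating to children
New-VIPermission -Entity (Get-Cluster 'Cluster01') -Principal 'VSPHERE.LOCAL\ops-team' -Role 'VM Power Operator' -Propagate:$true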

Objective 7.6 – Configure and use vSphere Compute and Storage cluster options

After you create a cluster, you can right-click on it and select Settings, or click on the Configure tab in the center object pane.

Quickly going through the options available: there are DRS and HA, which we’ve already gone over. We then have:

  • QuickStart – is a wizard to help you configure your cluster.
  • General – lets you change the swap file location for your VMs. This will be the default setting for the cluster. Default VM compatibility is the default VM Hardware version for the cluster.
  • Licensing – This is only used if you have vSAN.
  • VMware EVC – This was mentioned previously as well. Enhanced vMotion Compatibility allows you to use disparate versions of processors and vMotion between them.
  • VM/Host Groups – These are the VM groups and host groups you can set up to create affinity or anti-affinity rules.
  • VM/Host Rules – These are the affinity or anti-affinity rules themselves.
  • VM Overrides – This allows you to override cluster DRS/HA restart or response settings for individual VMs.
  • Host Options – Allows for host power management. You enter your IPMI settings per server.
  • Host Profile – This will be gone over in a few objectives, but creates a settings template for all hosts in the cluster.
  • I/O filters – You can install I/O filters here (VAIO) This can be a plugin such as backup or disaster recovery filters.
  • Alarm Definitions – This is where you can add/enable/disable/delete alarms for your cluster (applies to objects in the cluster)
  • Scheduled Tasks – You can schedule certain tasks for off hours. New Virtual Machine, Add Host, or Edit DRS.
  • vSAN – This won’t say much here unless it’s turned on.

A Datastore Cluster or Storage Cluster (unless referring to VSAN cluster) is created by right-clicking on the datacenter in the Storage heading on the object pane.

  1. This launches a wizard to go through. You will need to enter a Datastore Cluster name and you should turn on Storage DRS
  2. You are then presented with more options than anyone should need. The first is what level of automation you would like; the rest I will leave at cluster default. Each one of them will check certain metrics or alarms and move VM storage based on what it sees.
  3. Now you need to decide storage DRS runtime settings. These are thresholds you set before it takes action to move data around. I’m leaving defaults again.
  4. You then select your cluster and / or hosts that will participate in sharing their datastores in this.
  5. Select the datastores that will make up this Datastore cluster
  6. It gives you a final summary screen, and you click Finish.
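
The equivalent in PowerCLI, for reference (names are hypothetical):

# Create the datastore cluster and turn on fully automated Storage DRS
$dsc = New-DatastoreCluster -Name 'DatastoreCluster01' -Location (Get-Datacenter 'Datacenter')
Set-DatastoreCluster -DatastoreCluster $dsc -SdrsAutomationLevel FullyAutomated
# Move existing datastores into it
Get-Datastore 'Datastore-01', 'Datastore-02' | Move-Datastore -Destination $dsc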

Objective 7.7 – Perform different types of migrations

We’ve already gone over the types of migrations possible. Now let’s see how to accomplish them.

  1. To migrate a VM, whether you migrate the VM or storage, you need to right click on the VM and choose Migrate.
  2. You are given the option of 3 types of migration. vMotion = Compute resource only, svMotion = Change storage only, or enhanced or xvMotion is both. The screens after depend on which you choose here. I will choose both so you see both screens.
  3. For the compute resource to migrate to, I need to choose either a cluster, or individual host. A handy little tidbit that’s nice is the upper right-hand corner. VM origin tells you where this VM is sitting right now, both host and datastore.
  4. Select storage next.
  5. Next, select the network for this VM to use.

  6. vSphere gives a summary, click Finish and it will migrate.
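
The same migration types map to Move-VM in PowerCLI: pass only -Destination for a vMotion, only -Datastore for a Storage vMotion, or both for an enhanced/xvMotion. A quick sketch with made-up names:

# Compute + storage migration in one step
Move-VM -VM 'TestVM' -Destination (Get-VMHost 'esx02.lab.local') -Datastore (Get-Datastore 'Datastore-02')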

Objective 7.8 – Manage resources of a vSphere environment

There are several resources that can be managed in a vSphere environment. There are mechanisms built-in to vSphere to allow that. You can create resource pools, assign shares for CPU, memory, disk, and network resources. You can also create reservations and limits. Let’s define a few of those and how they work.

  • Reservations – this is the amount of the resource that is guaranteed. If the reserved resource can’t be provided, the VM will not power on.
  • Limits – the maximum amount of that resource you will allow for that VM. The issue with limits is that even if there are spare resources, vSphere still will not allow the VM to exceed its limit.
  • Shares – used to determine how VMs compete for a resource. Shares only come into play when there is contention; during regular periods, when all the VMs are happy and there are plenty of resources, shares don’t matter.

Resource Pools can also be created to slice off resources. You can have reservations on resource pools as well, but you can do a bit more: you can have expandable reservations that borrow resources from the pool’s parent if needed. This is what you configure when you create a CPU and memory resource pool.
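
If you would rather script it, here is a hedged sketch of creating a resource pool with expandable reservations and dropping a VM into it (the cluster, pool, VM names, and values are examples only):

New-ResourcePool -Location (Get-Cluster 'Cluster01') -Name 'Production' -CpuSharesLevel High -MemSharesLevel High -CpuReservationMhz 4000 -CpuExpandableReservation:$true -MemReservationGB 16 -MemExpandableReservation:$true
# Move a VM into the new pool
Move-VM -VM 'TestVM' -Destination (Get-ResourcePool 'Production')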



You can also assign this on an individual VM basis

To assign disk shares you can look at the individual VM

You can also assign shares and manage network resources on Virtual Distributed Switches with Network I/O Control enabled.

Objective 7.9 – Create and manage VMs using different methods

There are several methods to create VMs. You can:

You can also deploy from an OVF template, use the OVF Tool or create a VM from a physical using the P2V tool. For the purposes of the exam they more than likely just want you to know about the ones in the picture and deploying from an OVF template.

You can manage VMs through the HTML5 client, API, PowerCLI (PowerShell) or even through the ESXi host console. There are even some options you can only do using PowerCLI. Creating a new VM via PowerCLI isn’t hard either, it can be done with command like the following:

New-VM -Name 'TestVM' -VMHost 'VMHost-1' -Datastore 'TestDatastore' -DiskGB 40 -MemoryGB 8 -NumCpu 2 -NetworkName 'Virtual Machine Network'

That creates a new VM with the name TestVM on VMHost-1 storing its 40GB VMDK on the TestDatastore. A lot simpler than going through a long wizard to me.

Objective 7.10 – Create and manage templates

Templates are VMs that have been converted so that they can’t be turned on. They are used as base server machines or VDI base workstations. Creating them is a simple process. You can do this with a running VM by cloning it (creating a copy) and making the copy a Template. If you want to convert the machine you are working on, it will need to be turned off. I will go over both ways to do this.

  1. Right click on the VM to be converted. We will start with a running VM.

  2. Give the VM Template a name

  3. Choose a location for the template

  4. Choose storage for the template

  5. Complete by clicking Finish.

For a machine that is turned off you can clone it as well, but you have the option of turning that VM into a template. To do that:

  1. Right click on the VM you want to change to a template.

  2. If you choose Convert to Template, it asks if you are sure and then converts it. If you choose Export OVF Template, it saves an OVF file to your computer that is the VM in template format, which you can later import like an appliance.
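
Both operations can be scripted as well. A small sketch, assuming a VM named WebServer01 and a Templates folder (both made-up names):

# Clone a running VM into a template
New-Template -VM 'WebServer01' -Name 'WebServer-Template' -Datastore 'Datastore-01' -Location (Get-Folder 'Templates')
# Deploy a new VM from that template later
New-VM -Name 'WebServer02' -Template 'WebServer-Template' -VMHost 'esx01.lab.local' -Datastore 'Datastore-01'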

Objective 7.11 – Manage different VMware vCenter Server objects

I’ve gone over how to manage different types of objects so I will take a stab here and guess that they are referring to the actual vCenter Server objects and not clusters, hosts, etc.

To manage the vCenter Server object, there are a couple of places to go. The first is Administration > System Configuration. This location allows you to export a support bundle, converge an external PSC to embedded, and decommission a PSC. Oh, and you can also reboot it.

The next place you can configure the vCenter is by clicking on the vCenter in the navigation pane and then go to the configure tab in the object pane. You can see that here

This is just changing the settings on the vCenter server itself and not the object.

If anyone has a thought on what they may be looking for here that I didn’t cover, reach out to me.

Objective 7.12 – Setup permissions on datastores, clusters, vCenter, and hosts

Permissions can be set on most objects in the vSphere environment. To do that you need to navigate to the Permissions tab in the object pane. Here is an example

You can see how you can assign permissions to it. Click on the ‘+’ in order to add another user or group to it. You can also edit an existing permission by clicking on the pencil icon. You can also propagate this permission to its children with the Propagate to children checkbox.

If a user has conflicting permissions, explicit permissions win over general ones. This allows you to assign a user "No Access" to an object, and it will win over the rights the user’s groups have on it. The VMware documentation covers this really well (from the VMware documentation here):


If multiple group permissions are defined on the same object and a user belongs to two or more of those groups, two situations are possible:

  • No permission for the user is defined directly on the object. In that case, the user has the privileges that the groups have on that object.
  • A permission for the user is defined directly on the object. In that case, the user’s permission takes precedence over all group permissions.

Objective 7.13 – Identify and interpret affinity/anti affinity rules

Affinity and Anti-Affinity rules exist on a DRS enabled cluster. They are typically used for the following reasons:

  • Affinity Rules – Used for multi-tier app VMs or other VMs that communicate heavily or depend on each other in order to run. It can also be used to keep a VM running on a specific host for licensing or other purposes.
  • Anti-Affinity Rules – Used to keep VMs separate from each other so they run on different hosts.

These rules can be set up as "Must" rules or "Should" rules. Just like they sound, Must rules are strictly enforced: if the machines can’t comply with the rule, they won’t start. Should rules try everything they can to comply, but if, for example, you are down to one host, the machines will still run there as that is their only option.

You create groups that are made up of either VMs or hosts and then create a rule that defines the relationship between them. You set them up underneath the Configure tab under your cluster. Here is what that looks like:

You would create the VM and/or host groups. Then you create the rules that will govern them.
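
The groups and rules can also be created with PowerCLI. A hedged sketch (the VM, host, and group names are invented):

# Simple anti-affinity rule: keep two domain controllers on different hosts
New-DrsRule -Cluster 'Cluster01' -Name 'Separate-DCs' -KeepTogether:$false -VM (Get-VM 'DC01', 'DC02') -Enabled:$true
# VM-to-host "should run on" rule using groups
$vmGroup   = New-DrsClusterGroup -Cluster 'Cluster01' -Name 'App-VMs' -VM (Get-VM 'App01', 'App02')
$hostGroup = New-DrsClusterGroup -Cluster 'Cluster01' -Name 'Licensed-Hosts' -VMHost (Get-VMHost 'esx01*')
New-DrsVMHostRule -Name 'App-on-Licensed' -Cluster (Get-Cluster 'Cluster01') -VMGroup $vmGroup -VMHostGroup $hostGroup -Type ShouldRunOn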

Objective 7.14 – Understand use cases for alarms

Use cases for alarms are plentiful. You don’t want errors and issues happening in the background without you knowing. Even better, it would be great to get notice of these events before they happen. That is what alarms can do for you. They can notify you in response to events or conditions that occur to objects in your vSphere environment. There are default alarms setup for hosts and virtual machines already existing for you. You can also setup alarms for many objects. An alarm requires a trigger. This can be one of two things.

  • Condition or State. This monitors the condition or state of an object. An example of this would be a datastore using 80 percent of its storage, or a host experiencing high CPU usage.
  • Event. This would be something like host hardware changing, or a host leaving a cluster.

You can setup an alarm by right clicking on the object and then click on Alarms > New Alarm Definition.
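
You can also bolt an email action onto an existing alarm definition from PowerCLI, which is a common first use case. A minimal sketch, assuming the default datastore usage alarm and a made-up mailbox (an SMTP server must already be configured in vCenter’s settings):

Get-AlarmDefinition -Name 'Datastore usage on disk' | New-AlarmAction -Email -To 'vmware-admins@example.com'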

Objective 7.15 – Utilize VMware vSphere Update Manager (VUM)

VUM (vSphere Update Manager) is VMware’s server and management utility to patch and upgrade its software. While there were many requirements to get VUM working on previous versions of vSphere, in 6.7 it’s pretty easy. Though it’s not completely simple, it does make more sense once you use it for a little bit. First, we need to define a few terms.

Baseline – one or more patches, extensions, or upgrades that you want to apply to your vSphere infrastructure. Baselines can be dynamic or fixed. Dynamic baselines automatically download and add new patches. I don’t necessarily recommend this, as you don’t know how a patch will affect your environment without testing; if it’s a test environment, go for it! VMware includes two predefined dynamic patch baselines for you, and you can create your own.

Baseline Group – Includes multiple baselines. The pre-defined ones are Non-Critical and Critical Patches. Unless one causes an issue, it would be good to have both of those. I created a group that includes both called Baseline Group 1.

You can create a baseline that includes an upgrade say from 6.5 to 6.7 as well. There are settings that go along with this service and here is what they look like.

  • Administration Settings
    • Patch Downloads concerns itself with getting your updates.
    • Patch Setup concerns itself with where it is getting them from. Do you need a proxy?
    • Recall Notification. Occasionally VMware needs to recall a patch that isn’t up to par. This setting will notify you there is a recall and what it is and make sure it doesn’t apply that patch to any hosts.
    • Network Connectivity. Connectivity for VUM. Mainly port numbers and host name.
  • Remediation Settings
    • Hosts – When you apply baselines to a host: what you want done with the VMs, how to handle hosts that PXE boot, and how many retries to attempt.
    • VMs – If you are remediating VMs do you want to take a snapshot automatically and how long do you want to keep them.

The setup of the server is just the first step though. You now need to get these patches to the hosts and VMs. You have two options when you apply them. You can Stage, or Remediate. Stage will just load the patches on it and wait for you to tell it to take action. Remediate takes immediate action. You can do this by going to the update tab for the object. Here is the update for the cluster.

At the bottom you notice I attached the baseline. This is needed to stage or remediate your hosts and VMs. You can then check them by Checking Compliance. You may also notice you can update VMware Tools and VM Hardware versions en masse. (may require VM reboot)

Objective 7.16 – Configure and manage host profiles

Host profiles provide a mechanism to automate and create a base template for your hosts. Using host profiles, you can make all your hosts exactly the same. VMware will inform you if your host is not in compliance yet and then you can take steps to remediate it.

You access it under Policies and Profiles

There is a process to it. Here it is:

  1. Click on Host Profiles on the navigation pane on the left.

  2. Next is Extract Host Profile. This takes a host you select and uses it as the "baseline."

  3. This will pop up a wizard. This is where you select the host.

  4. Give it a name and a description and then Finish

  5. Once that is done, you now have a window that looks like this

  6. Yes, it’s small. The point is that when you click on the host profile, you now have additional options above. Notice as well that the profile is also a hyperlink. Click on it.


  7. Click on Actions to attach the profile to hosts or clusters.

Conclusion

So that is the end of this study guide. If you find something incorrect in it, or I didn’t understand the Blueprint from VMware, let me know. I appreciate you taking the time to read through and hope you were able to use it. I really appreciate the community and all the things it’s done for me, which is why I love doing things like this. Thanks!!

Mike Wilson (IT-Muscle.com / @IT_Muscle )

VCP 2019 Study Guide Section 5

Section 5 – Performance-tuning and Optimizing a VMware vSphere Solution

Objective 5.1 – Determine effective snapshot use cases

Many companies use the term snapshot, and the definitions vary by company. We should first define what VMware means by snapshots.

VMware preserves a Point in Time, or PIT, for a VM. This process freezes the original virtual disk and creates a new delta disk; all I/O is now routed to the delta disk. If data is needed that still exists on the original disk, the VM has to go back to that disk to retrieve it, so now you are accessing two disks. Over time you can potentially double the space consumed as you make changes and issue new I/O: the original 10 GB disk becomes 20 GB across two disks. If you create more snapshots, you create new delta disks and the chain continues.

Now that we understand a bit more about them, we see the limitations inherent. This tool was never meant to be a backup. It was designed to be used for reverting back to the original (if needed) after small changes. Most backup tools DO use snapshots as part of their process, but it is only used for the amount of time needed to copy the data off and then the snapshot is consolidated back again. Here are a few Best Practices from VMware on how to use them.

  • Don’t use snapshots as backups – major performance degradation can occur and I have seen people lose months of data or more when the chain got too long.
  • 32 snapshots are supported, but it’s better not to test this.
  • Don’t use a snapshot longer than 72 hrs.
  • Ensure if you are using a 3rd Party backup that utilizes the snapshot mechanism, they are getting consolidated and removed after the backup is done. This may need to be checked via CLI
  • Don’t attempt to increase disk size if the machine has a snapshot. You risk corrupting your snapshot and possible data loss.

Most use cases involve you changing the VM or upgrading and once you find out it does or doesn’t work, you remove the snapshot. A good example of this is Microsoft Windows Updates. Create a snapshot, install the updates and test. If the updates haven’t broken anything, consolidate. Another use case might be installation or upgrade of an important program. Or a Dev use case of changing code and executing to determine if it works. The common thread between all the use cases is temporariness. These use cases are for snapshots running a very short period of time.
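
In PowerCLI the short-lived pattern looks something like this (the VM name is made up); the report in the middle is a handy way to catch snapshots that have overstayed the 72-hour guidance:

# Take a snapshot before patching, without capturing memory
New-Snapshot -VM 'AppServer01' -Name "Pre-patch $(Get-Date -Format yyyy-MM-dd)" -Quiesce -Memory:$false
# Find any snapshots older than 3 days anywhere in the environment
Get-VM | Get-Snapshot | Where-Object { $_.Created -lt (Get-Date).AddDays(-3) } | Select-Object VM, Name, Created, SizeGB
# Remove the snapshot once the change checks out
Get-Snapshot -VM 'AppServer01' -Name 'Pre-patch*' | Remove-Snapshot -Confirm:$false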

Storytime. I had a company that called in once that was creating snapshots for their Microsoft Exchange Server. They were taking one every day and using it as a backup. When I was called, they were at about a year of snapshots. Their server wasn’t turning on and trying to remove the snapshots wasn’t working. Consolidation takes time and a bit more space. We tried to consolidate but you can only merge 32 snapshots at a time. They got impatient about 25% through the process and tried to turn it on again. When that didn’t work, they had to restore from tape backup and lost a decent amount of data.

Objective 5.2 – Monitor resources of VCSA in a vSphere environment

Monitoring resources can be done from more than one place. The first place is in the vCenter appliance management page at :5480. After you log into it, you have the option on the navigation pane called Monitor. This is what it looks like:



Notice the subheadings. You can monitor CPU and memory, disks, networking, and the database, and you can change the time period to include metrics up to the last year. Since the VCSA is also a VM, you can view this from inside the vSphere HTML5 client as well. That view allows you to get a bit more granular: you are looking at it from the host’s perspective, whereas the Appliance Management page looks at it from within the VM. Both are important places to give you a full look at how the vCenter is performing. Here is a screenshot from inside the HTML5 client of my vCenter appliance.



You can attach to the vCenter via SSH or console and run TOP for a per process view of the appliance. Here is what that looks like



These are the most common ways you would monitor resources of your VCSA.

Objective 5.3 – Identify impacts of VM configurations

There is much to unpack with this objective. I will work through best practices and try to stay brief.

  • While you want to allocate the resources that your VM needs to perform, you don’t want to over-allocate as this can actually perform worse. Make sure there are still enough resources for ESXi itself as well.
  • Unused or unnecessary hardware on VM can affect performance of both the host and all VMs on it.
  • As mentioned above, over allocation of vCPU and memory resources will not necessarily increase performance and it might lower it.
  • For most workloads, hyperthreading will increase performance. Hyperthreading is like a person trying to eat food. You have one mouth to consume the food, but if you are only using one arm to put the food in, it isn’t as fast as it could be. If you use both arms (enabling hyperthreading) you still only have one mouth (one core) but you aren’t waiting for more food and just keep constantly chewing. Certain workloads that keep CPU utilization high, benefit less from hyperthreading.
  • Be aware of NUMA (Non-Uniform Memory Access). Memory is “owned” by sockets. If you use more memory than that socket owns, you need to use memory from the other socket (if available). This causes a small delay because it has to move across the bus vs right next to the processor. This can add up. (Oversimplified but the idea is there). There are policies that can be set that could help if needed. Not in the scope of this certification though.
  • Not having enough physical memory can cause VMs to use slower methods of memory reclamation all the way to disk caching. This causes performance degradation.
  • Creating shares and limits on your machine may not have the result you believe. Weigh those options carefully before you apply them.
  • Make sure you use VM Tools in your VMs as they add a number of useful and performance increasing solutions.
  • The hardware you use in the configuration can also change performance. For example, using PVSCSI vs LSI SAS or using VMXNET vs E1000 NICs can make a decent performance jump.
  • Make sure you use VMware snapshots how they were intended and not for long periods of time.
  • There are different types of VMDKs you can create: thin provisioned, thick (lazy zeroed), and thick (eager zeroed), and there are reasons you might utilize each. Thin disks are best in a scenario where you may not have all the space yet; you may need to buy more disks, or they may already be on their way, but eventually you will have the space. It is important that you monitor your space to make sure you don’t consume it before you have it. If you do, the VM will be suspended in the best case; in the worst case you can lose data. Thick (lazy zeroed) fences all that space off for the disk up front. You can’t over-provision this; you have to already have the disk space. The "lazy zero" comes into play when you go to use the space: VMware needs to format each disk block before using it, which can potentially be a slowdown if there are a high number of writes to the disk. If the VM is more read-heavy, you are just fine. Thick (eager zeroed) takes more time to create because it formats the whole disk up front before use. This type is best for a VM with heavy writes and reads, such as a DB server.

Keep these in mind when creating VMs and also take a look at the VMware Performance Best Practices guide here.
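
To tie the disk-type discussion to something concrete, the provisioning format is just a parameter when you add a disk in PowerCLI (the VM names are examples):

# Thin provisioned disk for a space-constrained, read-mostly VM
New-HardDisk -VM 'TestVM' -CapacityGB 100 -StorageFormat Thin
# Eager-zeroed thick disk for a write-heavy database server
New-HardDisk -VM 'DBServer01' -CapacityGB 500 -StorageFormat EagerZeroedThick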

VCP 2019 Study Guide – Section 2

Section 2 – VMware Products and Solutions

Objective 2.1 – Describe vSphere integration with other VMware products

VMware has just a few products on the market (/sarcasm), and they show no letup in acquiring other companies and expanding to new technologies. One thing I appreciate about them is their ability to take what they buy, make it uniquely theirs, and integrate it with their current solutions. While this is not always done quickly, and it may take a few versions, it usually pays dividends. Other products such as their software-defined networking products, NSX-V and NSX-T, and vSAN (software-defined storage), and more, round out their offerings, making it a complete solution for their customers. While definitely not altruistic, having a single place to get a complete solution can make life easier. Let’s look at some of the VMware products that are commonly used with the vSphere core products.

If you look at products grouped together on VMware’s download site, you’ll see the core vSphere products of ESXi and vCenter. You also see Log Insight, NSX, Operations, and Orchestrator. I will try to give you a high-level of each of those products and how they fit into the vSphere world.

vRealize Log Insight

vRealize Log Insight is a syslog server on steroids. It is described as a Log Management and Analytics Tool by VMware. It integrates with vCenter Server and vRealize Operations. Log Insight can be used as a regular syslog server for other solutions not in VMware. Using it as a single logging repository and being able to search across your entire company’s infrastructure is its true superpower. But wait… there’s more.

You can also load content packs to manage specific solutions. One example of this is a specially created Rubrik content pack I am using that allows me to create specific dashboards to monitor my backups. Log Insight has the ability to have multiple users and assign them separate permissions to create their own dashboards and metrics. You can see my walkthrough on Log Insight (albeit 4.3 instead of 4.6) here. I also have a few videos to show you how you might customize dashboards here and how you can track an error in the logs here.

VMware NSX

What VMware did for Server hardware they did with Networking as well. While ESXi and vCenter Server already have VSS and VDS, this is the next step in networking evolution. Using NSX you can implement normally difficult configurations such as micro-segmentation in your datacenter with ease. Being able to do this all from a single UI makes it easy and saves time. Once the initial configuration of the physical networking is done, everything thereafter can be accomplished in VMware’s HTML5 client. Creating switches, routers, load balancers, firewalling, you name it.

Because of NSX’s technology, ESXi essentially believes it is on one large L2 network, allowing you to do things that were impossible before, such as vMotion over large geographic distances. NSX brings a lot to the table. There is a lot to learn about it, however, and it has its own certification track.

vRealize Operations

vRealize Operations is a tool used to facilitate performance optimization, capacity management, forecasting, remediation, and compliance. It integrates right into the HTML5 client and keeps you constantly aware of how your environment is performing. Not only does vRealize Operations integrate with ESXi and vCenter, it also integrates with NSX and Log Insight. Here is a pic of what it looks like in the HTML5 client

I also have a few videos on how to perform actions in vSphere Operations here. While this is an old version it serves well to show you some of the things you can use vRealize Operations for.

You have a large number of dashboards to choose from and monitor. You can see things like disk usage and capacity graphically, making it easy to pick out potential problems at a quick glance. While writing this paper, vRealize notified me that I had been running my Plex server on a snapshot for a long period of time… I didn’t have any idea until it told me. (The snapshot was created by an Update Manager upgrade.) Short story: you need this in your life.

vRealize Orchestrator

Most people know about the app IFTTT for your phone. This is kind of like that but way more powerful. Using vRealize Orchestrator you can create workflows that can perform a plethora of different tasks. It also integrates with vRealize Automation to create even more complex jobs. Using vRealize Orchestrator, you can:

  • Configure software or virtual hardware
  • Update databases
  • Generate work order tickets
  • Initiate system backups

And much more. This integrates with all of VMware’s other products and is a drag-and-drop workflow solution.

Objective 2.2 – Describe HA solutions for vSphere

We already went over this, but we’ll touch on it again. The main High Availability solutions VMware provides are vMotion, svMotion and HA using clusters. I will include both HA parts so that you can read about HA in one fell swoop.

High Availability

HA works by pooling hosts and VMs into a single resource group. Hosts are monitored and in the event of a failure, VMs are re-started on another host. When you create a HA cluster, an election is held and one of the hosts is elected master. All others are slaves. The master host has the job of keeping track of all the VMs that are protected and communication with the vCenter Server. It also needs to determine when a host fails and distinguish that from when a host no longer has network access. HA has other important jobs. One is determining priority and order that VMs will be restarted when an event occurs. HA also has VM and Application Monitoring. Using this prompts HA to restart a VM if it doesn’t detect a heartbeat received from VM Tools. Application Monitoring will do the same with heartbeats from an application. VM Component Monitoring or VMCP allows vSphere to detect datastore accessibility and restart the VM if a datastore is unavailable. One last thing to note. In the past, VMware tried to trick people by using the old name for HA which was FDM or Fault Domain Manager

There are several configuration options to go through. Most defaults work without drama and don’t need to be changed unless you have a specific use case. They are:

  • Proactive HA – This feature receives messages from a provider like Dell’s Open Manage Integration plugin. Based on those messages HA will migrate VMs to a different host due to possible impending doom of the original host. It makes recommendations in Manual mode or automatically moves them in Automatic mode. After VMs are off the host, you can choose how to remediate the sick host. You can place it in maintenance mode, which prevents running any future workloads on it. Or you could put it in Quarantine mode which allows it to run some workloads if performance is low. Or a mix of those with…. Mixed Mode.
  • Failure Conditions and responses – This is a list of possible host failure scenarios and how you want vSphere to respond to them. This is better and gives you way more control then in the past.
  • Admission Control – What good is a feature to restart VMs if you don’t have enough resources to do so? Not very. Admission Control is the gatekeeper that makes sure you have enough resources to restart your VMs in the case of host failure. You can ensure this a couple of ways. Dedicated failover hosts, cluster resource percentage, slot policy, or you can disable it (not good unless you have a specific reason). Dedicated hosts are dedicated hot spares. They do no work or run VMs unless there is a host failure. This is the most expensive (other than a failure itself). Slot policy takes the largest VM’s CPU and the largest VM’s memory (can be two different VMs) and makes that into a “slot” then it determines how many slots your cluster can satisfy. Then it looks at how many hosts can fail and still keep all VMs powered on based off that base slot size. Cluster Resources Percentage looks at total resources needed and total available and tries to keep enough resources to permit you to lose the number of hosts you specify (subtracting amount of resources of those hosts). You can also override this and set aside a specific percentage. For any of these policies, if the cluster can’t satisfy resources for more than existing VMs in the case of a failure, it prevents new VMs from turning on.
  • Heartbeat Datastores – Used to monitor hosts and VMs when the HA network has failed. It determines if the host or a VM is still running by looking for lock files. This automatically uses at least 2 datastores that all the hosts are connected to. You can specify more or specific datastores to use.
  • Advanced Options – You can use this to set advanced options for the HA cluster. One might be setting a second gateway to determine host isolation. To use this you will need to set two options: 1) das.usedefaultisolationaddress and 2) das.isolationaddress[…]. The first tells HA not to use the default gateway, and the second sets additional addresses (see the sketch below).
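
A hedged example of setting those two advanced options from PowerCLI; I believe the ClusterHA setting type is accepted here, and the cluster name and isolation address are made up:

$cluster = Get-Cluster 'Cluster01'
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.usedefaultisolationaddress' -Value $false
New-AdvancedSetting -Entity $cluster -Type ClusterHA -Name 'das.isolationaddress0' -Value '192.168.10.1'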

There are a few other solutions that touch more on Fault Tolerance and Disaster Recovery.

Fault Tolerance, or FT, creates a second live shadow copy of a VM. In the event the primary goes down, the secondary kicks in, and a new shadow VM is then created.

Disaster Recovery options include vSphere Replication and Site Recovery Manager. Both of these can be used in conjunction to replicate a site or individual VMs to another site in case of failure or disaster.

Objective 2.3 – Describe the options for securing a vSphere environment

There are a number of options available to secure your vSphere environment. We will start with ESXi and move on to a few others.

ESXi Security

  • Limit access to ESXi – this goes for both the physical box but also any other way of accessing it. SSH, DCUI, or remote console via IPMI or iDRAC/iLO etc. You can also take advantage of lockdown modes to limit access to just vCenter.
  • Use named users and least privilege – If everyone is root, then no one is special. Only give access to users that need it, and even then only give them the access and rights they need to do their job. Make sure they all log in as the user you gave them; this allows for tracking and accounting.
  • Minimize open ports – your ESXi host has a stateless firewall but if all the ports are open, it’s not providing any protection for you.
  • Smart Card authentication – ESXi now supports smart cards for logging on instead of user name and passwords.
  • Account lockouts – After a number of incorrect tries to log in, have the account lock.
  • Manage ESXi certificates – While there is a Certificate Authority in vCenter, you might want to look into using third-party or enterprise CA certificates.
  • VIB Integrity – try to use and only allow your ESXi hosts to accept VMware accepted or VMware Certified VIBs.

vCenter Server Security

  • Harden all vCenter host machines – make sure all security patches and the host machines are up to date.
  • Assign roles to users or groups – This allows you to better keep track of what users are allowed to do if they are part of a role.
  • Setup NTP – time stamps will be accurate and allow you to better track what is going on in your environment.
  • Configure Single Sign On – Keep track of the identity sources you allow to authenticate to your vSphere environment.
  • vCenter Certificates – remove expired or revoked certificates and failed installations.

VM Security

  • Protect the guest operating system – Keep your OS up to date with patches and any anti-malware or anti-spyware. Most OSs also have a firewall built-in. Use that to keep only necessary ports open.
  • Disable unnecessary functionality – Turn off and disable any services not needed. Turn off features like HGFS (host-guest filesystem) and the settings that allow you to copy and paste between the VM and the remote console (see the sketch after this list).
  • Use templates and scripted installations – After you spend all the time making an OS secure, use that as a template so that you don’t have to perform the same on the next machine. This also makes sure you don’t forget settings or configurations that may end up being disastrous. Script management of machines and installations for the same reason.
  • Minimize use of the virtual machine console – Just as you would secure access to a physical machine, you should secure access to the VM console and use it sparingly.
  • Use UEFI secure boot when possible – If the OS supports it, you can use this to ensure only signed code is loaded when the VM boots.
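Tying the “disable unnecessary functionality” item to something concrete, the sketch below uses pyVmomi to push the well-known isolation.tools copy/paste settings into a VM’s advanced configuration. The VM name and connection details are placeholders, and recent vSphere versions already disable console copy/paste by default, so treat this purely as an example of setting VM advanced options.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "web01")

# Explicitly disable console copy/paste via the VM's advanced settings (extraConfig).
spec = vim.vm.ConfigSpec(extraConfig=[
    vim.option.OptionValue(key="isolation.tools.copy.disable", value="TRUE"),
    vim.option.OptionValue(key="isolation.tools.paste.disable", value="TRUE"),
])
vm.ReconfigVM_Task(spec)
```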

Network Security

  • Isolate network traffic – Separation of network traffic into segments allows you to isolate important networks. A prime example of this is creating a management network that is separate from regular VM traffic. You can perform this easily using VMware NSX or even as simple as creating a separate subnet and locking that down virtually or physically to ports.
  • Use firewalls – Using NSX, it becomes really simple to create firewalls and micro-segmentation. As mentioned above, you can also utilize firewalls in the OS, but that can get unwieldy with thousands of VMs. Physical firewalls are a staple as well.
  • Consider Network Policies – The virtual switches in your environment have security policies you can set to prevent malicious attacks. These govern promiscuous mode, MAC address changes, and forged transmits (see the sketch after this list).
  • Secure VM networking – same as above with securing OSs and firewalling.
  • VLANs – These can be used to segment your network and provide additional security. This also breaks up your broadcast domain which can cut down on unwanted broadcast traffic.
  • Secure connection to your storage – Companies usually set up separate networks for their storage. This is for security but also performance. You can also implement authentication on your storage array, such as CHAP. Fibre Channel is particularly secure as it is difficult to tap a fibre cable.
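Here is the sketch referenced in the Network Policies item: a pyVmomi example that sets every standard-switch port group on a host to reject promiscuous mode, MAC address changes, and forged transmits. The host name and credentials are made up.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.lab.local")
netsys = host.configManager.networkSystem

# Reject promiscuous mode, MAC address changes, and forged transmits
# on every standard-switch port group on this host.
for pg in netsys.networkInfo.portgroup:
    spec = pg.spec
    if spec.policy is None:
        spec.policy = vim.host.NetworkPolicy()
    spec.policy.security = vim.host.NetworkPolicy.SecurityPolicy(
        allowPromiscuous=False, macChanges=False, forgedTransmits=False)
    netsys.UpdatePortGroup(pgName=pg.spec.name, portgrp=spec)
```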

VCP 2019 Study Guide – Section 1

It’s been a while since I’ve done one of these. I did one for the VCP 6.0 and kind of miss it. I’ve decided to take a little different approach this time. I’m going to actually write it completely up as a single document and then slowly leak it out on my blog but also have the full guide available for people to use if they want. I’m not sure the usable life of this since there is a looming version on the horizon for VMware, but it will be a bit before they update the cert.

I’m also changing which certification I’m writing for. I originally did one for the delta; this time it will be the full exam. There shouldn’t be an issue using this for the delta, however. The certification, 2V0-21.19, is for vSphere version 6.7 and is a 70-question exam. You are required to pass with a score of no less than 300, and you are given 115 minutes to take it. This gives you a little over a minute and a half per question. Study well, and if you don’t know something, don’t agonize over it. Mark it and come back. It is very possible a later question will jog your memory or give you hints to the answer.

You will need to venture outside and interact with real people to take this test. No sitting at home in your pjs, unfortunately. You will need to register for the test on Pearson Vue’s Website here.

Standard disclaimer: I am sure I don’t cover 100% of the topics needed on the exam, as much as I might try. Make sure you use other guides and do your own research to help out. In other words, you can’t hold me liable if you fail.

Section 1 – VMware vSphere Architectures and Technologies

Objective 1.1 – Identify the pre-requisites and components for vSphere implementation

The first part starts with installation requirements. There are two core components that make up vSphere: ESXi and vCenter. There are several requirements for ESXi and for vCenter Server. I’ll cover them here one component at a time to better understand them.

vSphere ESXi Server

The ESXi Server is the server that does most of the work. This server is where your virtual machines (VMs) run, and it provides the resources needed for all of them. The documentation also talks about virtual appliances. Virtual appliances are nothing more than preconfigured VMs, usually running some variant of Linux.

There is an order to installation of vSphere, and the ESXi server is installed first. There are a number of requirements for installation. Some of them I will generalize, as otherwise this would be a Study Textbook and not a guide.

  • Supported server platform. The best way to determine if your server is supported is to check against the VMware Compatibility Guide here.
  • At least two CPU cores. This shouldn’t be that big of an issue these days when you have companies such as AMD having mainstream 16-core processors and 64-core Server processors.
  • 64-bit processor released after 2006.
  • The NX/XD bit to be enabled in the BIOS. This is also known as the No-Execute bit (or eXecute Disable) and allows you to segregate areas of memory for use with code or data. Enabling this protects against certain forms of malware exploits.
  • Minimum of 4 GB of RAM. You hopefully will have at least 6–8 GB in order to give adequate space for VMs to run.
  • Support for Intel VT-x or AMD RVI. This isn’t an issue for most current processors. Only extremely inexpensive or old processors would not have this option in the BIOS.
  • 1+ Gigabit or faster Ethernet controllers. Same as above, make sure it is a supported model.
  • SCSI disk or RAID LUN. These are seen as local drives. This allows you to use them as “scratch” partitions. A scratch partition is a disk partition used by VMware to host logs, updates, or other temporary files.
  • SATA drives. You can use a SATA drive but by default these are considered “remote” not local. This prevents them from being used for that scratch partition.

You can use UEFI BIOS mode with vSphere 6.7+ or just regular BIOS mode. Once you have installed ESXi, you should not change the mode from one to the other in the BIOS, or you may need to re-install (it won’t boot). If you do, the error message you might encounter is “Not a VMware boot bank.”

VMware requires a minimum boot device with 1 GB of storage. When booting from a local disk, 5.2 GB is needed to allow creation for the scratch disk and the VMFS (VMware File System) volume. If you don’t have enough space, or you aren’t using a local drive, the scratch partition will be placed in a RAMDISK or all in RAM. This is not persistent through reboots of the physical machine, and will give you a message (nagging you) until you do provide a location for it. It actually is a good thing to have though, as any dump files (code from ESXi describing what went wrong when a crash occurs) are stored there.
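If you want to set the scratch location yourself, here is a hedged pyVmomi sketch that points the ScratchConfig.ConfiguredScratchLocation advanced setting at a persistent datastore folder. The connection details, host name, and path are placeholders, the folder must already exist, and the change takes effect after a reboot.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.lab.local")

# Point the scratch partition at a persistent datastore folder
# (placeholder path - create the folder first). Takes effect after a reboot.
host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="ScratchConfig.ConfiguredScratchLocation",
                           value="/vmfs/volumes/datastore1/.locker-esx01")
])
```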

You can Auto Deploy a host as well – this is when you have no local disks at all and are using shared storage to install and run the ESXi software. If you do use this method, you don’t need to have a separate LUN or shared disk set aside for each host; you can share a single LUN across multiple hosts.

Actual installation of the ESXi software is straightforward. You can perform an interactive, scripted, or Auto Deploy installation. The latter requires a bit of preparation and a number of other components. You will need to have a TFTP server set up and make changes to your DHCP server to allow this to happen. There is more that goes into Auto Deploy, but I won’t cover that here as the cert exam shouldn’t go too far in depth. For an interactive installation, you can create a customized ISO if you require specific drivers that aren’t included on the standard VMware CD.

vSphere vCenter Server

The vCenter Server component of vSphere allows you to manage and aggregate your server hardware and resources. vCenter is where a lot of the magic lies. Using vCenter Server you can migrate running VMs between hosts and so much more. VMware makes available the vCenter Server Appliance, or VCSA. This is a preconfigured Linux-based VM that is deployed into your environment. There are two main groups of services that run on the appliance: vCenter Server and the Platform Services Controller. You run both of those together in what is known as an “embedded” installation, or you can separate the Platform Services Controller (PSC) for larger environments. While you can install vCenter on Windows as well, VMware will no longer support that model as of the next major release of vSphere.

There are a few software components that make up the vCenter Server Appliance. They include:

  • Project Photon OS 1.0 – This is the Linux variant used for the operating system.
  • Platform Services Controller group of infrastructure services
  • vCenter Server group of services
  • PostgreSQL – This is the database software used.
  • VMware vSphere Update Manager Extension or VUM. This is one way you can keep your vSphere software up to date.

While past versions of vCenter Server Appliance were a bit less powerful, since 6.0 they have been considerably more robust. This one is no exception, with it scaling to 2,000 hosts and 35,000 VMs.

If you do decide to separate the services it is good to know what services are included with which component. They are:

  • vCenter Platform Services Controller or PSC – contains Single Sign On, Licensing, Lookup service, and the Certificate Authority.
  • vCenter Server – contains vCenter Server, vSphere client, vSphere Web Client, Auto Deploy, and the Dump Collector. It also contains the Syslog Collector and Update Manager.

If you go with a distributed model, you need to install the PSC first, since that machine houses the authentication services. If there is more than one PSC, you need to set them up one at a time before you create the vCenter Server(s). Multiple vCenter Servers can be set up at the same time.

The installation process consists of two parts for the VCSA when using the GUI installer, and one for using CLI. For the GUI installation, the first stage deploys the actual appliance. The second guides you through the configuration and starts up its services.

If using CLI to deploy, you run a command against a JSON file that has all the values needed to configure the vCenter Server. The CLI installer grabs values inside the JSON file and generates a CLI command that utilizes the VMware OVF Tool. The OVF Tool is what actually installs the appliance and sets the configuration.

Hardware Requirements vary depending on the deployment configuration. Here are a few tables to help guide you:

Embedded vCenter with PSC

Environment vCPUs Memory
Tiny (up to 10 hosts or 100 VMs) 2 10 GB
Small (up to 100 hosts or 1,000 VMs) 4 16 GB
Medium (up to 400 hosts or 4,000 VMs) 8 24 GB
Large (up to 1,000 hosts or 10,000 VMs) 16 32 GB
X-Large (up to 2,000 hosts or 35,000 VMs) 24 48 GB

If you are deploying an external PSC appliance you need 2 vCPUs and 4 GB RAM and 60 GB storage for each.

Environment Default Storage Size Large Storage Size X-Large Storage Size
Tiny (up to 10 hosts or 100 VMs) 250 GB 775 GB 1650 GB
Small (up to 100 hosts or 1,000 VMs) 290 GB 820 GB 1700 GB
Medium (up to 400 hosts or 4,000 VMs) 425 GB 925 GB 1805 GB
Large (up to 1,000 hosts or 10,000 VMs) 640 GB 990 GB 1870 GB
X-Large (up to 2,000 hosts or 35,000 VMs) 980 GB 1030 GB 1910 GB

Both the vCenter Server and PSC appliance must be installed on a minimum ESXi 6.0 host or later.

Make sure that DNS is working and the name you choose for your vCenter Server Appliance is resolvable before you start installation.

Installation happens from a client machine and has certain requirements. If using Windows, you can use Windows 7–10 or Server 2012–2016 (x64). Linux users can use SUSE 12 and Ubuntu 14.04. On macOS, 10.9–10.11 and Sierra are supported.

Installation on Microsoft Windows

This may be covered on the test, but I can’t imagine too many questions since it is being deprecated. That being said, vCPUs and Memory are the same as the appliance. Storage sizes are different. They are:

Default Folder Embedded vCenter PSC
Program Files 6 GB 6 GB 1 GB
ProgramData 8 GB 8 GB 2 GB
System folder (to cache the MSI installer) 3 GB 3 GB 1 GB

As far as operating systems go, it requires a minimum of Microsoft Windows 2008 SP2 x64. For databases, you can use the built-in PostgreSQL for up to 20 hosts and 200 VMs. Otherwise you will need Oracle or Microsoft SQL Server.

Objective 1.2 – Identify vCenter high availability (HA) requirements

vCenter High Availability is a mechanism that protects your vCenter Server against host and hardware failures. It also helps reduce downtime associated with patching your vCenter Server. This is from the Availability guide. Honestly, I’m not sure about the last one: if you are upgrading an embedded installation, your vCenter might be unavailable for a bit, but not very long (unless there is a failure), and if distributed, you have other PSCs and vCenter Servers to take up the load. So I’m not sure that scenario really works for me. Perhaps someone might enlighten me later and I’m just not thinking it all the way through. Either way…

vCenter Server High Availability uses 3 VCSA nodes: two full VCSA nodes and a witness node. One VCSA node is active and one passive. They are connected by a vCenter HA network that is created when you set this up. This network is used to replicate data and to provide connectivity to the witness node. Requirements are:

  • ESXi 5.5 or later is required. Three hosts are strongly recommended, so each appliance can sit on a different physical host. Using DRS is also recommended.
  • If using a management vCenter (for the management cluster), vCenter Server 5.5+ is required
  • vCenter Server Appliance 6.5+ is required. Your Deployment size should be “Small” at a minimum. You can use VMFS, NFS, or vSAN datastores.
  • Latency on the network used for the HA network must be less than 10 ms. It should be on a separate subnet than the regular Management Network.
  • A single vCenter Server Standard license is required.

Objective 1.3 – Describe storage types for vSphere

vSphere supports multiple types of storage. I will go over the main types: local and networked storage.

Local Storage

Local storage is storage connected directly to the server. This can include a Direct Attached Storage (DAS) enclosure that is connected to an external SAS card, or storage in the server itself. ESXi supports SCSI, IDE, SATA, USB, SAS, flash, and NVMe devices. You cannot use IDE/ATA or USB to store virtual machines; any of the other types can host VMs. The problem with local storage is that the server is a single point of failure, or SPOF. If the server fails, no other server can access the VM. There is a special configuration that allows sharing local storage, however, and that is vSAN. vSAN requires flash drives for cache and either flash or regular spinning disks for capacity drives. These are aggregated across servers and collected into a single datastore or drive. VMs are duplicated across servers, so if one goes down, access is still retained and the VM can still be started and accessed.

Network Storage

Network storage consists of dedicated enclosures that have controllers running a specialized OS. There are several types, but they share some things in common. They use a high-speed network to share the storage, and they allow multiple hosts to read and write to the storage concurrently. You connect to a single LUN through only one protocol, but you can use multiple protocols on a host for different LUNs.

Fibre Channel or FC is a specialized type of network storage. FC uses specific adapters that allow your server to access it, known as Fibre Channel Host Bus Adapters or HBAs. Fibre Channel typically uses glass fibre cables to transport the signal, but occasionally copper is used. Another type of Fibre Channel can connect using a regular LAN; it is known as Fibre Channel over Ethernet or FCoE.

iSCSI is another storage type supported by vSphere. This uses regular Ethernet to transport data. Several types of adapters are available to communicate with the storage device. You can use a hardware iSCSI adapter or a software one. If you use a hardware adapter, the server offloads the SCSI and possibly the network processing. There are dependent and independent hardware adapters: the first still needs to use the ESXi host’s networking, while independent hardware adapters can offload both the iSCSI and the network processing. A software iSCSI adapter uses a standard Ethernet adapter, and all the processing takes place in the CPU of the host.

VMware supports a new type of adapter known as iSER, or iSCSI Extensions for RDMA. This allows ESXi to use the RDMA protocol instead of TCP/IP to transport iSCSI commands and is much faster.

Finally, vSphere also supports the NFS 3 and 4.1 protocol for file-based storage. Unlike the rest of the storage mentioned above, this is presented as a share to the host instead of block-level raw disks. Here is a small table on networked storage for easier perusal.

Technology Protocol Transfer Interface
Fibre Channel FC/SCSI Block access FC HBA
Fibre Channel over Ethernet (FCoE) FCoE / SCSI Block access
  • Converged Network Adapter
  • NIC with FCoE support
iSCSI iSCSI Block access
  • iSCSI adapter (dependent or independent)
  • NIC (Software adapter)
NAS IP / NFS File level Network adapter

Objective 1.4 – Differentiate between NIOC and SIOC

NIOC = Network I/O Control
SIOC = Storage I/O Control

Network I/O Control allows you to determine and shape bandwidth for your vSphere networks. It works in conjunction with Network Resource Pools to allow you to allocate bandwidth to specific types of traffic. You enable NIOC on a vSphere Distributed Switch and then set shares according to need in the VDS configuration. This feature requires Enterprise Plus licensing or higher. Here is what it looks like in the UI.
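If you would rather enable it from the API than the UI, here is a minimal pyVmomi sketch; the vCenter details and switch name are made up.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "Prod-VDS")

# Turn on Network I/O Control for the distributed switch; the traffic shares
# are then adjusted in the switch's resource allocation configuration.
dvs.EnableNetworkResourceManagement(enable=True)
```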

Storage I/O Control allows cluster-wide storage I/O prioritization. You can control the amount of storage I/O allocated to virtual machines so that important virtual machines get preference over less important ones. This is accomplished by enabling SIOC on the datastore and setting shares and an upper IOPS limit per VM. SIOC is enabled by default on SDRS clusters. Here is what the screen looks like to enable it.
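And the SIOC equivalent, again as a hedged pyVmomi sketch with made-up names; I believe the spec type and task method shown here are the right ones, but double-check them against the API reference for your vSphere version.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "SAN-Datastore-01")

# Enable Storage I/O Control on the datastore; per-VM shares and IOPS limits
# are then set on each VM's virtual disks.
spec = vim.StorageResourceManager.IORMConfigSpec(enabled=True)
content.storageResourceManager.ConfigureDatastoreIORM_Task(datastore=ds, spec=spec)
```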

Objective 1.5 – Manage vCenter inventory efficiently

There are several tools you can use to manage your inventory more easily. vSphere allows you to use multiple types of folders to hold your vCenter inventory. Folders can also be used to assign permissions and set alarms on objects. You can put multiple types of objects inside a folder, but only one type per folder. For example, if you had VMs inside a folder, you wouldn’t be able to add a host to it.
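Folders are also easy to script. Below is a minimal pyVmomi sketch (the connection details, folder name, and VM name are made-up lab values) that creates a VM folder in the first datacenter and moves an existing VM into it.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Create a VM folder under the first datacenter and move an existing VM into it.
dc = next(e for e in content.rootFolder.childEntity if isinstance(e, vim.Datacenter))
folder = dc.vmFolder.CreateFolder(name="Web-Servers")

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "web01")
folder.MoveIntoFolder_Task(list=[vm])
```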

vApps are another way to manage objects. They can be used to manage other attributes as well; you can assign resources and even a startup order with vApps.

You can use Tags and Categories to better organize and make your inventory searchable. You create them from the main menu, under the menu item called Tags and Custom Attributes.


You can create Categories such as “Operating Systems” and then Tags such as “Windows 2012” and others. This makes your VMs easier to manage and search. You can then see the tags on the summary page of the VM, as shown here.



Tags can be used for rules on VMs too. You can see this (although a bit branded) by reading a blog post I wrote for Rubrik here.

Objective 1.6 – Describe and differentiate among vSphere HA, DRS, and SDRS functionality

HA is a feature designed for VM resilience. The other two, DRS and SDRS are for managing resources. HA stands for High Availability. HA works by pooling all the hosts and VMs into a cluster. Hosts are monitored and in the event of a failure, VMs are re-started on another host.

DRS stands for Distributed Resource Scheduling. This is also a feature used on a host cluster. DRS is a vSphere feature that will relocate VMs and make recommendations on host placement based on current load.

Finally, SDRS is Distributed Resource Scheduling for Storage. This is enabled on a Datastore cluster and just like DRS will relocate the virtual disks of a VM or make recommendations based on usage and I/O Load.

You can adjust whether or not DRS/SDRS takes any actions or just makes recommendations.

Objective 1.7 – Describe and identify resource pools and use cases

The official description of a resource pool is a logical abstraction for flexible management of resources. My unofficial description is a construct inside vSphere that allows you to partition and control resources to specific VMs. Resource pools partition memory and CPU resources.

You start with the root resource pool. This is the pool of resources that exists at the host level. You don’t see it, but it’s there. You create a resource pool under that, which cordons off resources. It’s also possible to nest resource pools. For example, if you had a company and inside that company you had departments, you could partition resources into the company and then the departments. This works as a hierarchy. When you create a child resource pool from a parent, you are further diminishing your resources unless you allow it to draw more from further up the hierarchy.

Why use resource pools? You can delegate control of resources to other people. There is isolation between pools, so the resources of one don’t affect another. You can use resource pools to delegate permissions and access to VMs. Resource pools are abstracted from the hosts’ resources, so you can add and remove hosts without having to make changes to resource allocations.

You can identify resources pools by their icon.


When you create a resource pool, you have a number of options you will need to make decisions on.

Shares – Shares can be any arbitrary number you make up. All the shares from all the resource pools add up to a total, and that total belongs to the root pool. For example, if you have two pools that each have 8,000 shares, there are 16,000 shares in total and each resource pool makes up half, or 8,000/16,000. There are default options available as well, in the form of Low, Normal, and High. Those equal 1,000, 2,000, and 4,000 shares respectively.

Reservations – This is a guaranteed allocation of CPU or memory resources you are giving to that pool. The default is 0. Reserved resources are held by that pool regardless of whether there are VMs inside it or not.

Expandable Reservation is a check box that allows the pool to “borrow” resources from its parent resource pool. If this is the parent pool, then it will borrow from the root pool.

Limits – specify the upper limit of what a resource pool can grab from either CPU or memory resources. When I taught VMware’s courses, the guidance was that unless there is a definite reason or need for it, you shouldn’t use limits. While shares only come into play when there is contention (VMs fighting for resources), limits create a hard stop for the VM even if free resources are plentiful. Usually there is no reason to limit how many resources a VM can use if there is no contention.
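To make the shares, reservation, and limit options concrete, here is a hedged pyVmomi sketch that creates a resource pool under a cluster’s root pool. All names and values are made-up lab examples; CPU reservations are in MHz, memory reservations are in MB, and a limit of -1 means unlimited.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")

def allocation(shares, reservation):
    # Custom shares, a guaranteed reservation, expandable reservation, no limit.
    return vim.ResourceAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=shares),
        reservation=reservation, expandableReservation=True, limit=-1)

spec = vim.ResourceConfigSpec(cpuAllocation=allocation(8000, 2000),    # 2 GHz reserved
                              memoryAllocation=allocation(8000, 4096)) # 4 GB reserved
pool = cluster.resourcePool.CreateResourcePool(name="Engineering", spec=spec)
```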

In past exams, there were questions asking you to calculate resources given a number of resource pools. Make sure you go over how to do that.

Objective 1.8 – Differentiate between VDS and VSS

VDS and VSS are networking constructs in vSphere. VDS is Virtual Distributed Switch and VSS is Virtual Standard Switch.

Virtual Standard Switch is the base switch. It is what is installed by default when ESXi is deployed. It has only a few features and requires you to configure a switch on every host. As you can imagine, this can get tedious, and it is difficult to make the switches exactly the same, which is what you need to do in order for VMs to seamlessly move across hosts. You could create a host profile template to make sure they are the same, but then you lose the dynamic nature of switches.

Standard Switches create a link between physical NICs and virtual NICs. You can name them essentially whatever you want, and you can assign VLAN IDs. You can shape traffic but only outbound. Here is a picture I lifted from the official documentation for a pictorial representation of a VSS.


VDSs on the other hand add a management plane to your networking. Why is this important? It allows you to control all your host networking through one UI. This does require vCenter and a certain level of licensing: Enterprise Plus or higher, unless you buy vSAN licensing. Essentially you are still adding a switch to every host, just a fancier one that can do more things and that you only have to configure once.

There are different versions of VDS you can create, which are based on the vSphere version they were introduced with. Each version has its own features, and a higher version retains all the features of the lower one and adds to it. Some of those features include Network I/O Control (NIOC), which allows you to shape your bandwidth both incoming and outgoing. VDS also includes a rollback ability, so that if you make a change and the switch loses connectivity, it will revert the change automatically.

Here is a screenshot of me making a new VDS and some of the features that each version adds:


Here is a small table showing the differences between the switches.

Feature vSphere Standard Switch vSphere Distributed Switch
VLAN Segmentation Yes Yes
802.1q tagging Yes Yes
NIC Teaming Yes Yes
Outbound traffic shaping Yes Yes
Inbound traffic shaping No Yes
VM port blocking No Yes
Private VLANs No Yes (3 Types – Promiscuous, Community, Isolated)
Load Based Teaming No Yes
Network vMotion No Yes
NetFlow No Yes
Port Mirroring No Yes
LACP support No Yes
Backup and restore network configuration No Yes
Link Layer Discovery Protocol No Yes
NIOC No Yes
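If you want to see the API side of this, below is a minimal pyVmomi sketch that creates a distributed switch with two uplinks in a datacenter’s network folder. The vCenter address, credentials, and switch name are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

dc = next(e for e in content.rootFolder.childEntity if isinstance(e, vim.Datacenter))

# Build a VMware distributed switch with two uplinks in the datacenter's network folder.
config = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    name="Prod-VDS",
    uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["uplink1", "uplink2"]))
dc.networkFolder.CreateDVS_Task(vim.DistributedVirtualSwitch.CreateSpec(configSpec=config))
```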

Objective 1.9 – Describe the purpose of cluster and the features it provides

A vSphere cluster is a group of ESXi host machines. When grouped together, vSphere aggregates all of the resources of each host and treats it like a single pool. There are a number of features and capabilities you can only do with clusters. Here is a screenshot of what you have available to you. I will now go over them.


Under Services you can see DRS and vSphere Availability (HA). You also see vSAN on the list, as vSAN requires a cluster as well. We’ve already covered HA and DRS a bit but there are more features in each.

DRS

DRS Automation – This option lets vSphere make VM placement decisions or recommendations for placement. I trust it with Fully Automated, as you can see in the window above. There are a few situations here and there where you might not want to, but 90% of the time I would say trust it. The small use cases where you might turn it off might be something like vCD deployments, but you could also just turn down the sensitivity instead. You have the following configuration options:

Automation

  • Automation Level – options are Fully Automated, Partially Automated and Manual. Fully automated provides placement at VM startup and moves VMs as needed based on Migration Threshold. Partially Automated places the VM at startup and makes recommendations for moving but doesn’t actually move without approval. Manual will only make recommendations and requires you to accept them (or ignore).
  • Migration Threshold – This is how sensitive the cluster is to resource imbalance. It is based on a scale of 1-5, 5 being the most sensitive. If you set it to 5, if vSphere thinks there is any benefit to moving the VM to a different host, it will do so. 1 is lazy and won’t move anything unless it has to satisfy cluster constraints. 3 is default and usually a good balance.
  • Predictive DRS – Using real-time metrics and metrics pulled in through vRealize Operations Manager, vSphere tries to predict (based on past performance) when additional resources might be needed by a VM and moves it to a host that can provide them.
  • Virtual Machine Automation – This allows you to override DRS settings for individual VMs.

Additional Options

  • VM Distribution – This allows you to try to spread the number of VMs evenly through your cluster hosts. This prevents any host from being too heavy with VMs even though it might have the resources to support them.
  • Memory Metric for Load Balancing – This load balances your VMs across hosts based on consumed memory instead of active memory. This can bite you if you overcommit a host’s memory and your VMs actually start using all the memory you have assigned to them, so don’t overcommit if you use this setting.
  • CPU Over-Commitment – You can limit the amount of over-commitment for CPU resources. This is done on a ratio basis. (20 vCPUs : 1 physical CPU for example)

Power Management

  • DPM – Distributed Power Management (though it arguably should be Dynamic Power Management). This allows you to keep hosts turned off unless they are needed to satisfy resource needs, which saves power in your datacenter. It will use Wake-on-LAN, IPMI, iDRAC, or iLO to turn the hosts back on. You can override this for individual hosts (see the sketch after this list).
  • Automation Level – You can set this to Manual or Automatic
  • DPM Threshold – Just like DRS Migration Threshold, this changes sensitivity on a scale of 1-5, with 5 being the most sensitive. If resource utilization gets high, DPM will turn on another host to help with the load.
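As referenced in the DPM bullet above, here is a hedged pyVmomi sketch that turns on fully automated DRS at the default migration threshold and automated DPM on a cluster. Names and credentials are made up, and the enum values are passed as plain strings.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")

spec = vim.cluster.ConfigSpecEx()
# Fully automated DRS at the middle (default) migration threshold.
spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True,
                                           defaultVmBehavior="fullyAutomated",
                                           vmotionRate=3)
# Automated DPM so idle hosts can be powered down and woken when needed.
spec.dpmConfig = vim.cluster.DpmConfigInfo(enabled=True,
                                           defaultDpmBehavior="automated")
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```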

vSphere Availability (HA)

There are a number of configuration options to configure. Most defaults are decent if you don’t have a specific use case. Let’s go through them.

  • Proactive HA – This feature receives messages from a provider like Dell’s OpenManage Integration plugin and, based on those messages, will migrate VMs to a different host due to the impending doom of the original host. It can make recommendations in Manual mode or move them automatically. After all VMs are off the host, you choose how to remediate the sick host. You can place it in Maintenance Mode, which prevents it from running any workloads, or you can put it in Quarantine Mode, which allows it to run some workloads if performance is affected. Or you can use a mix of those with… Mixed Mode.
  • Failure Conditions and responses – This is a list of possible host failure scenarios and how you want vSphere to respond to them. This is better and gives you way more control than in the past.
  • Admission Control – What good is a feature to restart VMs if you don’t have enough resources to do so? Not very. Admission Control is the gatekeeper that makes sure you have enough resources to restart your VMs in the case of a host failure. You can ensure this in a couple of ways: dedicated failover hosts, cluster resource percentage, slot policy, or you can disable it. Dedicated failover hosts are like a dedicated hot spare in a RAID set. They do no work and run no VMs unless there is a host failure, which makes this the most expensive option (other than a failure itself). Slot policy takes the largest VM’s CPU and the largest VM’s memory (these can come from two different VMs) and makes that into a “slot,” then determines how many slots your cluster can satisfy. It then looks at how many hosts can fail and still keep all VMs powered on. Cluster resource percentage looks at total resources needed and total available and tries to keep enough free to cover the number of host failures you specify. You can also override this and set a specific percentage to reserve (see the sketch after this list). For any of these policies, if the cluster can’t satisfy the resources needed for the existing VMs after a failure, it will prevent new VMs from powering on.
  • Heartbeat Datastores – This is used to monitor hosts and VMs when the HA network has failed. Using these, vSphere can determine whether the host or a VM is still running by looking at the lock files. This automatically tries to make sure there are at least 2 datastores that all the hosts have connectivity to. You can specify more, or specify which datastores to use.
  • Advanced Options – You can use this to set advanced options for the HA cluster. One example is setting a second gateway to determine host isolation. To use this you will need to set two options: 1) das.usedefaultisolationaddress and 2) das.isolationaddress[…] The first specifies not to use the default gateway and the second sets additional addresses.
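Here is the admission-control sketch referenced in the list above: a minimal pyVmomi example that enables HA and sets the cluster resource percentage policy to reserve 25% of CPU and memory. All names and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")

# Enable HA with admission control reserving 25% of CPU and memory
# (the cluster resource percentage policy described above).
das = vim.cluster.DasConfigInfo(enabled=True, admissionControlEnabled=True)
das.admissionControlPolicy = vim.cluster.FailoverResourcesAdmissionControlPolicy(
    cpuFailoverResourcesPercent=25, memoryFailoverResourcesPercent=25)
cluster.ReconfigureComputeResource_Task(vim.cluster.ConfigSpecEx(dasConfig=das),
                                        modify=True)
```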

Clusters allow for more options than I’ve already listed. You can set up Affinity and Anti-Affinity rules. These are rules set up to keep VMs on certain hosts, or away from others. You might want a specific VM running on a certain host due to licensing, or because of a specific piece of hardware only that host has. Anti-affinity rules might be set up for something like domain controllers. You wouldn’t place them on the same host for availability reasons, so you would set up an anti-affinity rule so that the two of them are always on different hosts.
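If you want to script such a rule, here is a hedged pyVmomi sketch that creates an anti-affinity rule keeping two (made-up) domain controller VMs on separate hosts.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vms_view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
vms = [v for v in vms_view.view if v.name in ("dc01", "dc02")]

clusters = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
cluster = next(c for c in clusters.view if c.name == "Prod-Cluster")

# Keep the two domain controllers on different hosts.
rule = vim.cluster.AntiAffinityRuleSpec(name="Separate-DCs", enabled=True, vm=vms)
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec, modify=True)
```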

EVC Mode is also a cool option enabled by clusters. EVC, or Enhanced vMotion Compatibility, allows you to take hosts of different generations and still migrate VMs between them. Different generations of processors have different features and options. EVC masks the newer ones so there is a level feature set, which means you might not receive all the benefits of the newer processors. Also, a lot of newer processors are more efficient and therefore run at lower clock speeds; if you mask off those efficiencies, you are just left with the lower clock speeds. Be mindful of that when you use it. You can also enable it on a per-VM basis, making it more useful.

Objective 1.10 – Describe virtual machine (VM) file structure

A VM is nothing more than files and software. Hardware is emulated. It makes sense to understand the files that make up a VM then. Here is a picture depicting files you might see in a VM folder lifted from VMware’s book.


Now as for an explanation of those files.

  • .vmx file – This is the file vSphere uses to know what hardware to present. This is essentially a list of the hardware and locations of other files (like the virtual disk). It is also the file used when adding a VM to vSphere inventory.
  • .vswp – This file is what vSphere uses much the same way Microsoft uses a page file. When it runs out of actual physical memory or experiences contention on the host, it will use this file to make up the difference. As expected, since this is using a disk instead of RAM, it will be much slower.
  • .nvram – This file emulates a hardware BIOS for a VM.
  • .log – These are log files for the individual VM. They capture actual events from the VM, such as when a Microsoft Windows machine blue screens (crashes), and can be used for troubleshooting purposes. The file name increments; vSphere maintains up to 6 log files at a time and deletes the oldest file first as needed.
  • .vmtx – This only exists if the VM is a template. In that case the .vmx changes to a .vmtx.
  • .vmdk – This is the disk descriptor file. No actual data from the VM is housed here. Rather the location of the blocks of the actual disk and other information about it are found inside.
  • -flat.vmdk – This is the actual data of the VM. It is hidden unless you look in the CLI. If the VM has multiple disks, there will be more than one of this file and of the .vmdk.
  • .vmsd – This is the snapshot list. If there are no snapshots, then this file is empty.
  • -delta.vmdk – this file is the delta disk when there is an active snapshot. The original -flat.vmdk is frozen and all I/O is routed to this -delta instead.
  • -ctk.vmdk – Not shown in the graphic above, this is the Changed Block Tracking file. It is used by programs like vSphere Data Protection and other backup products.
  • .lck – Also not shown in the graphic, this is a lock file placed in the directory showing that the VM is turned on (or the host thinks it is).
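A quick way to see these files for yourself is the VM’s layoutEx property. The sketch below (pyVmomi, with made-up connection details and VM name) prints the type, size, and path of every file backing a VM.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "web01")

# layoutEx.file lists every file backing the VM: config (.vmx), diskDescriptor (.vmdk),
# diskExtent (-flat.vmdk), swap (.vswp), nvram, log, snapshotList (.vmsd), and so on.
for f in vm.layoutEx.file:
    print(f"{f.type:20} {f.size:>12}  {f.name}")
```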

Objective 1.11 – Describe vMotion and Storage vMotion technology

There are several ways to move VMs around in your environment. vMotion and Storage vMotion are two types of migration. The first thing I did when I taught this was ask: what do you really need to move in order to move a VM? The main piece of what makes up a running VM is its memory; CPU resources are used only briefly. When you perform a vMotion, what you are really doing is moving the active memory to a different host, and the new host then starts working on tasks with its CPU. All pointers in the files that originally point to the first host have to be changed as well. So how does this work?

  1. A first copy pass of the memory is moved over to the new host. All users continue to use the VM on the old host and possibly make changes. vSphere notes these changes in a modified memory bitmap on the source host.
  2. After the first pass happens, the VM is quiesced or paused. During this pause, the modified memory bitmap data is copied to the new host.
  3. After the copy, the VM begins running on the new host. A reverse ARP is sent that notifies the network that this is where the VM now lives, so requests are forwarded to the new address.
  4. Users now use the VM on the new host.

Storage vMotion is moving the VM files to another datastore. Let’s go through the steps:

  1. Initiate the svMotion in the UI.
  2. vSphere copies the data using something called the VMkernel data mover, or, if you have a storage array that supports vSphere Storage APIs Array Integration (VAAI), it offloads the copy to the array.
  3. A new VM process is started
  4. Ongoing I/O is split using a “mirror driver” so that writes are sent to both the old and new vmdks while the copy is in progress.
  5. vSphere cuts over to the new VM files.

This is slightly different than the vMotion process as it only needs one pass to copy all the files due to using the mirror driver.

There is one other type of migration called Cross-Host vSphere vMotion or Enhanced vMotion depending on who you ask. This is a combination of vMotion and svMotion at the same time. This is also notable because this allows you to migrate a VM while using local storage.

There are limitations on vMotion and svMotion. You need to be using the same type of CPUs (Intel or AMD) and the same generation, unless you are using EVC. You should also make sure you don’t have any hardware attached that the new host can’t support, such as mounted CD-ROMs. vMotion will usually perform checks before you initiate it and let you know if there are any issues. You can migrate up to 4 VMs at the same time per host on a 1 Gbps network, or 8 VMs per host on a 10 Gbps network. 128 concurrent vMotions are the limit per VMFS datastore.
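To tie all three migration types together, here is a hedged pyVmomi sketch. The VM, host, and datastore names are placeholders, and in real code you would wait for each returned task to finish before starting the next operation.

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Hypothetical lab values - substitute your own.
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine, vim.HostSystem, vim.Datastore], True)
objs = {o.name: o for o in view.view}
vm = objs["web01"]
target_host = objs["esx02.lab.local"]
target_ds = objs["SAN-Datastore-02"]

# vMotion: move the running VM's compute to another host.
vm.MigrateVM_Task(host=target_host,
                  priority=vim.VirtualMachine.MovePriority.defaultPriority)

# Storage vMotion: move the VM's files to another datastore.
vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds))

# Cross-host ("enhanced") vMotion: change host and datastore in one operation.
vm.RelocateVM_Task(vim.vm.RelocateSpec(host=target_host, datastore=target_ds))
```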

Can you upgrade and Upsize your VCSA?

While brainstorming about one of our labs, the question was raised of whether you can upsize your VCSA while upgrading to a newer version. Specifically, from 6.5 U2 to 6.7 U1 (build 8815520 to 11726888). We wanted to upgrade to the latest version, but we also believed we had outgrown the original VCSA size that we deployed. VMware has made this really simple. I did a quick test in my home lab, and that is what this post is based on.

To start you obtain the VCSA .iso that you are going to upgrade to. After you download it, you go ahead and mount it. Just like you would with a normal install/upgrade, run the appropriate installer. For me it is the Windows one which is located under the \vcsa-ui-installer\win32 directory. The installer.exe launches the following window:

We choose the Upgrade icon here. The next screen lets you know this is a 2-stage process. Here is how it will perform the upgrade:

  1. It will deploy a new appliance that will be your new vCenter.
  2. All of your current data and configurations will be moved over from the old VCSA to the new.

After the copy process is complete it will power off the old VCSA but not delete it. Move to the next screen and accept the License Agreement. The third screen looks like this:

Here you need to put in the information for the source VCSA that you will be migrating from. Once you click Connect To Source, it will ask you for more info; specifically, what your source VCSA is being hosted on. This could be a single host or it could be another vCenter.

You will be asked to accept the SSL Certificates. The next screen will ask you for where you are going to put the new appliance. This can be either a host or a vCenter instance.

Step 5 is setting up the target appliance VM. This is the new VCSA that you will be deploying; specifically, what you want to name it and what the root password will be.

Step 6 is where we can change the size of the deployment. I had a tiny in the previous deployment and I decided that was too small. This time I want to go one step up to the “Small” size. You can see the deployment requirements listed below in a table.

Next step is configuring your network settings.

And the last screen to this stage is just to confirm all your settings. This will then deploy the appliance (during which you grab a nice glass of scotch and wait…preferably something nice like my Macallan 12yr)
Once that has finished, you are off to Part 2 of the process: moving your information over. The first screen you will be presented with (after it runs checks) is Select Upgrade Data. You will be given a list of the data you can move over and the approximate number of scotches you will need for the wait. (Maybe that last part is made up, but hey, you can find out anyway, amirite?)

Since the environment I am moving is relatively pristine, I don’t have much data to move. It estimated 39 minutes, but it actually took less time. You make your decision (it seems pretty straightforward what kind of data you would be interested in) and move to the next screen, which asks whether you want to join VMware’s CEIP, or Customer Experience Improvement Program. The last screen before the operation kicks off is a quick summary, and then a check box at the bottom asking you to confirm you were a decent sysadmin and backed up the source vCenter before you start this process. I personally did not, but like I said, there was no data on it anyway. So we kick off the operation.

Clicking Finish gives you a notification box that the source vCenter will be shut down once this is complete. Acknowledge that and away we go!

Once it completes successfully, you are given the prompt to log in to your new vCenter, which I have done here, and here is the brand new shiny.

I will also link the video of the process here. The video is about 15 minutes long (truncated from about 45 minutes total). Disclaimers: there are many more things you will need to think about before doing this to a production environment. Among them: will all the versions of the VMware products I have work together? You can find that out by referencing here:

https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop
Interoperability Matrix for VMware products

You also need to make sure you can upgrade from your current version to the selected version by going to the same page above, but using the Upgrade Path tab.

Another really important thing to consider is what order you need to upgrade your products. You can find that for 6.7 here.

https://kb.vmware.com/s/article/53710
Update sequence for vSphere 6.7 and its compatible VMware products (53710)