VCP 2019 Study Guide - Section 1

It’s been a while since I’ve done one of these. I did one for the VCP 6.0 and kind of miss it. I’ve decided to take a little different approach this time. I’m going to actually write it completely up as a single document and then slowly leak it out on my blog but also have the full guide available for people to use if they want. I’m not sure the usable life of this since there is a looming version on the horizon for VMware, but it will be a bit before they update the cert.

I’m also changing which certification I’m writing for. I originally did one for the delta exam. This time it will be the full exam; there shouldn’t be an issue using this for the delta, however. The certification exam, 2V0-21.19, covers vSphere 6.7 and consists of 70 questions. You are required to pass with a score of no less than 300 and you are given 115 minutes, which works out to a little over a minute and a half per question. Study well, and if you don’t know something, don’t agonize over it. Mark it and come back. It is very possible a later question will jog your memory or give you hints to the answer.

You will need to venture outside and interact with real people to take this test. No sitting at home in your pjs, unfortunately. You will need to register for the test on Pearson VUE’s website here.

Standard disclaimer: I am sure I don’t cover 100% of the topics needed on the exam, as much as I might try. Make sure you use other guides and do your own research to help out. In other words, you can’t hold me liable if you fail.

Section 1 – VMware vSphere Architectures and Technologies

Objective 1.1 – Identify the pre-requisites and components for vSphere implementation

The first part starts with installation requirements. There are two core components that make up vSphere: ESXi and vCenter Server. There are several requirements for ESXi and for vCenter Server. I’ll cover them here one component at a time to better understand them.

vSphere ESXi Server

The ESXi server is the server that does most of the work. This is where your virtual machines (VMs) run, and it provides the resources all of your VMs need. The documentation also talks about virtual appliances. Virtual appliances are nothing more than preconfigured VMs, usually running some variant of Linux.

There is an order to the installation of vSphere, and the ESXi server is installed first. There are a number of requirements for installation. Some of them I will generalize, as otherwise this would be a study textbook and not a guide. (There’s also a quick script after the list if you want to check a couple of these numbers on hosts you already have.)

  • Supported server platform. The best way to determine if your server is supported is to check against the VMware Compatibility Guide here.
  • At least two CPU cores. This shouldn’t be that big of an issue these days when you have companies such as AMD having mainstream 16-core processors and 64-core Server processors.
  • 64-bit processor released after 2006.
  • The NX/XD bit must be enabled in the BIOS. This is also known as the No-Execute (or eXecute Disable) bit and allows you to segregate areas of memory for use with code or data. Enabling this protects against certain forms of malware exploits.
  • Minimum of 4 GB of RAM. Hopefully you will have at least 6-8 GB in order to give your VMs adequate room to run.
  • Support for Intel VT-x or AMD RVI. This isn’t an issue for most current processors. Only extremely inexpensive or old processors would not have this option in the BIOS.
  • 1+ Gigabit or faster Ethernet controllers. Same as above, make sure it is a supported model.
  • SCSI disk or RAID LUN. These are seen as local drives. This allows you to use them as “scratch” partitions. A scratch partition is a disk partition used by VMware to host logs, updates, or other temporary files.
  • SATA drives. You can use a SATA drive but by default these are considered “remote” not local. This prevents them from being used for that scratch partition.
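One way to sanity-check a couple of these numbers (core count, RAM) on hosts you already have is a short pyVmomi script. This is just a sketch with placeholder names and credentials, assuming the pyvmomi package is installed and the hosts are managed by a vCenter or reachable directly:

```python
# Minimal pyVmomi sketch - the hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only; use proper certs in production
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    cores = host.hardware.cpuInfo.numCpuCores
    mem_gb = host.hardware.memorySize / (1024 ** 3)
    # Exam minimums to remember: 2 cores and 4 GB of RAM.
    print(f"{host.name}: {cores} cores, {mem_gb:.1f} GB RAM, "
          f"ESXi {host.config.product.version}")

Disconnect(si)
```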

You can use UEFI boot mode with vSphere 6.7+ or just regular BIOS mode. Once you have installed ESXi, you should not change the mode from one to the other in the BIOS, or you may need to re-install (it won’t boot). The error message you might encounter in that case is “Not a VMware boot bank”.

VMware requires a boot device with a minimum of 1 GB of storage. When booting from a local disk, 5.2 GB is needed to allow creation of the scratch partition and the VMFS (VMware File System) volume. If you don’t have enough space, or you aren’t using a local drive, the scratch partition is placed on a ramdisk (in RAM). This is not persistent through reboots of the physical machine, and ESXi will give you a message (nagging you) until you do provide a location for it. It actually is a good thing to have, as any dump files (data from ESXi describing what went wrong when a crash occurs) are stored there.

You can Auto Deploy a host as well – this is when you have no local disks at all and are using shared storage to install and run the ESXi software. If you do use this method, you don’t need to have a separate LUN or shared disk set aside for each host; you can share a single LUN across multiple hosts.

Actual installation of the ESXi software is straightforward. You can perform an interactive, scripted, or Auto Deploy installation. The latter requires a bit of preparation and a number of other components: you will need a TFTP server set up and changes made to your DHCP server to allow this to happen. There is more that goes into Auto Deploy, but I won’t cover it here as the cert exam shouldn’t go too far in depth. For an interactive installation you can create a customized ISO if you require specific drivers that aren’t included on the standard VMware ISO.

vSphere vCenter Server

The vCenter Server component of vSphere allows you to manage and aggregate your server hardware and resources. vCenter is where a lot of the magic lies. Using vCenter Server you can migrate running VMs between hosts and so much more. VMware makes available the vCenter Server Appliance, or VCSA. This is a preconfigured Linux-based VM that is deployed into your environment. There are two main groups of services that run on the appliance: vCenter Server and the Platform Services Controller. You can run both of those together in what is known as an “embedded” installation, or you can separate the Platform Services Controller (PSC) out for larger environments. While you can install vCenter on Windows as well, VMware will no longer support that model for the next major release of vSphere.

There are a few software components that make up the vCenter Server Appliance. They include:

  • Project Photon OS 1.0 – This is the Linux variant used for the operating system.
  • Platform Services Controller group of infrastructure services
  • vCenter Server group of services
  • PostgreSQL – This is the database software used.
  • VMware vSphere Update Manager Extension or VUM. This is one way you can keep your vSphere software up to date.

While past versions of vCenter Server Appliance were a bit less powerful, since 6.0 they have been considerably more robust. This one is no exception, with it scaling to 2,000 hosts and 35,000 VMs.

If you do decide to separate the services it is good to know what services are included with which component. They are:

  • vCenter Platform Services Controller or PSC – contains Single Sign On, Licensing, Lookup service, and the Certificate Authority.
  • vCenter Server – contains vCenter Server, vSphere client, vSphere Web Client, Auto Deploy, and the Dump Collector. It also contains the Syslog Collector and Update Manager.

If you go with a distributed model, you need to install the PSC first, since that machine houses the authentication services. If there is more than one PSC, you need to set them up one at a time before you create the vCenter Server(s). Multiple vCenter Servers can then be set up at the same time.

The installation process consists of two stages for the VCSA when using the GUI installer, and one when using the CLI. For the GUI installation, the first stage deploys the actual appliance. The second guides you through the configuration and starts up its services.

If using CLI to deploy, you run a command against a JSON file that has all the values needed to configure the vCenter Server. The CLI installer grabs values inside the JSON file and generates a CLI command that utilizes the VMware OVF Tool. The OVF Tool is what actually installs the appliance and sets the configuration.
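To make that a little more concrete, here is a rough Python sketch of what building such a JSON file could look like. The field names below are illustrative only, loosely modeled on the templates that ship on the installer ISO (in the vcsa-cli-installer/templates directory); start from the real template for your version rather than these keys.

```python
# Illustrative only: these keys approximate the shipped JSON templates and are
# not an authoritative schema. All names, addresses, and passwords are placeholders.
import json

deploy_spec = {
    "new_vcsa": {
        "esxi": {                              # ESXi host that will run the VCSA
            "hostname": "esxi01.lab.local",
            "username": "root",
            "password": "VMware1!",
            "deployment_network": "VM Network",
            "datastore": "datastore1",
        },
        "appliance": {
            "deployment_option": "small",      # tiny / small / medium / large / xlarge
            "name": "vcsa01",
            "thin_disk_mode": True,
        },
        "network": {
            "ip_family": "ipv4",
            "mode": "static",
            "ip": "192.168.1.50",
            "prefix": "24",
            "gateway": "192.168.1.1",
            "dns_servers": ["192.168.1.10"],
            "system_name": "vcsa01.lab.local",
        },
        "os": {"password": "VMware1!", "ssh_enable": True},
        "sso": {"password": "VMware1!", "domain_name": "vsphere.local"},
    },
    "ceip": {"settings": {"ceip_enabled": False}},
}

with open("embedded_vcsa_on_esxi.json", "w") as f:
    json.dump(deploy_spec, f, indent=2)
```

You would then feed that file to the CLI installer (something along the lines of vcsa-deploy install embedded_vcsa_on_esxi.json --accept-eula; check the exact syntax against your installer version). The point is simply that everything the GUI wizard asks for interactively lives in this one file.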

Hardware Requirements vary depending on the deployment configuration. Here are a few tables to help guide you:

Embedded vCenter with PSC

Environment vCPUs Memory
Tiny (up to 10 hosts or 100 VMs) 2 10 GB
Small (up to 100 hosts or 1,000 VMs) 4 16 GB
Medium (up to 400 hosts or 4,000 VMs) 8 24 GB
Large (up to 1,000 hosts or 10,000 VMs) 16 32 GB
X-Large (up to 2,000 hosts or 35,000 VMs) 24 48 GB

If you are deploying an external PSC appliance, you need 2 vCPUs, 4 GB of RAM, and 60 GB of storage for each.

Environment Default Storage Size Large Storage Size X-Large Storage Size
Tiny (up to 10 hosts or 100 VMs) 250 GB 775 GB 1650 GB
Small (up to 100 hosts or 1,000 VMs) 290 GB 820 GB 1700 GB
Medium (up to 400 hosts or 4,000 VMs) 425 GB 925 GB 1805 GB
Large (up to 1,000 hosts or 10,000 VMs) 640 GB 990 GB 1870 GB
X-Large (up to 2,000 hosts or 35,000 VMs) 980 GB 1030 GB 1910 GB
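Since those tables are exactly the kind of thing the exam likes to quiz, here is a quick Python helper built straight from the numbers above, handy for self-testing:

```python
# Deployment sizing data taken from the two tables above:
# (name, max hosts, max VMs, vCPUs, memory GB, default storage GB)
SIZES = [
    ("Tiny",        10,     100,  2, 10, 250),
    ("Small",      100,   1_000,  4, 16, 290),
    ("Medium",     400,   4_000,  8, 24, 425),
    ("Large",    1_000,  10_000, 16, 32, 640),
    ("X-Large",  2_000,  35_000, 24, 48, 980),
]

def pick_vcsa_size(hosts: int, vms: int) -> str:
    """Return the smallest deployment size that covers the given inventory."""
    for name, max_hosts, max_vms, vcpus, mem_gb, storage_gb in SIZES:
        if hosts <= max_hosts and vms <= max_vms:
            return (f"{name}: {vcpus} vCPUs, {mem_gb} GB RAM, "
                    f"{storage_gb} GB default storage")
    return "Beyond X-Large - you need more than one vCenter Server"

print(pick_vcsa_size(150, 1200))   # -> Medium: 8 vCPUs, 24 GB RAM, 425 GB ...
```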

Both the vCenter Server and PSC appliances must be installed on an ESXi 6.0 host or later.

Make sure that DNS is working and the name you choose for your vCenter Server Appliance is resolvable before you start installation.

Installation happens from a client machine, which has its own requirements. If using Windows, you can use Windows 7-10 or Server 2012-2016 (x64). Linux users can use SUSE 12 or Ubuntu 14.04. On the Mac, OS X 10.9-10.11 and macOS Sierra are supported.

Installation on Microsoft Windows

This may be covered on the test, but I can’t imagine too many questions since the Windows version is being deprecated. That being said, the vCPU and memory requirements are the same as for the appliance. The storage sizes are different. They are:

Default Folder | Embedded | vCenter | PSC
Program Files | 6 GB | 6 GB | 1 GB
ProgramData | 8 GB | 8 GB | 2 GB
System folder (to cache the MSI installer) | 3 GB | 3 GB | 1 GB

As far as operating systems go, it requires a minimum of Microsoft Windows Server 2008 SP2 x64. For the database you can use the built-in PostgreSQL for up to 20 hosts and 200 VMs. Otherwise you will need Oracle or Microsoft SQL Server.

Objective 1.2 – Identify vCenter high availability (HA) requirements

vCenter High Availability is a mechanism that protects your vCenter Server against host and hardware failures. It also helps reduce downtime associated with patching your vCenter Server. This is from the Availability guide. Honestly, I’m not sure about the last one: if you are upgrading an embedded installation, your vCenter might be unavailable for a bit, but not for very long (unless there is a failure). If distributed, you have other PSCs and vCenter Servers to take up the load. So I’m not sure that argument really works for me in that scenario. Perhaps someone might enlighten me later and I’m not thinking it all the way through. Either way…

vCenter Server High Availability uses three VCSA nodes: two full VCSA nodes and a witness node. One VCSA node is active and one passive. They are connected by a vCenter HA network that is created when you set this up. This network is used to replicate data between the active and passive nodes and to provide connectivity to the witness node. Requirements are:

  • ESXi 5.5 or later is required. Three hosts are strongly recommended, so that each appliance sits on a different physical host. Using DRS is also recommended.
  • If using a management vCenter (for the management cluster), vCenter Server 5.5+ is required.
  • vCenter Server Appliance 6.5+ is required. Your Deployment size should be “Small” at a minimum. You can use VMFS, NFS, or vSAN datastores.
  • Latency on the network used for the HA network must be less than 10 ms. It should be on a separate subnet than the regular Management Network.
  • A single vCenter Server Standard license is required.

Objective 1.3 – Describe storage types for vSphere

vSphere supports multiple types of storage. I will go over the main types: local and networked storage.

Local Storage

Local storage is storage connected directly to the server. This can include a Direct Attached Storage (DAS) enclosure that is connected to an external SAS card, or storage in the server itself. ESXi supports SCSI, IDE, SATA, USB, SAS, flash, and NVMe devices. You cannot use IDE/ATA or USB devices to store virtual machines; any of the other types can host VMs. The problem with local storage is that the server is a single point of failure, or SPOF. If the server fails, no other server can access the VM. There is one special configuration that does allow sharing of local storage, however, and that is vSAN. vSAN requires flash drives for cache and either flash or regular spinning disks for capacity. These are aggregated across servers and collected into a single datastore. VMs are duplicated across servers, so if one server goes down, access is retained and the VM can still be started and accessed.

Network Storage

Network storage consists of dedicated enclosures that have controllers running a specialized OS. There are several types, but they share some things in common. They use a high-speed network to share the storage, and they allow multiple hosts to read and write to the storage concurrently. You connect to a single LUN through only one protocol, but you can use multiple protocols on a host for different LUNs.

Fibre Channel, or FC, is a specialized type of network storage. FC uses specific adapters that allow your server to access it, known as Fibre Channel Host Bus Adapters, or HBAs. Fibre Channel typically uses fiber-optic (glass) cables to transport its signal, but occasionally copper is used. Another type of Fibre Channel can connect over a regular LAN; it is known as Fibre Channel over Ethernet, or FCoE.

iSCSI is another storage type supported by vSphere. This uses regular Ethernet to transport data. Several types of adapters are available to communicate with the storage device. You can use a hardware iSCSI adapter or a software one. If you use a hardware adapter, the server offloads the SCSI and possibly the network processing. There are dependent hardware and independent hardware adapters. The first still needs to use the ESXi host’s networking; independent hardware adapters can offload both the iSCSI and the network processing. A software iSCSI adapter uses a standard Ethernet adapter, and all the processing takes place in the CPU of the host.

VMware also supports a new type of adapter known as iSER, or iSCSI Extensions for RDMA. This allows ESXi to use the RDMA protocol instead of TCP/IP to transport iSCSI commands, and it is much faster.

Finally, vSphere also supports the NFS 3 and 4.1 protocol for file-based storage. Unlike the rest of the storage mentioned above, this is presented as a share to the host instead of block-level raw disks. Here is a small table on networked storage for easier perusal.

Technology Protocol Transfer Interface
Fibre Channel FC/SCSI Block access FC HBA
Fibre Channel over Ethernet (FCoE) FCoE / SCSI Block access
  • Converged Network Adapter
  • NIC with FCoE support
iSCSI iSCSI Block access
  • iSCSI adapter (dependent or independent)
  • NIC (Software adapter)
NAS IP / NFS File level Network adapter

Objective 1.4 – Differentiate between NIOC and SIOC

NIOC = Network I/O Control
SIOC = Storage I/O Control

Network I/O Control allows you to determine and shape bandwidth for your vSphere networks. It works in conjunction with network resource pools to allow you to allocate bandwidth to specific types of traffic. You enable NIOC on a vSphere Distributed Switch and then set shares according to your needs in the configuration of the VDS. This is a feature requiring Enterprise Plus licensing or higher. Here is what it looks like in the UI.

Storage I/O Control allows cluster-wide storage I/O prioritization. You can control the amount of storage I/O allocated to virtual machines so that important virtual machines get preference over less important ones. This is accomplished by enabling SIOC on the datastore and setting shares and an upper limit of IOPS per VM. SIOC is enabled by default on SDRS clusters. Here is what the screen looks like to enable it.
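For the API-minded, both features can also be flipped on from a script. Here is a hedged pyVmomi sketch; the switch and datastore names are placeholders, and you should double-check the behavior in a lab before trusting anything like this:

```python
# Hedged pyVmomi sketch: enable NIOC on a distributed switch and SIOC on a
# datastore. Hostnames, credentials, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with this name."""
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    return next(o for o in view.view if o.name == name)

# NIOC is a single call on the distributed switch object.
dvs = find_by_name(vim.DistributedVirtualSwitch, "DSwitch-Prod")
dvs.EnableNetworkResourceManagement(enable=True)

# SIOC is a config spec applied through the StorageResourceManager.
ds = find_by_name(vim.Datastore, "Datastore-Prod-01")
spec = vim.StorageResourceManager.IORMConfigSpec(enabled=True)
content.storageResourceManager.ConfigureDatastoreIORM_Task(ds, spec)

Disconnect(si)
```

Per-VM shares and IOPS limits are set on the virtual disks of the VMs themselves, not in this datastore-level call.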

Objective 1.5 – Manage vCenter inventory efficiently

There are several tools you can use to manage your inventory more easily. vSphere allows you to use multiple types of folders to hold your vCenter inventory. Folders can also be used to assign permissions and set alarms on objects. You can put multiple objects inside a folder, but only one type of object per folder. For example, if you had VMs inside a folder, you wouldn’t be able to add a host to it.

vApps are another way to manage objects, and they can be used to manage other attributes as well. You can assign resources and even startup order with vApps.

You can use Tags and Categories to better organize your inventory and make it searchable. You create them off the main menu; there is a menu item called Tags and Custom Attributes.


You can create Categories such as “Operating Systems” and then Tags such as “Windows 2012” and others. This will make your VMs easier to manage and search. You can then see the tags on the summary page of the VM, as shown here.



Tags can be used for rules on VMs too. You can see this (although a bit branded) by reading a blog post I wrote for Rubrik here.
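Tags live in the vSphere Automation (REST) API rather than the older SOAP API, so scripting them looks a little different. This is a loosely hedged sketch against the 6.x /rest endpoints; the hostname and credentials are placeholders, and you should verify the exact payload shapes in the API Explorer on your own vCenter:

```python
# Hedged sketch of the vSphere Automation REST API for tagging.
# Verify endpoint paths and payloads against your vCenter's API Explorer.
import requests

VC = "https://vcsa.lab.local"
s = requests.Session()
s.verify = False                                   # lab only

# Log in and reuse the returned session token on later calls.
token = s.post(f"{VC}/rest/com/vmware/cis/session",
               auth=("administrator@vsphere.local", "VMware1!")).json()["value"]
s.headers["vmware-api-session-id"] = token

# Create a category, then a tag inside it (mirroring the example above).
cat_id = s.post(f"{VC}/rest/com/vmware/cis/tagging/category", json={
    "create_spec": {"name": "Operating Systems", "description": "",
                    "cardinality": "SINGLE", "associable_types": []}
}).json()["value"]

tag_id = s.post(f"{VC}/rest/com/vmware/cis/tagging/tag", json={
    "create_spec": {"name": "Windows 2012", "description": "",
                    "category_id": cat_id}
}).json()["value"]

print("Created tag", tag_id, "in category", cat_id)
```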

Objective 1.6 – Describe and differentiate among vSphere HA, DRS, and SDRS functionality

HA is a feature designed for VM resilience. The other two, DRS and SDRS, are for managing resources. HA stands for High Availability. It works by pooling all the hosts and VMs into a cluster. Hosts are monitored, and in the event of a failure, VMs are restarted on another host.

DRS stands for Distributed Resource Scheduling. This is also a feature used on a host cluster. DRS is a vSphere feature that will relocate VMs and make recommendations on host placement based on current load.

Finally, SDRS is Distributed Resource Scheduling for Storage. This is enabled on a Datastore cluster and just like DRS will relocate the virtual disks of a VM or make recommendations based on usage and I/O Load.

You can adjust whether or not DRS/SDRS takes any actions or just makes recommendations.

Objective 1.7 – Describe and identify resource pools and use cases

The official description of a resource pool is a logical abstraction for flexible management of resources. My unofficial description is a construct inside vSphere that allows you to partition and control resources for specific VMs. Resource pools partition memory and CPU resources.

You start with the root resource pool. This is the pool of resources that exists at the host level. You don’t see it, but it’s there. You then create a resource pool under that, which cordons off resources. It’s also possible to nest resource pools. For example, if you had a company and inside that company you had departments, you could partition resources into the company and then the departments. This works as a hierarchy. When you create a child resource pool from a parent, you are further diminishing your resources unless you allow it to draw more from further up the hierarchy.

Why use resource pools? You can delegate control of resources to other people. There is isolation between pools, so the resources of one don’t affect another. You can use resource pools to delegate permissions and access to VMs. Resource pools are abstracted from the hosts’ resources, so you can add and remove hosts without having to make changes to resource allocations.

You can identify resource pools by their icon.


When you create a resource pool, you have a number of options you will need to make decisions on.

Shares – Shares can be any arbitrary number you make up. All the shares from all the sibling resource pools add up to a total, and each pool’s portion of that total is its share of the parent’s resources. For example, if you have two pools that each have 8,000 shares, there are 16,000 shares in total and each resource pool makes up half of that, or 8,000/16,000. There are default options available as well, in the form of Low, Normal, and High. Those equal 1,000, 2,000, and 4,000 shares respectively (a 1:2:4 ratio).

Reservations – This is a guaranteed allocation of CPU or memory resources you are giving to that pool. The default is 0. Reserved resources are held by that pool regardless of whether there are VMs inside it or not.

Expandable Reservation is a check box that allows the pool to “borrow” resources from its parent resource pool. If this is the parent pool, then it will borrow from the root pool.

Limits – These specify the upper limit of CPU or memory resources a resource pool can grab. When teaching VMware’s courses, the guidance was that unless there is a definite reason or need for it, you shouldn’t use limits. While shares only come into play when there is contention (VMs fighting over resources), limits create a hard stop for the VM even if free resources are plentiful. Usually there is no reason to limit how much of a resource a VM can use when there is no contention.

In past exams, there were questions asking you to calculate resources given a number of resource pools. Make sure you go over how to do that.
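Here is a tiny worked example of that math in plain Python, using numbers like the ones in the Shares description above. Nothing vSphere-specific here, just the proportional split you should be able to do on paper:

```python
# Proportional share math: under contention, each sibling pool gets
# (its shares / total sibling shares) of the contended resource.
def split_resource(total, pools):
    total_shares = sum(pools.values())
    return {name: round(total * shares / total_shares)
            for name, shares in pools.items()}

# Two sibling pools with 8,000 shares each -> a 50/50 split of 20,000 MHz.
print(split_resource(20_000, {"PoolA": 8_000, "PoolB": 8_000}))
# {'PoolA': 10000, 'PoolB': 10000}

# Double PoolA's shares and the split becomes 2/3 vs 1/3.
print(split_resource(20_000, {"PoolA": 16_000, "PoolB": 8_000}))
# {'PoolA': 13333, 'PoolB': 6667}
```

Remember that shares only matter when there is contention, and only relative to siblings under the same parent.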

Objective 1.8 – Differentiate between VDS and VSS

VDS and VSS are networking constructs in vSphere. VDS is the vSphere Distributed Switch and VSS is the vSphere Standard Switch.

The Virtual Standard Switch is the base switch. It is what is installed by default when ESXi is deployed. It has only a few features and requires you to configure a switch on every host. As you can imagine, this can get tedious, and it is difficult to keep the switches exactly the same on every host, which is what you need in order for VMs to seamlessly move across hosts. You could create a host profile template to make sure they stay the same, but then you lose the dynamic nature of switches.

Standard Switches create a link between physical NICs and virtual NICs. You can name them essentially whatever you want, and you can assign VLAN IDs. You can shape traffic but only outbound. Here is a picture I lifted from the official documentation for a pictorial representation of a VSS.


VDSs, on the other hand, add a management plane to your networking. Why is this important? It allows you to control all your host networking through one UI. This does require vCenter and a certain level of licensing: Enterprise Plus or higher, unless you buy vSAN licensing. Essentially you are still adding a switch to every host, just a fancier one that can do more things and that you only have to change in one place.

There are different versions of VDS you can create, based on the vSphere version they were introduced with. Each version has its own features, and a higher version retains all the features of the lower one and adds to it. Some of those features include Network I/O Control (NIOC), which allows you to shape your bandwidth incoming and outgoing. VDS also includes a rollback ability, so that if you make a change and the switch loses connectivity, it will revert the change automatically.
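One way to see the difference on a live system: every host still has its own standard switches, plus a host-side proxy of each distributed switch it participates in. Here is a hedged pyVmomi sketch (placeholder credentials again) that lists both per host:

```python
# Hedged pyVmomi sketch: list standard switches and VDS proxy switches per host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    net = host.config.network
    vss_names = [vswitch.name for vswitch in net.vswitch]      # standard switches
    vds_names = [proxy.dvsName for proxy in net.proxySwitch]   # distributed switches
    print(f"{host.name}: VSS={vss_names} VDS={vds_names}")

Disconnect(si)
```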

Here is a screenshot of me making a new VDS and some of the features that each version adds:


Here is a small table showing the differences between the switches.

Feature vSphere Standard Switch vSphere Distributed Switch
VLAN Segmentation Yes Yes
802.1q tagging Yes Yes
NIC Teaming Yes Yes
Outbound traffic shaping Yes Yes
Inbound traffic shaping No Yes
VM port blocking No Yes
Private VLANs No Yes (3 Types – Promiscuous, Community, Isolated)
Load Based Teaming No Yes
Network vMotion No Yes
NetFlow No Yes
Port Mirroring No Yes
LACP support No Yes
Backup and restore network configuration No Yes
Link Layer Discovery Protocol No Yes
NIOC No Yes

Objective 1.9 – Describe the purpose of a cluster and the features it provides

A vSphere cluster is a group of ESXi host machines. When hosts are grouped together, vSphere aggregates all of their resources and treats them like a single pool. There are also a number of features and capabilities you only get with clusters. Here is a screenshot of what you have available to you. I will now go over them.


Under Services you can see DRS and vSphere Availability (HA). You also see vSAN on the list, as vSAN requires a cluster as well. We’ve already covered HA and DRS a bit but there are more features in each.

DRS

DRS Automation – This option lets vSphere make VM placement decisions, or just recommendations for placement. I trust it with Fully Automated, as you can see in the window above. There are a few situations here and there where you might not want to, but 90% of the time I would say trust it. The small use cases where you might turn it off might be something like vCD deployments, but you could also just turn down the sensitivity instead. You have the following configuration options:

Automation

  • Automation Level – Options are Fully Automated, Partially Automated, and Manual. Fully Automated provides placement at VM startup and moves VMs as needed based on the Migration Threshold. Partially Automated places the VM at startup and makes recommendations for moving, but doesn’t actually move anything without approval. Manual only makes recommendations and requires you to accept them (or ignore them). There is a short read-only script after this list showing where these settings live in the API.
  • Migration Threshold – This is how sensitive the cluster is to resource imbalance. It is based on a scale of 1-5, 5 being the most sensitive. If you set it to 5, vSphere will move a VM whenever it thinks there is any benefit to doing so. 1 is lazy and won’t move anything unless it has to in order to satisfy cluster constraints. 3 is the default and usually a good balance.
  • Predictive DRS – Using real-time metrics and metrics pulled in through vRealize Operations Manager, vSphere tries to predict (based on past performance) when additional resources might be needed by a VM and moves it to a host that can provide them.
  • Virtual Machine Automation – This allows you to override DRS settings for individual VMs.
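As mentioned above, here is a short read-only pyVmomi sketch showing where those knobs live in the API (cluster name and credentials are placeholders):

```python
# Hedged pyVmomi sketch: read a cluster's DRS and HA settings.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Prod-Cluster")

drs = cluster.configurationEx.drsConfig
ha = cluster.configurationEx.dasConfig
print("DRS enabled:", drs.enabled,
      "| behavior:", drs.defaultVmBehavior,      # manual / partiallyAutomated / fullyAutomated
      "| migration threshold (vmotionRate):", drs.vmotionRate)
print("HA enabled:", ha.enabled)

Disconnect(si)
```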

Additional Options

  • VM Distribution – This allows you to try to spread the number of VMs evenly through your cluster hosts. This prevents any host from being too heavy with VMs even though it might have the resources to support them.
  • Memory Metric for Load Balancing – This load balances your VMs across hosts based on consumed memory instead of active memory. This can bite you if you overcommit a host’s memory and your VMs actually start using all the memory you have assigned to them. So don’t overcommit if you use this setting.
  • CPU Over-Commitment – You can limit the amount of over-commitment for CPU resources. This is done on a ratio basis. (20 vCPUs : 1 physical CPU for example)

Power Management

  • DPM – Distributed Power Management (it should really be called Dynamic Power Management). This allows you to keep hosts turned off unless they are needed to satisfy resource needs, which saves power in your datacenter. It will use Wake-on-LAN, IPMI, iDRAC, or iLO to turn the hosts back on. You can override the setting for individual hosts.
  • Automation Level – You can set this to Manual or Automatic.
  • DPM Threshold – Just like DRS Migration Threshold, this changes sensitivity on a scale of 1-5, with 5 being the most sensitive. If resource utilization gets high, DPM will turn on another host to help with the load.

vSphere Availability (HA)

There are a number of configuration options to configure. Most defaults are decent if you don’t have a specific use case. Let’s go through them.

  • Proactive HA – This feature receives messages from a provider like Dell’s OpenManage Integration plugin, and based on those messages will migrate VMs to a different host due to the impending doom of the original host. It can either make recommendations (Manual mode) or move the VMs automatically. After all VMs are off the host, you can choose how to remediate the sick host. You can place it in Maintenance Mode, which prevents it from running any workloads; you can put it in Quarantine Mode, which allows it to run some workloads if performance is affected; or you can use a mix of those with… Mixed Mode.
  • Failure Conditions and responses – This is a list of possible host failure scenarios and how you want vSphere to respond to them. This is better and gives you way more control than in the past.
  • Admission Control – What good is a feature to restart VMs if you don’t have enough resources to do so? Not very. Admission Control is the gatekeeper that makes sure you have enough resources to restart your VMs in the case of host failure. You can ensure this a few ways: dedicated failover hosts, cluster resource percentage, slot policy, or you can disable it. Dedicated failover hosts are like a dedicated hot spare in a RAID: they do no work and run no VMs until there is a host failure. This is the most expensive option (other than a failure itself). Slot policy takes the largest VM’s CPU and the largest VM’s memory (they can be two different VMs) and makes that into a “slot,” then determines how many slots your cluster can satisfy. Then it looks at how many hosts can fail while still keeping all VMs powered on. Cluster resource percentage looks at total resources needed and total available and tries to keep enough free to lose the number of hosts you specify; you can also override this and set a specific percentage to reserve. For any of these policies, if the cluster can’t satisfy the resources needed in the case of a failure, it will prevent new VMs from powering on. There is a small worked example of the slot math right after this list.
  • Heartbeat Datastores – This is used to monitor hosts and VMs when the HA network has failed. Using datastore heartbeats, vSphere can determine whether the host is still running, or whether a VM is still running, by looking at the lock files. This automatically tries to make sure there are at least two datastores that all the hosts have connectivity to. You can specify more, or specify which datastores to use.
  • Advanced Options – You can use this to set advanced options for the HA Cluster. One might be setting a second gateway to determine host isolation. To use this you will need to set two options. 1) das.usedefaultisolationaddress and 2) das.isolationaddress[…] The first specifies not to use the default gateway and the second sets additional addresses.
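As promised in the Admission Control bullet, here is a small worked example of the slot-policy math in plain Python. The reservations and host capacities below are made up purely for illustration (and real slot sizing also applies default minimums and memory overhead, which I’m ignoring here):

```python
# Rough slot-policy math. A slot is sized by the largest CPU reservation and the
# largest memory reservation among powered-on VMs (possibly two different VMs).
vms = [  # (cpu_reservation_mhz, mem_reservation_mb) - made-up numbers
    (500, 1024), (1000, 2048), (250, 4096),
]
hosts = [  # (cpu_capacity_mhz, mem_capacity_mb) - made-up numbers
    (20_000, 65_536), (20_000, 65_536), (20_000, 65_536),
]

slot_cpu = max(cpu for cpu, _ in vms)     # 1000 MHz
slot_mem = max(mem for _, mem in vms)     # 4096 MB

# Slots per host = whichever resource runs out first.
slots_per_host = [min(cpu // slot_cpu, mem // slot_mem) for cpu, mem in hosts]
total_slots = sum(slots_per_host)
used_slots = len(vms)

# To tolerate one host failure, assume the host with the most slots is the one lost.
slots_after_failure = total_slots - max(slots_per_host)
print(f"Slot size: {slot_cpu} MHz / {slot_mem} MB")
print(f"Slots per host: {slots_per_host}, total: {total_slots}")
print("Can tolerate 1 host failure:", used_slots <= slots_after_failure)
```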

Clusters allow for more options than I’ve already listed. You can set up Affinity and Anti-Affinity rules. These are rules set up to keep VMs on certain hosts, or away from other VMs. You might want a specific VM running on a certain host due to licensing, or for a specific piece of hardware only that host has. Anti-affinity rules might be set up for something like domain controllers. You wouldn’t place them on the same host, for availability reasons, so you would set up an anti-affinity rule so that the two of them would always be on different hosts.

EVC Mode is also a cool option enabled by clusters. EVC, or Enhanced vMotion Compatibility, allows you to mix hosts with different processor generations in the same cluster and still migrate VMs between them. Different generations of processors have different features and options. EVC masks the newer ones so there is a level feature set, which means you might not receive all the benefits of a newer processor. A lot of newer processors get their gains from efficiency rather than raw clock speed; if you mask off those efficiencies, you are just left with the lower clock speeds. Be mindful of that when you use it. You can also enable EVC on a per-VM basis, making it more flexible.

Objective 1.10 – Describe virtual machine (VM) file structure

A VM is nothing more than files and software; the hardware is emulated. It makes sense, then, to understand the files that make up a VM. Here is a picture, lifted from VMware’s book, depicting files you might see in a VM folder.


Now for an explanation of those files (and after the list, a small script that buckets a VM folder by these extensions).

  • .vmx file – This is the file vSphere uses to know what hardware to present. This is essentially a list of the hardware and locations of other files (like the virtual disk). It is also the file used when adding a VM to vSphere inventory.
  • .vswp – This file is what vSphere uses much the same way Microsoft uses a page file. When it runs out of actual physical memory or experiences contention on the host, it will use this file to make up the difference. As expected, since this is using a disk instead of RAM, it will be much slower.
  • .nvram – This file emulates a hardware BIOS for a VM.
  • .log – These are log files for the individual VM. They capture actual errors from the VM, such as when a Microsoft Windows machine blue screens (crashes), and can be used for troubleshooting purposes. The file name increments, and vSphere maintains up to 6 log files at a time, deleting the oldest file first as it needs to.
  • .vmtx – This only occurs if the VM is a template. In that case the .vmx changes to a .vmtx.
  • .vmdk – This is the disk descriptor file. No actual data from the VM is housed here. Rather the location of the blocks of the actual disk and other information about it are found inside.
  • -flat.vmdk – This is the actual data of the VM. It is hidden unless you look in the CLI. If the VM has multiple disks, there will be more than one of these (and more than one .vmdk descriptor).
  • .vmsd – This is the snapshot list. If there are no snapshots, then this file is empty.
  • -delta.vmdk – This file is the delta disk created when there is an active snapshot. The original -flat.vmdk is frozen and all I/O is routed to the -delta file instead.
  • -.ctk – Not shown in the graphic above, this is the Change block tracking file. This is used for programs like vSphere Data Protection or other backup programs.
  • -.lck – Also not shown in the graphic, this is a lock file placed in the directory showing that the VM is turned on (or the host thinks it is).
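
Since a VM really is just these files sitting in a folder on a datastore, a quick way to internalize the layout is to point a small script at a VM folder and bucket what you find. Plain Python, with the role descriptions taken from the list above (the path is a placeholder):

```python
# Group the contents of a VM folder by the file roles described above.
from pathlib import Path

ROLES = {
    ".vmx":   "VM configuration (hardware list)",
    ".vswp":  "swap file used when physical memory runs short",
    ".nvram": "BIOS settings",
    ".log":   "VM log file",
    ".vmtx":  "template configuration",
    ".vmdk":  "disk descriptor (or -flat/-delta data disk)",
    ".vmsd":  "snapshot list",
}

def describe_vm_folder(folder: str) -> None:
    for f in sorted(Path(folder).iterdir()):
        role = ROLES.get(f.suffix.lower(), "other / lock / temporary file")
        print(f"{f.name:<40} {role}")

describe_vm_folder("/vmfs/volumes/datastore1/my-vm")   # placeholder path
```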

Objective 1.11 – Describe vMotion and Storage vMotion technology

There are several ways to move VMs around in your environment; vMotion and Storage vMotion are two types of migration. The first thing I did when I taught this was ask: what do you really need to move in order to move a VM? The main piece of a running VM is its memory; CPU resources are used only briefly. When you perform a vMotion, what you are really doing is just moving active memory to a different host. The new host will then start working on tasks with its CPU. All pointers in the files that originally point to the first host have to be changed as well. So how does this work?

  1. A first copy pass of the memory is moved over to the new host. All users continue to use the VM on the old host and possibly make changes. vSphere notes these changes in a modified memory bitmap on the source host.
  2. After the first pass happens, the VM is quiesced or paused. During this pause, the modified memory bitmap data is copied to the new host.
  3. After the copy, the VM begins running on the new host. A reverse ARP is sent that notifies everyone that this is where the VM is now and to forward requests to the new address.
  4. Users now use the VM on the new host.

Storage vMotion is moving the VM files to another datastore. Let’s go through the steps:

  1. Initiate the svMotion in the UI.
  2. vSphere copies the data using something called the VMkernel data mover, or offloads the copy to the array if it supports vSphere Storage APIs – Array Integration (VAAI).
  3. A new VM process is started.
  4. I/O is split using a “mirror driver” so that writes are sent to both the old and the new VMDKs while the copy is in progress.
  5. vSphere cuts over to the new VM files.

This is slightly different than the vMotion process as it only needs one pass to copy all the files due to using the mirror driver.

There is one other type of migration, called Cross-Host vSphere vMotion or Enhanced vMotion depending on who you ask. This is a combination of vMotion and svMotion at the same time. It is also notable because it allows you to migrate a VM that lives on local storage.
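To tie the three flavors together, here is a hedged pyVmomi sketch. MigrateVM_Task is a compute-only vMotion, while RelocateVM_Task with a datastore (and optionally a host) in the spec covers Storage vMotion and the combined move. All names are placeholders, and in real life you would wait for each task to finish before starting the next:

```python
# Hedged pyVmomi sketch of the three migration flavors. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vimtype], True)
    return next(o for o in view.view if o.name == name)

vm = find_by_name(vim.VirtualMachine, "app01")
new_host = find_by_name(vim.HostSystem, "esxi02.lab.local")
new_ds = find_by_name(vim.Datastore, "Datastore-Prod-02")

# 1) vMotion (compute only): move the running VM to another host.
vm.MigrateVM_Task(host=new_host,
                  priority=vim.VirtualMachine.MovePriority.defaultPriority)

# 2) Storage vMotion: move the VM files to another datastore.
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=new_ds))

# 3) Cross-host ("enhanced") vMotion: change host and datastore in one move.
vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(host=new_host, datastore=new_ds))

Disconnect(si)
```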

There are limitations on vMotion and svMotion. You need to be using the same type of CPUs (Intel or AMD) and the same generation, unless you are using EVC. You should also make sure you don’t have any attached hardware that the new host can’t support (mounted CD-ROMs, etc.). vMotion will usually perform checks before you initiate it and let you know if there are any issues. You can migrate up to 4 VMs at the same time per host on a 1 Gbps network, or 8 VMs per host on a 10 Gbps network. 128 concurrent vMotions is the limit per VMFS datastore.