The official description of a resource pool is a logical abstraction for the flexible management of resources (the same definition as in vSphere 6.x). My unofficial description is an object inside vSphere that allows you to partition resources and allocate them to specific VMs. Resource pools partition memory and CPU.
Everyone starts with the root resource pool. This is a pool of resources that exists at the host level. You don’t see it, but it’s there. When you create a resource pool under it, you slice off a portion of those resources. It’s also possible to nest resource pools. For example, if you had a company with several departments, you could partition resources first to the company and then to each department. This works as a hierarchy. When you create a child resource pool under a parent, you further diminish the available resources, unless you allow the child to draw more from further up the hierarchy.
Why use resource pools? You can delegate control of resources to other people. There is isolation between pools, so the resources of one don’t affect another. You can use resource pools to delegate permissions and access to VMs. Resource pools are abstracted from the hosts’ resources, so you can add and remove hosts without having to change resource allocations.
You can also use resource pools to divide resources among departments that have paid for them. If a department has paid for 50% of a new server, you can set up a resource pool to guarantee that the department receives those resources.
You can identify resource pools by their icon.
Resource pools are created to slice off resources. You can set reservations on resource pools as well, but you can do a bit more: a pool can have an expandable reservation, letting it borrow resources from its parent if it needs to. This picture shows what you can configure when you create a CPU and memory resource pool.
You can also assign shares on an individual VM basis.
To assign disk shares, look at the individual VM.
You can also assign shares and manage network resources on Virtual Distributed Switches with Network I/O Control enabled.
Shares – Shares can be any arbitrary number you make up. The shares from all the resource pools add up to a total, and that total represents the root pool, for example. If you have two pools with 8,000 shares each, there are 16,000 shares in total, and each resource pool makes up half of the total, or 8,000/16,000. Default options are also available in the form of Low, Normal, and High, which equal 1,000, 2,000, and 4,000 shares, respectively.
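The share math above is simple proportion. Here is a minimal sketch of it in Python; the pool names and share values are illustrative, not anything vSphere-specific:

```python
def share_percentages(pools):
    """Given {pool_name: shares}, return each pool's fraction of the total.

    Shares only matter relative to each other: 8,000 of 16,000 total
    shares entitles a pool to half of the contended resource.
    """
    total = sum(pools.values())
    return {name: shares / total for name, shares in pools.items()}

# Two pools with 8,000 shares each -> each holds 50% of the total.
pct = share_percentages({"PoolA": 8000, "PoolB": 8000})
```

Note that doubling every pool's share value changes nothing; only the ratios matter.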
Reservations – This is a guaranteed allocation of CPU or memory resources given to the pool. The default is 0. Reserved resources are held by the pool regardless of whether there are VMs inside it.
Expandable Reservation is a checkbox that allows the pool to “borrow” resources from its parent resource pool. If the pool sits directly under the root, it will borrow from the root pool.
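The admission-control effect of that checkbox can be sketched as a simplified two-level model (this is illustrative logic, not vSphere's actual admission-control code):

```python
def can_reserve(request, pool_unreserved, parent_unreserved, expandable):
    """Can a pool admit a new reservation of `request` (e.g. MHz or MB)?

    Without Expandable Reservation, the request must fit in the pool's own
    unreserved capacity. With it, the shortfall may be borrowed from the
    parent's unreserved capacity. (Simplified two-level model.)
    """
    if request <= pool_unreserved:
        return True
    if expandable:
        return request <= pool_unreserved + parent_unreserved
    return False

# A 3,000 MHz reservation against a pool with only 2,000 MHz unreserved
# succeeds only if the pool may expand into its parent's 2,000 MHz.
ok = can_reserve(3000, 2000, 2000, expandable=True)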
Limits – These specify the upper limit of CPU or memory resources a resource pool can consume. When teaching VMware’s courses, I advise that unless there is a definite reason or need for it, you shouldn’t use limits. While shares only come into play when there is contention (VMs fighting over resources), limits create a hard stop for the VM even if resources are plentiful. Usually, there is no reason to cap how many resources a VM can use when there is no contention.
Past exams have asked you to calculate resources given several resource pools. Make sure you go over how to do that.
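As practice, here is one way to sketch that exam-style arithmetic: under full contention, give each pool its reservation first, split the remaining capacity in proportion to shares, and cap the result at any limit. The cluster size, pool names, and values are hypothetical, and real DRS entitlement math also factors in demand; this is only the simplified calculation the exam questions expect:

```python
def pool_entitlement(capacity_mhz, pools):
    """Simplified CPU entitlement per pool under full contention.

    Each pool first receives its reservation; the remaining capacity is
    divided in proportion to shares, then capped by the pool's limit.
    (Capacity freed by a limit cap is not redistributed in this sketch.)
    """
    remaining = capacity_mhz - sum(p["reservation"] for p in pools.values())
    total_shares = sum(p["shares"] for p in pools.values())
    entitlements = {}
    for name, p in pools.items():
        extra = remaining * p["shares"] / total_shares
        entitlement = p["reservation"] + extra
        limit = p.get("limit")
        entitlements[name] = min(entitlement, limit) if limit is not None else entitlement
    return entitlements

# Hypothetical 12,000 MHz cluster split between two contending pools.
result = pool_entitlement(12000, {
    "Production": {"shares": 8000, "reservation": 2000, "limit": None},
    "Test":       {"shares": 4000, "reservation": 1000, "limit": 4000},
})
```

Working it by hand: 3,000 MHz is reserved up front, the remaining 9,000 MHz splits 2:1 by shares (6,000 and 3,000), so Production is entitled to 8,000 MHz and Test to 4,000 MHz, which its limit just allows.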
You can monitor the resources of both your vCenter Server Appliance and the rest of your vSphere environment from several places. There are many other products that do this as well; vRealize Operations Manager is one such tool. From within vSphere itself, there are several tools available. First, let’s cover the vCenter Server Appliance.
From within the VCSA VAMI (on port 5480), there is a Monitoring pane you can use to see the resources your vCenter is consuming.
As shown in the screenshot above, you can monitor CPU, memory, disks, network, and even the database. From within the HTML5 client, you can monitor the vCenter VM by going to the VM and then clicking on Monitor, as shown here.
You can look at the different resources for the VM and change time periods. You can also monitor the vCenter Server by using “top” on the VM console, as shown here.
To monitor resources on the rest of the environment, click on the object and then the Monitor tab, just like for the vCenter Server. This can be done for any VM, cluster, or host. If you want to monitor hosts via the CLI, you can use ‘esxtop’. This is what that looks like.
The tools used for performance monitoring are precisely the ones shown above. Other tools can be used as well, including vRealize Operations Manager. vROps integrates tightly with vSphere and surfaces a great deal of information about your environment. From within your HTML5 client, you can find info like this:
When you bring up vRealize Operations Manager, you get a lot more info.
Network I/O Control allows you to determine and shape bandwidth for your vSphere networks. It works in conjunction with network resource pools to let you allocate bandwidth to specific types of traffic. You enable NIOC on a vSphere Distributed Switch and then set shares according to need in the configuration of the VDS. This feature requires Enterprise Plus licensing. Here is what it looks like in the UI.
The traffic types shown here exist by default. You can change them by clicking on one of the types and then clicking ‘Edit’. A screen then appears where you can choose shares, reservations, and limits.
You can create new network resource pools by clicking on Network Resource Pool and then ‘Add.’ This creates a new pool with a reservation quota; you then assign a VM to that pool. The pool slices its bandwidth off the Virtual Machine Traffic system type, so you need to set up a bandwidth reservation for that traffic type first.
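That dependency can be expressed as a quick sanity check: the reservation quotas of all network resource pools must fit inside the bandwidth reserved for the Virtual Machine Traffic system type. The figures and pool names below are illustrative:

```python
def remaining_quota(vm_traffic_reservation_mbps, pool_quotas):
    """How much of the VM traffic reservation is still unallocated?

    Network resource pool quotas are carved out of the bandwidth reserved
    for the 'Virtual Machine Traffic' system type, so their sum cannot
    exceed that reservation. (Simplified single-switch model.)
    """
    allocated = sum(pool_quotas.values())
    if allocated > vm_traffic_reservation_mbps:
        raise ValueError("Pool quotas exceed the VM traffic reservation")
    return vm_traffic_reservation_mbps - allocated

# With 1,000 Mbps reserved for VM traffic, two pools totaling 700 Mbps
# leave 300 Mbps of quota for additional pools.
left = remaining_quota(1000, {"Finance": 400, "Engineering": 300})
```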
Storage I/O Control allows cluster-wide storage I/O prioritization. You can control the amount of storage I/O allocated to virtual machines so that critical VMs get preference over less critical ones. This is accomplished by enabling SIOC on the datastore and setting shares and an upper IOPS limit per VM. SIOC is enabled by default on Storage DRS clusters. Here is what the screen looks like to enable it.
Once SIOC is enabled on the datastore, you can either set shares and limits on individual VM disks, or you can set up a Storage Policy and apply it to VMs to control performance. Here is a picture of one way you might set up a storage policy.
VMware can preserve a point in time (PIT) for a VM with a snapshot. This process freezes the original virtual disk and creates a new delta disk, and all new I/O is routed to the delta disk. If data is needed that still exists only on the original disk, the read goes back to that disk to retrieve it, so you are now accessing two disks. Over time, as changes and new I/O accumulate, the delta can potentially grow to the size of the original disk: the original 10 GB disk becomes 20 GB across two disks. If you create additional snapshots, you create new delta disks, and the chain continues to grow.
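The worst-case growth described above is easy to model: each delta disk can eventually grow to roughly the size of the base disk, so a chain of snapshots multiplies the footprint. This sketch ignores metadata overhead and assumes every block changes, which is the pessimistic case:

```python
def worst_case_footprint_gb(base_gb, num_snapshots):
    """Worst-case on-datastore footprint of a disk with a snapshot chain.

    Each delta can grow to about the size of the base disk as blocks are
    rewritten, so footprint ~= base * (1 + snapshots). In practice deltas
    only grow as data changes, so real usage is usually smaller.
    """
    return base_gb * (1 + num_snapshots)

# A 10 GB disk with one snapshot can consume up to 20 GB across two disks.
size = worst_case_footprint_gb(10, 1)
```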
Now that we understand a bit more about snapshots, the inherent limitations become clear. This tool was never meant to be a backup; it was designed to revert the VM to its original state (if needed) after small changes. Most backup tools DO use snapshots as part of their process, but only for the time needed to copy the data off, after which the snapshot is consolidated again. Here are a few best practices from VMware on how to use them.
Depending on what version you are starting from, this can be a significant undertaking. One major hurdle could be hardware compatibility. VMware has made several tools available to navigate your upgrade. The first is the vSphere Assessment Tool. You can find more about that tool and also how to download it here.
Once you have checked your hardware and workloads and they can move, the next step is making sure all your VMware products are compatible with each other. You can find that information using the VMware Product Interoperability Matrix here.
The next step is figuring out the order of upgrading. If it’s a simple VMware environment with no additional products, very few steps are needed. They are:
And you’re done. If there are additional products in the environment, you should look at the Knowledge Base Article VMware has made available to determine the update sequence here.
That brings us to the close of another section.