Objective 4.2: Perform vCenter Server Upgrades

To wrap up the upgrade objectives, we are going to go over vCenter upgrades. The following points will be covered:

  • Identify steps required to upgrade a vSphere implementation
  • Identify upgrade requirements for vCenter
  • Upgrade vCenter Server Appliance (VCSA)
  • Identify the methods of upgrading vCenter
  • Identify/troubleshoot vCenter upgrade errors

Identify steps required to upgrade a vSphere implementation

There are many things to think about for your vCenter and vSphere architecture, especially now that vSphere 6 splits vCenter into two new roles: the Platform Services Controller and the vCenter Server role. You have the option of an Embedded installation, which has all the roles installed on one server, or an External installation, which separates the roles. There are advantages and disadvantages to each of these installations. Namely:

Embedded:

Advantages

  1. Connection between the vCenter and the PSC (Platform Services Controller) is not over the network and is not subject to issues associated with DNS and connectivity
  2. Licensing is cheaper (if installed on Windows machines)
  3. Fewer Machines to keep track of and manage
  4. You don’t need to think about distributing loads with a load balancer across Platform Services Controllers

Disadvantages

  1. There is a Platform Services Controller for each product – This consumes more resources
  2. The model is suitable for small-scale environments

vCenter with External Platform Services Controller:

Advantages

  1. Fewer resources consumed by the combined services in the Platform Services Controller, reducing the footprint and maintenance
  2. Your environment can consist of more vCenter Server instances

Disadvantages

  1. The connection between the vCenter(s) and the Platform Services Controller is over the network and is subject to any issues with connectivity or DNS
  2. You need more Windows licenses (if using Windows)
  3. You must manage more virtual or physical machines – causing more work for you, the admin

The actual steps for the upgrade process are as follows:

  1. Read the vSphere release notes… This should go without saying. There are a lot of services going on in the background, and you don’t want to hurt your current setup (which brings us to step 3 – back up your configuration)
  2. Verify that your system meets the vSphere hardware and software requirements
  3. Back up your current configuration, including your DB
  4. If your vSphere system includes VMware solutions and/or plugins, verify they will work with the version you are upgrading to. Think about all of them. It is a bad day if you upgrade and then realize your backup software won’t work with the new version.
  5. Upgrade vCenter Server

Concurrent upgrades are not supported and upgrade order matters. You will need to give this due consideration if you have multiple vCenters or services that are not installed on the same physical or virtual server.

Identify upgrade requirements for vCenter

The upgrade requirements will in part depend on your current setup. Do you have the Windows version or the Appliance? Do you have full SQL Server or SQL Express? And so on. Documentation will be your best friend here, but we are going to go over the highlights.

For Windows Server PreReqs:

  • Synchronize the clocks on the machines running the vCenter Server 5.x services
  • Verify the DNS name of the machines running vCenter are valid and accessible from the other machines
  • Verify that if the user you are using to run the vCenter services is an account other than a Local System Account, it has the following permissions: 1) Member of the Administrators group, 2) Log on as a Service, and 3) Act as part of the OS
  • Verify the connection between the vCenter and the Domain Controller

When you run the installer, it will perform the following checks on its own:

  • Windows Version
  • Minimum Processor Requirements
  • Minimum Memory Requirement
  • Minimum Disk Requirements
  • Permissions on the selected install and data directory
  • Internal and External Port availability
  • External Database version
  • External Database connectivity
  • Administrator privileges on the Windows System
  • Any credentials you enter
  • vCenter 5.x servers

The next thing you will need to think about is disk space. Depending on which deployment model you go with, the requirements change. An embedded deployment requires about 17 GB minimum. If you are using an external PSC, you will still need that 17 GB on the vCenter machine, plus a 4 GB minimum on each external PSC.

Hardware requirements again depend on the type of installation you require (based on size). A PSC will require 2 CPUs and 2 GB of RAM regardless – since you scale out rather than up there. The others are based on size:

  • Tiny (10 or fewer hosts and 100 or fewer VMs) = 2 CPUs and 8 GB of RAM
  • Small (up to 100 Hosts and 1000 VMs) = 4 CPUs and 16 GB RAM
  • Medium (up to 400 Hosts and 4000 VMs) = 8 CPUs and 24GB RAM
  • Large (up to 1000 Hosts and 10,000 VMs) =16 CPUs and 32 GB RAM

You will also need a 64-bit Windows OS to put this on. The earliest version that will work is Windows Server 2008 SP2. You will also need a 64-bit DSN to connect to your database.

Those are all the normal things you consider when simply deploying the machine. What happens when you upgrade it, though? Well, there is a decent amount going on behind the scenes. The database schema is upgraded, and the old Single Sign-On is migrated to the new Platform Services Controller. Then you have the upgrade of the normal vCenter Server software itself. Parts of the upgrade depend on your current version:

  • For vCenter 5.0 you can choose to configure either an embedded or external PSC during the upgrade.
  • For vCenter 5.1 or 5.5 with all services deployed on a single machine, you can upgrade to a vCenter with an Embedded PSC.
  • For vCenter 5.1 or 5.5 with a separate SSO server, you will need to upgrade that to a PSC first
  • If you have multiple instances of vCenter installed, concurrent upgrades are not supported and order does matter.

The following information makes a good checklist to have before upgrading, as the installer will ask you for these items.

Upgrade vCenter Server Appliance (VCSA)

This is, in my opinion, a bit simpler than the Windows version. There are still a few gotchas you need to be mindful of, however. You need to make sure that you are running at least vCenter 5.1 Update 3 or 5.5 Update 2 before you can upgrade to 6.0. If you are not at those levels, you will need to update to the needed version first, which is really simple. Go to the IP or URL of the vCenter Appliance on port 5480. When you log in, go to the Update tab and click on Check Updates.

Then go ahead and click on Install Updates – You are asked to confirm and after you click yes, it will start.

A reboot is required afterwards for the changes to take effect.

Now that you are at a level from which you can upgrade, you will need the VCSA install ISO and the Client Integration Plugin installed on your computer. Then open up the ISO (or burn it to a CD) and run the vcsa-setup.html file.

You want to do an upgrade – So go ahead and click on that.

You will next need to accept the EULA.
Now you need to tell it the host you are going to deploy the appliance to.

The rest of the setup is just as if you were deploying a new appliance (because you are), with the addition of one screen where you tell it where the source appliance is, along with a user name and password for it, so that it can copy the configuration over.

Identify the methods of upgrading vCenter

As of now, the only supported method for the appliance is the user-interface-based installer (the web page) – see KB 2109772.
As for the Windows version, you would use the regular installer, matching the deployment model you already have (embedded PSC or external).

Identify/troubleshoot vCenter upgrade errors

As with most things, the best thing to do when things go wrong is to look at the logs. Any error messages shown might be helpful as well. The logs you will want to look at are the installation logs. There are a couple of ways to go about this. If the install errored out before it fully finished, you can leave the collect-logs checkbox selected on the final screen and it will save them in a zip on your desktop. On a Windows server, the logs are located in:

  • %PROGRAMDATA%\VMware\CIS\logs directory, usually C:\ProgramData\VMware\CIS\logs
  • %TEMP% directory, usually C:\Users\username\AppData\Local\Temp

You can open the files in the above locations in a text editor such as Notepad++ to look for clues. The appliance houses the log files in a slightly different location, since the machine is Linux. First you need to access the appliance, via SSH or direct access (such as through the console in the Windows Client). Either way, once you get access, you will need to log in and get to a command prompt. If you are not already at a PI shell prompt, run pi shell to get to the Bash prompt. Then run the vc-support.sh script to generate a support bundle. You can then export it from the /var/tmp folder to your desktop, or you can cat or vi the firstbootStatus.json file to see which services failed.
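Here is a rough sketch of that appliance workflow from an SSH session (the hostname is a placeholder, and the exact firstboot log path can vary by build):

# SSH to the appliance (hypothetical hostname) and log in as root
ssh root@vcsa01.lab.local
# If you land in the limited appliance shell, drop to Bash
pi shell
# Generate a support bundle; it lands under /var/tmp
vc-support.sh
ls -lh /var/tmp
# Check which firstboot services failed (path may vary by version)
cat /var/log/firstboot/firstbootStatus.json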

You can also grab logs from the ESXi host by running the vm-support command in the ESXi Shell or over SSH, or you can connect via the Windows Client and export logs from there. There are a lot of possible errors – you can go over a few of them in the vSphere Upgrade Guide.
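For the host side, a minimal sketch of grabbing a bundle from the ESXi Shell (the -w working-directory flag is my assumption here; check vm-support --help on your build):

# Generate a host support bundle
vm-support
# Optionally point the working directory at a datastore with free space
vm-support -w /vmfs/volumes/datastore1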

Next up… Resource Pools.

Objective 4.1: Perform ESXi Host and Virtual Machine upgrades

Here we are again, starting Objective 4.1. The following points will be covered:

  • Identify upgrade requirements for ESXi hosts
  • Upgrade a vSphere Distributed Switch
  • Upgrade Virtual Machine hardware
  • Upgrade an ESXi host using vCenter Update Manager
  • Stage multiple ESXi host upgrades
  • Determine whether an in-place upgrade is appropriate in a given upgrade scenario

So to begin with, we should go over a few things before performing an upgrade. Your infrastructure is, I am guessing, rather important to you and your company’s livelihood, so we need to take a measured approach. We can’t just stampede into this without giving it an appropriate amount of thought and planning. There is an order to which components get upgraded first, and there are a number of ways to do it. And for the love of God, make sure your hardware is on the Hardware Compatibility List… before you begin. I just had a case this week from a customer that upgraded to 6 and now will need to downgrade, as their server was not on the HCL and they couldn’t get support on it. The PDFs lay out a pretty good approach to the upgrade process:

  1. Read the Release Notes
  2. Verify that ALL the equipment you are going to use, or need to use, is on the HCL
  3. Make sure you have a good backup of your VM’s as well as your configuration
  4. Make sure the plug-ins or other solutions you are using are compatible with vSphere 6
  5. Upgrade vCenter Server
  6. Upgrade Update Manager
  7. Upgrade Hosts
  8. You can actually stop here, but if you go on you can upgrade the hardware version on the VMs and any appliances

So now we will look directly at upgrading the ESXi hosts. I am assuming you have gone through the above. In addition, make sure there is sufficient disk space for the upgrade. And if there is a SAN connected to the host, for safety’s sake it might be best to detach it before performing the upgrade, so that you don’t make the mistake of choosing the wrong datastore to overwrite and create a really bad day. If you haven’t already, you will want to move off any remaining VMs or shut them down. When the system is done rolling through the upgrade, apply your licenses. If it wasn’t successful and you had backed the host up, you can restore; otherwise you can reload it with the new version.

You can upgrade an ESXi 5.x host directly to 6.0 a couple of different ways: via Update Manager, an interactive upgrade, a scripted upgrade, Auto Deploy, or the esxcli command line. A host can also have third-party VIBs (vSphere Installation Bundles) installed. They could be driver packages or enhancement packs such as Dell’s OpenManage plugin. Occasionally you can run into a problem upgrading a host with these installed. At that point you can do one of a couple of things: remove the VIB and retry, or create a custom installer ISO.
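If you go the esxcli route, a hedged sketch of checking for and removing a troublesome VIB and then upgrading from a depot ZIP looks like this (the VIB name, depot path, and profile name are made up for illustration):

# List installed VIBs and spot anything third-party
esxcli software vib list
# Remove a problem VIB by name (hypothetical name), then retry the upgrade
esxcli software vib remove -n net-example-driver
# Upgrade the host from an offline depot ZIP (put the host in maintenance mode first)
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi600-depot.zip -p ESXi-6.0.0-standard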

Upgrade a vSphere Distributed Switch

This is a relatively painless process. You can upgrade from 4.1 all the way to 6.0 if you so choose. You do need to make sure your hosts support it: if you have even one host attached to the distributed switch that is at a lower level, that is the highest level you can make the distributed switch. For example, if you have all 6.0 hosts except for one 5.5 host, you will either need to make your distributed switch a 5.5 version or remove that host from the vDS. One other thing to be mindful of: you can’t downgrade.

To upgrade, navigate to your networking and then to the distributed switch you wish to upgrade. Now click on Upgrade; that will open the upgrade dialog.

This shows you the versions you can upgrade the switch to. After you click Next, it checks the version against the hosts attached to the vDS and lets you know if any hosts are not able to be upgraded to that version.

Upgrade Virtual Machine hardware

To upgrade your virtual machine hardware, right-click on the VM you need to upgrade, click on Compatibility, and then choose either Upgrade VM Compatibility or Schedule VM Upgrade – as seen here:

This is irreversible and will make it incompatible with previous versions of ESXi. The next screen will ask you what version you want to upgrade to.

It will then upgrade at whatever time you scheduled.

Upgrade an ESXi host using vCenter Update Manager

To upgrade a host to vSphere 6 with Update Manager, follow this procedure:

  1. Configure Host Maintenance Mode Settings – Host updates might require the host to reboot and enter maintenance mode before they can be applied. Update Manager will handle this, but you will need to configure what to do with the VMs, and what to do if the host fails to enter maintenance mode
  2. Configure Cluster Settings – The remediation can happen in sequence or in parallel. Temporarily disable DPM, HA Admission Control, and Fault Tolerance to make sure your remediation is successful
  3. Enable Remediation of PXE booted ESXi hosts (if you have them)
  4. Import Host Upgrade Images and create Host Upgrade Baselines
  5. Create a Host Baseline Group – Create a baseline group with the ESXi 6 image that you want to apply
  6. Attach Baselines and Baseline groups to Objects – You will need to attach the baseline in Update Manager to the objects you want to upgrade
  7. Manually Initiate a Scan of the ESXi hosts – You will need to do this for Update Manager to pay attention to these hosts
  8. View Compliance Information for vSphere objects – Make sure the baseline that you want to apply is correct for the hosts
  9. Remediate Hosts Against an Upgrade Baseline / Groups – NOW the fun starts; this is where Update Manager applies the patches and upgrades to the ESXi hosts.

Stage multiple ESXi host upgrades

In order to stage patches or upgrades, the process is relatively the same as what we just went through. The difference is that you will have multiple hosts attached to the baseline, and instead of remediating you will just be staging. Staging lets you load the patches or upgrades onto the hosts without actually rebooting or applying them yet. This lets you decide when the best time is to take action against them – possibly on the weekend or some other designated time. The actual process is lifted from the guide and transplanted here:

Procedure

1. Connect the vSphere Client to a vCenter Server system with which Update Manager is registered and select Home > Inventory > Hosts and Clusters in the navigation bar.
2. Right-click a datacenter, cluster, or host, and select Stage Patches.
3. On the Baseline Selection page of the Stage wizard, select the patch and extension baselines to stage.
4. Select the hosts where patches and extensions will be applied and click Next.

If you select to stage patches and extensions to a single host, it is selected by default.

5. (Optional) Deselect the patches and extensions to exclude from the stage operation.
6. (Optional) To search within the list of patches and extensions, enter text in the text box in the upper-right corner.
7. Click Next.
8. Review the Ready to Complete page and click Finish.

Determine whether an in-place upgrade is appropriate in a given upgrade scenario

This question can encompass a number of things. The hardware requirements aren’t extremely different from ESXi 5.5 to 6. You will need to take into account whether you are going to use the same boot type, whether you are already using something on 5.5 that isn’t yet compatible with 6, or whether you are more interested in replacing machines entirely because your current ones are long in the tooth (old). All these questions and more will have to be considered by you and the members of your team to decide whether you do an in-place upgrade vs. migrating to new systems or installing over the top of the current one. There are valid reasons for all of them, and it all depends on your environment and your vision for it.

This one was the longest to get out so far. Lots of things going on in personal life. I hope to get back to a normal blogging schedule really soon.

-Mike

Objective 3.5 Setup and Configure Storage I/O Control

Moving on to our last sub point in the Storage Objectives, we are going to cover Storage I/O Control. We will cover the following:

  • Enable/Disable Storage I/O Control
  • Configure/Manage Storage I/O Control
  • Monitor Storage I/O Control

Enable / Disable Storage I/O Control

This is relatively easy to do. Click on the datastore you want to modify and then click on Manage > Settings > General. Underneath Datastore Capabilities, click on Edit, and then check or uncheck Enable Storage I/O Control.

Configure/Manage Storage I/O Control

You configure it in the same place. As you can see above, you can change the congestion threshold or set a manual latency threshold.

Monitor Storage I/O Control

You can do this on a per-datastore basis by clicking on the datastore, then Monitor, and then Performance. You can monitor the datastore’s space or performance. If you click on Performance, you are treated to a lot of graphs detailing everything from latency to IOPS. And that is how you can monitor it.


And that concludes the Storage Section. Up Next is Virtual Machine Management! So get ready for some fun!

Objective 3.4 Perform Advanced VMFS and NFS Configurations and Upgrades

Continuing along our Storage Objectives, we now are going to cover VMFS and NFS datastores and our mastery of them. We will cover in this objective:

  • Identify VMFS and NFS Datastore properties
  • Identify VMFS5 capabilities
  • Create/Rename/Delete/Unmount a VMFS Datastore
  • Mount/Unmount an NFS Datastore
  • Extend/Expand VMFS Datastores
  • Place a VMFS Datastore in Maintenance Mode
  • Identify available Raw Device Mapping (RDM) solutions
  • Select the Preferred Path for a VMFS Datastore
  • Enable/Disable vStorage API for Array Integration (VAAI)
  • Disable a path to a VMFS Datastore
  • Determine use case for multiple VMFS/NFS Datastores

Time to jump in.

Identify VMFS and NFS Datastore Properties and Capabilities

Datastores are containers we create in VMware to hold our files for us. They can be used for many different purposes, including storing virtual machines, ISO images, floppy images, and so on. The main difference between NFS and VMFS datastores is their backing – the storage behind the datastore. With VMFS you are dealing with block-level storage, whereas with NFS you are dealing with a share from a NAS that already has a filesystem on it. Each has its own pros and cons, and specific abilities that can be used and worked with.

There have been a few different versions of VMFS released since inception: VMFS2, VMFS3, and VMFS5. Note, though, that as of ESXi 5 you can no longer read or write VMFS2 volumes, and you can’t create VMFS3 volumes in ESXi 6, though you can still read and write to them.

VMFS5 provides many enhancements over its predecessors. Among them include the following:

  • Greater than 2TB storage devices for each extent
  • Support of virtual machines with large capacity disks larger than 2TB
  • Increased resource limits such as file descriptors
  • Standard 1MB block size
  • Greater than 2TB disk size for RDM
  • Support of small files of 1KB
  • Scalability improvements on devices supporting hardware acceleration
  • Default use of ATS-only locking mechanism (previously SCSI reservations were used)
  • Ability to reclaim physical storage space on thin provisioned storage devices
  • Online upgrade process that allows you to upgrade to the latest version of VMFS5 without taking the datastore offline (see the sketch after this list)
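That online upgrade can also be driven from the CLI; a minimal sketch, assuming a VMFS3 datastore labeled datastore1:

# Check the current filesystem types and versions
esxcli storage filesystem list
# Upgrade a VMFS3 volume to VMFS5 in place, without unmounting it
esxcli storage vmfs upgrade -l datastore1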

Datastores can be local or shared. They are made up of the actual files, directories and so on, but they also contain mapping information for all these objects called metadata. Metadata is also frequently changed when certain operations take place.

In a shared storage environment, when multiple hosts access the same VMFS datastore, specific locking mechanisms are used. One of the biggest advantages of VMFS5 is ATS, or Atomic Test and Set, better known as hardware-assisted locking, which supports discrete locking per disk sector. Contrast this with normal Windows volume locking, where a single server locks the whole volume for its use, preventing some of the cooler features VMFS allows for.

Occasionally you will have a datastore that still uses a combination of ATS and SCSI reservations. One of the issues with this is time: when metadata operations occur, the whole storage device is locked rather than just the disk sectors involved. Only when the operation has completed can other operations continue. As you can imagine, if enough of these occur, you can start creating disk contention and your VM performance might suffer.

You can use the CLI to show which locking mechanism a VMFS datastore is using. At a CLI prompt, type the following:

esxcli storage vmfs lockmode list

You can also specify a server by adding --server=<servername>
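The same check can be run remotely through the vCLI; a sketch with hypothetical server, host, and account names:

# Directly on the host
esxcli storage vmfs lockmode list
# Through vCLI against a host managed by vCenter (hypothetical names)
esxcli --server=vcenter01 --vihost=esxi01 --username=administrator@vsphere.local storage vmfs lockmode list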

You can view VMFS and NFS properties by doing the following:

  1. From the Home Screen, click on Storage
  2. On the left, choose the datastore (VMFS or NFS) you are interested in
  3. In the middle pane, click on Manage and then click on Settings. You will now see some variation of the following:

This will show you a number of different properties that may be useful for you.

Now let’s cover NFS a bit. NFS is just a bit different than VMFS. Instead of directly accessing block storage, the host uses its NFS client to access an NFS volume that an NFS server exports as a share over TCP/IP. vSphere now supports versions 3 and 4.1 of NFS. The ESXi hosts can mount and use the volume for their needs. Most features are supported on NFS volumes, including:

  • vMotion and Storage vMotion
  • High Availability and Distributed Resource Scheduling
  • Fault Tolerance and Host Profiles
  • ISO images
  • Virtual Machine snapshots
  • Virtual machines with large capacity disks larger than 2TB
  • Multi-pathing (4.1 only)

Create/Rename/Delete/Unmount a VMFS Datastore

There are a number of ways to do these things, here is one of them.

  1. While on the Storage tab in the navigation pane, right-click on the host or the cluster, then click on Storage and then New Datastore

  2. You are presented with the above. A window will pop up notifying you of the location where it will be created. Click Next and you will be presented with the next window

  3. Click on VMFS and click Next
  4. Now you are asked to put in a name for your new datastore and to choose the host that has the device accessible to it

  5. Click Next and it will show you partition information for that device, if there is any, and will make sure you want to wipe everything to replace it with a VMFS datastore. Click Next again and then Finish on the next screen

Renaming the datastore is as simple as right-clicking on the datastore you wish to rename and then clicking Rename. Deleting and unmounting work the same way. Beware that deleting will delete the datastore and everything on it, while unmounting just makes it inaccessible.

Mount/Unmount a NFS Datastore

This is as easy as creating a VMFS datastore, with just a few different steps. Follow the same first steps as before for creating a new VMFS datastore. On the screen where you chose VMFS, though, there are two more options: NFS and VVol.

Next, of course, you will need to fill out a few details. The first is the version of NFS.

On the next window you put in the server (NAS) IP address, the exported share folder, and what you are going to call the datastore. You can also mount the NFS share as read-only. The next screen asks which hosts are going to have access to the share.

The last screen is just a summary. Click Finish and you are done.

Extend/Expand VMFS Datastores

There are two ways to make your datastore larger. You can expand an existing extent (if the backing device has free space), or you can add another LUN (one not already used for a datastore) as a new extent to create a larger datastore. To do either one, navigate to the datastore you wish to increase, right-click on it, and click on Increase Datastore Capacity. If you have a datastore that can be expanded, it will show up in the next screen; if not, the screen will remain blank. Depending on your layout and your previous selections, you will have the opportunity to use another LUN or to expand the existing one.

Place a VMFS Datastore in Maintenance Mode

Maintenance mode is a really cool feature, but you have to have a datastore cluster in order to make it work. If you right-click on a normal datastore, the option to put it in maintenance mode is greyed out. Once you have created a datastore cluster and have the disks inside it, you can right-click on the datastore, click on Maintenance Mode, and click Enter Maintenance Mode.

Identify available Raw Device Mapping (RDM) Solutions

Raw Device Mapping provides a mechanism for a virtual machine to have direct access to a LUN on a physical storage system. The way this works is that when you create an RDM, a mapping file is created that contains metadata for managing and redirecting disk access to the physical device. The official PDF has a picture that represents this nicely.

There are a few situations where you might need an RDM:

  1. SAN Snapshots or other layered applications that use features inherent to the SAN
  2. In any MSCS clustering scenario that spans physical hosts.

There are two types of RDM: physical compatibility and virtual compatibility. Virtual compatibility allows an RDM to act exactly like a virtual disk file, including the use of snapshots. Physical compatibility mode allows for lower-level access if needed.

Select the Preferred Path for a VMFS Datastore

This is relatively easy to do, but it can only be done under the Fixed path selection policy. Click on the datastore you want to modify in the navigation pane, then click Manage, then Settings, and then Connectivity and Multipathing.

Then Click on Edit Multi-pathing

Now you can choose your Preferred Path.

Enable/Disable vStorage API for Array Integration (VAAI)

VAAI, or hardware acceleration, is enabled by default on your host. If for some reason you want to disable it, browse to your host in the Navigator, then click on Manage, then Settings, and under System click on Advanced System Settings. Change the value of any of the following three settings to 0 (a CLI alternative is sketched after the list):

  • VMFS3.HardwareAcceleratedLocking
  • DataMover.HardwareAcceleratedMove
  • DataMover.HardwareAcceleratedInit
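Here is that CLI alternative, a short sketch using esxcli (1 = on, 0 = off):

# Check the current value of one of the acceleration settings
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# Disable the three primitives
esxcli system settings advanced set -o /VMFS3/HardwareAcceleratedLocking -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedInit -i 0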

Disable a path to a VMFS Datastore

To disable a path to a datastore, navigate to the datastore you are interested in again, then click on Manage, then Settings, then Connectivity and Multipathing. Scroll down under the multipathing details and you will see Paths. Click on the path you want to disable and then click Disable.

Determine Use Cases for Multiple VMFS/NFS Datastores

There are a number of reasons to have more than one LUN. Most SAN arrays adjust queues and caching on a per-LUN basis, and having too many VMs on a single LUN could overload I/O to those same disks. Also, when you are creating HA clusters, vSphere typically wants at least two datastores to maintain heartbeats to. All of these are valid reasons for creating more than a single LUN.

And this is me signing off again, till the next time.

Objective 3.3 Configure Storage Multi-pathing and Failover

Once again we return to cover another objective. It has been hectic lately, with delivering my first ICM 6 class and also delivering a vBrownBag the same week on one of the future objectives. Now we are back though and trying to get in the swing of things again. Here are the sub-points we will be covering this time:

  • Configure/Manage Storage Load Balancing
  • Identify available Storage Load Balancing options
  • Identify available Storage Multi-pathing Policies
  • Identify features of Pluggable Storage Architecture (PSA)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Without further ado, let’s dig in. Storage multipathing is a cool feature that allows you to load balance I/O and also allows for path failover in the event of a failure. Storage plays a rather important part in our virtualization world, so it stands to reason that we would want to make sure it is as fast and as reliable as possible. We have three multipathing options available by default, but we have the ability to add more depending on the storage devices in our environment. For example, EqualLogic adds a new PSP when you are using their “MEM” kit. The default policies are as follows:

  • VMW_PSP_FIXED
  • VMW_PSP_MRU
  • VMW_PSP_RR

By defining which of these we want to use, we choose how we load balance and fail over paths. Of course, we should probably get a better understanding of what they do in order to make the best choice.

  • Fixed is where the host uses the designated preferred path if one is configured; otherwise, it selects the first working path discovered at system boot time. This is the default policy for most active-active SANs. Also, if you set a path as preferred and it becomes unavailable, the host reverts back to it when it becomes available again.
  • Most Recently Used selects the path that was used most recently. If that path becomes unavailable, it chooses an alternative; if the original path becomes available again, it does not revert. MRU is the default policy for active-passive arrays
  • Round Robin uses an automatic path selection algorithm, rotating through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays. RR is the default for a number of arrays

How do we configure and manage these? We will need to do the following:

  1. Browse to the host in the navigator
  2. Click on Manage tab and then Storage
  3. Click on the Storage Device or Protocol Endpoint
  4. Click on the Device you want to manage
  5. Under the Properties tab, scroll down to Edit Multipathing and click



  6. Choose the multipathing type you want and click OK



And that is how we configure it.
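If you prefer the command line, the same thing can be done with esxcli; a sketch with a hypothetical device identifier:

# List devices along with their current PSP and SATP
esxcli storage nmp device list
# See which PSPs are available on this host
esxcli storage nmp psp list
# Switch a device to Round Robin (the device ID here is made up)
esxcli storage nmp device set --device naa.600508b4000f0a79 --psp VMW_PSP_RR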

Moving on now to the features of the PSA, or Pluggable Storage Architecture. To manage storage multipathing, ESXi uses a collection of Storage APIs, also known as the Pluggable Storage Architecture. It consists of the following pieces:

  • NMP, or Native Multipathing Plug-in. This is the generic VMware multipathing module
  • PSP, or Path Selection Policy. This is how VMware decides on a path for a given device
  • SATP, or Storage Array Type Plug-in. This is how VMware handles path failover for a given array

Using the Storage APIs, as mentioned before, other companies can introduce their own pathing policies. Here is a good picture of how everything aligns:

Storage Array Type Plug-ins, or SATPs, run in conjunction with the VMware NMP and are responsible for array-specific operations. ESXi offers an SATP for every type of array it supports, and it also provides default SATPs for active-active, active-passive, and ALUA arrays. SATPs monitor the health of each path, report changes, and take the necessary actions to fail over when something goes wrong.

Storage policies are next on the agenda. Storage policies are a mechanism that allows you to specify storage requirements on a per-VM basis. If you are using VSAN or Virtual Volumes, the policy can also determine how the machine is provisioned and allocated within the storage resource to guarantee the required level of service.

In vSphere 5.0 and 5.1, storage policies existed as storage profiles; they had a different format and were not quite as useful. If you previously used them, they are upgraded to the new storage policy format when you upgrade to 6.0.

There are two types of storage policies. You can base them on capabilities, or storage-specific data services, or you can use tags to group datastores. Let’s cover both of them in a little more depth.

Rules based on Storage-Specific Data Services

These rules are based on data services that storage entities such as Virtual SAN and Virtual Volumes advertise. To supply this information, these products use storage providers called VASA providers. This surfaces the capabilities that are available for you to put in your storage policy. Some examples include capacity, performance, availability, redundancy, and so on.

Rules based on Tags

Rules based on tags reference datastore tags that you associate with specific datastores. Just like tags on other objects, you as an administrator can apply tags to a datastore, and you can apply more than one tag to a single datastore. Once you apply a tag to a datastore, it shows up in the Storage Policies interface, which you can then use to define the rules for the storage policy.

So how do we use these? There are a number of steps we need to perform to enable these and apply them. The very first thing we need to do is to enable Storage Policies on a host or Cluster. To do that, perform the following steps:

  1. In the web client, click on Policies and Profiles and then VM Storage Policies
  2. Click on the Enable Storage policies icon (looks like a scroll with a check mark)
  3. Select vCenter instance and all the clusters and hosts that are available will appear
  4. Choose a host or cluster and then click on Enable

Now you can define your VM Storage policy. For the first one we will work on the Tag based policy.

  1. Browse to a datastore and then click on Manage and then Tags

  2. Click on the new tag icon
  3. Select the name of the tag and a description. Under Category, choose New Category
  4. Under Category Name, type in the name you desire and also what type of object you will associate it with

  5. When you are done creating it, you will need to assign it to a datastore – this is the tag icon with the green arrow pointing to the right
  6. Your tag should show up here. Highlight it and click Assign
  7. You should now see your tag show up

Now you can create a storage policy based on this tag. You do that by navigating to the same place where you enabled the policies.

  1. Click on Policies and Profiles and then VM Storage Policies
  2. Click the Create a New VM Storage Policy

  3. Click Next twice and you will have Rule Set 1, with the ability to base it on data services or on tags. Choose the one based on tags

  4. Under Category, choose the one you had previously created, and then the tag that you have created.

  5. You can add more rules if you have them but if not click on next

  6. When you click Next you are shown the datastores that are compatible (because you have associated the tag with them)

  7. A summary appears and then you can click on Finish

The next thing to do to make this active is to apply it to a VM. You can do this when you create the VM or afterwards. If you are applying it afterwards, go to the Settings of the VM and click the little arrow in front of the hard disk, then choose the storage policy you want from the drop-down box.

Now you have achieved your goal. You can go through the same steps with either policy.

The last thing we will need to go over is enabling and disabling fault domains for Virtual SAN. I don’t have a VSAN setup (if anyone wants to contribute toward my home lab fund, let me know :) ), but if you did, you would enable them underneath the settings for the cluster. Go to Manage and then Virtual SAN; underneath that subcategory you will find Fault Domains. This is where you create/enable/disable fault domains.

And thus concludes another objective. Next up, VMFS and NFS datastores. Objective 3.4


Objective 3.2: Configure Software Defined Storage

Back again!! This time we are going to go over the relatively new VSAN. VMware Virtual SAN originally came out in 5.5 U1, but it has been radically overhauled for 6.0 (a massive jump from VSAN 1.0 to 6 :) ). So what are we going to go over and have to know for the exam?

  • Configure/Manage VMware Virtual SAN
  • Create/Modify VMware Virtual Volumes (VVOLs)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Now, this is not going to be an exhaustive guide to VSAN and its use, abilities, and administration. Cormac Hogan and Rawlinson Rivera already do that so well that there is no point. I have Cormac’s blog linked to the right; he has more info than you can probably process. So we will concern ourselves with a high-level overview of the product and the objectives.

Here comes my 50-mile-high overview. VSAN is software-defined storage. What does this mean? While you still have physical drives and cards, you are pooling them together and creating logical containers (virtual disks) through software and the VMkernel. You can set up VSAN as a hybrid or an all-flash cluster. In the hybrid approach, magnetic media is used as the storage media and flash is the cache. In all-flash, the flash disks are used for both jobs.

When you set up the cluster, you can do it on a new cluster or add the feature to an existing cluster. When you do, it takes the disks and aggregates them into a single datastore available to all hosts in the VSAN cluster. You can later expand this by adding more disks, or additional hosts with disks, to the cluster. The cluster will run much better if all the hosts in the cluster are configured as similarly as possible, just like your regular cluster. You can have machines that are just compute resources, with no local datastore or disk groups, and still be able to use the VSAN datastore.

In order for a host to contribute its disks it has to have at least one SSD and one spindle disk. Those disks form what is known as a disk group. You can have more than one disk group per machine, but each one needs at least the above combination.

Virtual SAN manages data in the form of flexible data containers called objects; VSAN is known as object-based storage. An object is a logical volume that has its data and metadata distributed across the cluster. There are the following types of objects:

  • VM Home Namespace = this is where all configuration files are stored, such as the .vmx, log files, and snapshot delta description files
  • VMDK = the .vmdk stores the contents of the virtual machine’s hard disk
  • VM Swap Object = this is created when the VM powers on, just like normal
  • Snapshot Delta VMDKs = created when snapshots are taken of the VM; each delta is an object
  • Memory Object = created when memory is selected as an option while taking the snapshot

Along with the objects, you have metadata that VSAN uses called a witness. This is a component that serves as a tiebreaker when a decision needs to be made regarding the availability of surviving datastore components after a potential failure. There may be more than one witness depending on your policy for the VM. Fortunately, this doesn’t take up much space – approximately 2 MB on the old VSAN 1.0 and 4 MB for version 2.0/6.0.

Part of the larger overall picture is being able to apply policies granularly. You are able to specify on a per-VM basis how many copies of something you want, vs. a RAID 1 where you have a blanket copy of everything regardless of its importance. SPBM (Storage Policy Based Management) allows you to define performance and availability in the form of this policy. VSAN ensures that you have a policy for every VM, whether it is the default or a specific one for the VM. For best results you should create and use your own, even if the requirements are the same as the default.

So, for those of us who used and read about VSAN 1.0, how does the new version differ? Quite a lot. This part is going to be lifted from Cormac’s site (just the highlights):

  1. Scalability – Because vSphere 6.0 can now support 64 hosts in a cluster, so can VSAN
  2. Scalability – Now supports 62TB VMDK
  3. New on-disk format (v2) – This allows a lot more components per host to be supported. It leverages VirstoFS
  4. Support for All-Flash configuration
  5. Performance Improvement using the new Disk File System
  6. Availability improvements – You can separate racks of machines into Fault Domains
  7. New Re-Balance mechanism – rebalances components across disks, disk groups, and hosts
  8. Allowed to create your own Default VM Storage Policy
  9. Disk Evacuation granularity – You can evacuate a single disk now instead of a whole disk group
  10. Witnesses are now smarter – They can exercise more than a single vote instead of needing multiple witnesses
  11. Ability to light LEDs on disks for identification
  12. Ability to mark disks as SSD via UI
  13. VSAN supports being deployed on Routed networks
  14. Support of external disk enclosures.

As you can see this is a huge list of improvements. Now that we have a small background and explanation of the feature, let’s dig into the bullet points.

Configure/Manage VMware Virtual SAN

So first, as mentioned before, there are a few requirements that need to be met in order for you to be able to create and configure VSAN.

  • Cache = You need one SAS or SATA SSD or PCIe Flash Device that is at least 10% of the total storage capacity. They can’t be formatted with VMFS or any other file system
  • Virtual Machine Data Storage = For Hybrid group configurations, make sure you have at least one NL-SAS, SAS, or SATA magnetic drive (sorry PATA owners). For All Flash disk groups, make sure you have at least one SAS, SATA, or PCIe Flash Device
  • Storage Controller = One SAS or SATA Host Bus Adapter that is configured in pass-through or RAID 0 mode.
  • Memory = this depends on the number of disk groups and devices managed by the hypervisor. Each host should contain a minimum of 32 GB of RAM to accommodate the maximum of 5 disk groups and the maximum of 7 capacity devices per group
  • CPU = VSAN doesn’t take more than about 10% CPU overhead
  • If booting from an SD or USB device, the device needs to be at least 4 GB
  • Hosts = You must have a minimum of 3 hosts in the cluster
  • Network = 1Gb networking for hybrid solutions, 10Gb for all-flash solutions. Multicast must be enabled on the switches – only IPv4 is supported at this time
  • Valid License for VSAN

Now that we’ve got all those pesky requirements out of the way, let’s get started on actually creating the VSAN. The first thing we will need to do is create a VMkernel port for it. There is a new traffic-type option as of 5.5 U1, which is… Virtual SAN traffic. You can see it here:

After you are done, it will show up as enabled; you can check by looking here:

Now that that is done, you will need to enable the cluster for VSAN as well. This is done under the cluster settings, or when you create the cluster to begin with.

You have the option to automatically add the disks to the VSAN cluster; if you leave it in manual mode, you will need to add the disks yourself, and new devices are not added when they are installed. After you create it, you can check its status on the Summary page.

You can also check on the individual disks and health and configure disk groups and Fault Domains under the Manage > Settings > Virtual SAN location.
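You can sanity-check the same things from the ESXi Shell; a quick sketch:

# Confirm the host has joined the VSAN cluster
esxcli vsan cluster get
# Verify the VMkernel interface tagged for VSAN traffic
esxcli vsan network list
# List the local devices claimed for VSAN disk groups
esxcli vsan storage list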

Here is a shot from my EVO:RAIL with VSAN almost fully configured

The errors are because I don’t have the VLANs fully configured for them to communicate yet. There is a lot more we could work on with VSAN but I don’t have the blog space nor the time. So moving on….

Create/Modify VMware Virtual Volumes (VVOLs)

First, a quick primer on VVols. What are these things called Virtual Volumes? Why do we want them when LUNs have served us well for so long? If you remember, one of the cool advantages of VSAN is the ability to assign policies on a per-VM basis, but VSAN is limited to only certain capabilities. What if we want more? In come VVols. Using VVols and a storage SAN that supports them, you can apply any abilities that SAN has on a per-VM basis. Stealing from a VMware blog for the definition: “VVols offer per-VM management of storage that helps deliver a software defined datacenter”. So what does this all mean? In the past you had SANs with certain capabilities, such as deduplication or a specific RAID type, and you needed a really good naming system or a DB somewhere to track which LUN was which. Now we have the ability to just set a specific set of rules for the VM in a policy, and it will find the storage matching that set of rules for us. Pretty nifty, huh?

So how do you create and modify these things now? The easiest way is to create a new datastore just like you would a regular VMFS or NFS.

  1. Select vCenter Inventory Lists > Datastores
  2. Click on the create new datastore icon
  3. Click on Placement for the datastore
  4. Click on VVol as the type

  5. Now put in the name you wish to give it, and also choose the Storage Container that is going to back it. (kind of like a LUN – you would have needed to add a Protocol Endpoint and Storage Container before getting to this point)
  6. Select the Hosts that are going to have access to it
  7. Finish

Kind of working this backwards but how do you configure them? You can do the following 4 things:

  1. Register the Storage Provider for VVols = using VASA, you configure communication between the SAN and vSphere. Without this communication, nothing will work with VVols.
  2. Create a Virtual Datastore = this is how you create the VVol datastore
  3. Review and Manage Protocol Endpoints = a protocol endpoint is a logical proxy used to communicate between the virtual volumes and the virtual disks they encapsulate. Protocol endpoints are exported, along with their associated storage containers, by the VASA provider.
  4. (Optional) If your host uses iSCSI-based transport to communicate with protocol endpoints representing a storage array, you can modify the default multipathing policy associated with them.

Configure Storage Policies

At the heart of all these changes is the storage policy. The storage policy is what enables all this wonderful magic to happen behind the scenes with you, the administrator, blissfully unaware. Let’s define it as VMware would like it defined: “A vSphere storage profile defines storage requirements for virtual machines and storage capabilities of storage providers. You use storage policies to manage the association between virtual machines and datastores.”

Where is it found? On the home page in your web client under… Policies and Profiles. Anticlimactic, I know. Here is a picture of what you see when you click on it:

This gives you a list of all the profiles and policies associated with your environment. We are currently interested only in the storage policies, so let us click on that. Depending on what products you have set up, yours might look a little different.

You can have Storage policies based off one of the following:

  • Rules based on Storage-Specific Data Services = these are based on data services that entities such as VSAN and VVols can provide, for example deduplication
  • Rules based on Tags = these are tags you, as an administrator, associate with specific datastores. You can apply more than one per datastore

Now we dig in. First thing we are going to need to do is to make sure that storage policies are enabled for the resources we want to apply them to. We do that by clicking on the Enable button underneath storage policies

When enabled you will see the next screen look like this (with your own resource names in there of course)

We can go ahead and create a storage policy now and be able to apply it to our resources. When you click on Create New VM Storage Policy, you will be presented with this screen:

Go ahead and give it a name and optionally a description. On the next screen we will define the rules that are based on our capabilities

In this one I am creating one for a Thick provisioned LUN


Unfortunately none of my datastores are compatible. You can also configure based off of tags you associate on your datastores.

Enable/Disable Virtual SAN Fault Domains

This is going to be a quick one, as I am a bit tired of this post already :). In order to work with fault domains, you will need to go to the VSAN cluster and then click on Manage and Settings. On the left-hand side you will see Fault Domains; click on it. You now have the ability to segregate hosts into specific fault domains. Click on the add (+) icon to create a fault domain and then add the hosts you want to it. You will end up with a screen like this:

Onwards and Upwards to the next post!!


Objective 3.1: Manage vSphere Storage Virtualization

Wait, wait, wait… Where did Objective 2.2 go? I know… I didn’t include it, since I have already covered everything it asks in previous objectives. So, moving on to storage.

So we are going to cover the following objective points.

  • Identify storage adapters and devices
  • Identify storage naming conventions
  • Identify hardware/dependent hardware/software iSCSI initiator requirements
  • Compare and contrast array thin provisioning and virtual disk thin provisioning
  • Describe zoning and LUN masking practices
  • Scan/Rescan storage
  • Configure FC/iSCSI LUNs as ESXi boot devices
  • Create an NFS share for use with vSphere
  • Enable/Configure/Disable vCenter Server storage filters
  • Configure/Edit hardware/dependent hardware initiators
  • Enable/Disable software iSCSI initiator
  • Configure/Edit software iSCSI initiator settings
  • Configure iSCSI port binding
  • Enable/Configure/Disable iSCSI CHAP
  • Determine use case for hardware/dependent hardware/software iSCSI initiator
  • Determine use case for and configure array thin provisioning

So let’s get started

Identify storage adapters and devices

Identifying storage adapters is easy. You have your own list to refer to: the Storage Adapters view. To navigate to it, do the following:

  1. Browse to the Host in the navigation pane
  2. Click on the Manage tab and click Storage
  3. Click Storage Adapters
    This is what you will see

As you can see, identification is relatively easy. Each adapter is assigned a vmhbaXX name. They are grouped under larger categories, and you are also given a description of the hardware, e.g. Broadcom iSCSI Adapter. You can find out a number of details about each device by looking down below and going through the tabs. As you can see, one of the tabs lists the devices under that particular controller, which brings us to our next item: storage devices.

Storage Devices is one more selection down from Storage Adapters, so naturally you navigate the same way and then just click on Storage Devices instead of Storage Adapters. Now that we are seeing those, it’s time to move on to naming conventions to understand why they are named how they are.

Identify Storage Naming Conventions

You are going to have multiple names for each type of storage and device. Depending on the type of storage, ESXi will use a different convention or method to name each device. The first type is the SCSI inquiry identifier. The host uses a SCSI INQUIRY command to query the device and uses the response to generate a unique name. These names are unique, persistent, and have one of the following formats:

  • naa.number = naa stands for Network Address Authority; it is followed by a string of hex digits that identify the vendor, device, and LUN
  • t10.number = T10 is the technical committee tasked with SCSI storage interfaces and their standards (and plenty of other disk standards as well)
  • eui.number = stands for Extended Unique Identifier

You also have the path-based identifier. This is created if the device doesn’t provide the information needed to create the above identifiers. It looks like the following: mpx.vmhbaXX:C0:T0:L0 – and it can be used just the same as the identifiers above. The C is for the channel, the T is for the target, and the L is for the LUN. It should be noted that this identifier type is neither unique nor persistent and could change every reboot.

There is also a legacy identifier that is created. This is in the format vml.number

You can see these identifiers on the pages mentioned above and also at the command line by typing the following:

esxcli storage core device list

Identify hardware/dependent hardware/software iSCSI initiator requirements

You can use three types of iSCSI initiators in VMware: independent hardware, dependent hardware, and software initiators. The differences are as follows (a CLI sketch for the software initiator follows this list):

  • Software iSCSI = code built into the VMkernel. It allows your host to connect to iSCSI targets without having special hardware; you can use a standard NIC for this. It requires a VMkernel adapter and is able to use all CHAP levels
  • Dependent hardware iSCSI initiator = a device that still depends on VMware networking and on VMware for iSCSI configuration and management interfaces. This is basically iSCSI offloading; an example is the Broadcom 5709 NIC. It requires a VMkernel adapter (it shows up as both a NIC and a storage adapter) and is able to use all CHAP levels
  • Independent hardware iSCSI initiator = a device that implements its own networking and its own iSCSI configuration and management interfaces. An example is the QLogic QLA4052 adapter. It does not require a VMkernel adapter (it shows up as a storage adapter) and is only able to use the unidirectional CHAP and “unidirectional unless prohibited by target” CHAP levels
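As a concrete example for the software initiator, here is a hedged sketch of enabling it and adding a dynamic discovery target from the CLI (the adapter name and target address are assumptions; check esxcli iscsi adapter list for yours):

# Enable the software iSCSI initiator and confirm
esxcli iscsi software set --enabled=true
esxcli iscsi software get
# Find the vmhba name the initiator was given
esxcli iscsi adapter list
# Add a Send Targets (dynamic discovery) address (hypothetical values)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.10.50:3260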

Compare and contrast array thin provisioning and virtual disk thin provisioning

You have two types of thin provisioning, and the biggest difference between them is where the provisioning happens. The array can thinly provision the LUN; in this case it presents the total logical size to the ESXi host, which may be more than the real physical capacity. If this is the case, there is really no way for your ESXi host to know whether you are running out of space, which obviously can be a problem. Because of this, Storage APIs – Array Integration was created. Using this feature with a SAN that supports it, your hosts are aware of the underlying storage and are able to tell how your LUNs are configured. The requirements are simply ESXi 5.0 or later and a SAN that supports the Storage APIs for VMware.

Virtual disk thin provisioning is the same concept but done for the virtual hard disk of the virtual machine. You are creating a disk and telling the VM that it has more space than it actually might. Because of this, you will need to monitor the status of that disk in case your VM’s operating system starts trying to use that space.

Describe zoning and LUN masking practices

Zoning is a Fibre Channel concept meant to restrict which servers can see which storage arrays. Zones define which HBAs (the cards in the server) can connect to which storage processors on the SAN. LUN masking, on the other hand, only allows certain hosts to see certain LUNs.

With ESXi hosts you want to use single-initiator zoning or single-initiator-single-target zoning. The latter is preferred. This helps prevent misconfigurations and access problems.

Rescan Storage

This one is pretty simple. To account for new storage or to see changes on existing storage, you may want to rescan your storage. You can do two different operations from the GUI client: 1) Scan for New Storage Devices, and 2) Scan for New VMFS Volumes. The first will take longer than the second. You can also rescan at the command line with the following:

  • esxcli storage core adapter rescan --all – this will rescan for new storage devices
  • vmkfstools -V – this will scan for new VMFS volumes

I have included a picture of the Web Client with the rescan button circled in red.

Configure FC/iSCSI LUNs as ESXi boot devices

ESXi supports booting from Fibre Channel or FCoE LUNs, as well as iSCSI. First we will go over Fibre Channel.

Why would you want to do this in the first place? There are a number of reasons. Among them, you remove the need to have storage inside the servers. This makes them cheaper and less prone to failure, since hard drives are the component most likely to fail. It also makes servers easier to replace: one server could die, and you can drop a new one in its place, change the zoning, and away you go. You also access the boot volume through multiple paths, whereas if it were local you would generally have one cable to go through, and if that fails, you have no backup.

You do have to be aware of the requirements, though. The biggest one is to have a separate boot LUN for each server; you can't share one across all servers. You also can't multipath to an active-passive array. OK, so how do you do it? On an FC setup:

  1. Configure the array zoning, and also create the LUNs and assign them to the proper servers
  2. Then, using the FC card's BIOS, point the card at the boot LUN
  3. Boot to the install media and install to the LUN

On iSCSI, the setup differs a little depending on what kind of initiator you are using. If you are using an independent hardware iSCSI initiator, you will need to go into the card's BIOS to configure booting from the SAN and add any CHAP credentials needed. Otherwise, with a software or dependent initiator, you will need to use a network adapter that supports iBFT. Good recommendations from VMware include:

  1. Follow your storage vendor's recommendations (yes, I got a sensible chuckle out of that too)
  2. Use static IPs to reduce the chance of DHCP conflicts
  3. Use different LUNs for VMFS datastores and boot partitions
  4. Configure proper ACLs. Make sure the only machine able to see a boot LUN is the machine that boots from it
  5. Configure a diagnostic partition – with an independent hardware initiator you can set this up on the boot LUN; with iBFT, you cannot

Create an NFS share for use with vSphere

Back in 5.5 and before, you were restricted to using NFS v3. Starting with vSphere 6, you can also use NFS 4.1. VMware has some recommendations for you about this as well.

  1. Make sure the NFS servers you use are listed in the HCL
  2. Follow the recommendations of your storage vendor
  3. You can export a share as v3 or v4.1, but you can't do both
  4. Ensure it's exported using NFS over TCP/IP
  5. Ensure you have root access to the volume
  6. If you are exporting a read-only share, make sure it is consistent: export it as RO, and when you add it to the ESXi host, add it as read-only as well.

To create a share do the following:

  1. On each host that is going to access the storage, you will need to create a VMkernel Network port for NFS traffic
  2. If you are going to use Kerberos authentication, make sure your host is setup for it
  3. In the Web Client navigator, select vCenter Inventory Lists and then Datastores
  4. Click the Create a New Datastore icon
  5. Select Placement for the datastore
  6. Type the datastore name
  7. Select NFS as the datastore type
  8. Specify an NFS version (3 or 4.1)
  9. Type the server name or IP address and the mount point folder name (or multiple IPs if v4.1)
  10. Select Mount NFS read only – if you are exporting it that way
  11. Select which hosts will mount the datastore
  12. Click Finish (a CLI alternative is sketched below)
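
If you would rather mount the share from the host CLI, esxcli can do the same thing. A sketch, with the server names, exports, and datastore names as placeholders:

# mount an NFS v3 export as a datastore
esxcli storage nfs add --host=nfs01.lab.local --share=/export/ds1 --volume-name=NFS-DS1
# mount an NFS 4.1 export (note the comma-separated list for multiple server addresses)
esxcli storage nfs41 add --hosts=192.168.10.11,192.168.10.12 --share=/export/ds1 --volume-name=NFS41-DS1
# list the mounted NFS datastores
esxcli storage nfs list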

Enable/Configure/Disable vCenter Server storage filters

When you look at your storage, and when you add more or do other similar operations, VMware by default employs a set of storage filters. Why? Well, there are four filters, and the explanations for them are as follows:

  1. config.vpxd.filter.vmfsFilter = this filters out LUNs that are already used by a VMFS datastore on any host managed by the vCenter Server. This keeps you from reformatting them by mistake
  2. config.vpxd.filter.rdmFilter = this filters out LUNs already referenced by an RDM on any host managed by the vCenter Server. Again, this is protection so you don't reformat something by mistake
  3. config.vpxd.filter.SameHostAndTransportsFilter = this filters out LUNs that are ineligible for use as extents due to storage type or host incompatibility. For instance, you can't extend a VMFS datastore on a Fibre Channel LUN with an iSCSI LUN
  4. config.vpxd.filter.hostRescanFilter = this automatically rescans and updates your hosts anytime you perform datastore management operations, to make sure you maintain a consistent view of your storage

And in order to turn them off, you will need to do it on the vCenter Server (makes sense, huh?). You will need to navigate to the vCenter object and then to the Manage tab.

You will then need to add these settings, since they are not in there for you to change willy-nilly by default. So click on Edit, type in the appropriate filter key, and enter false for the value. Like so

Configure/Edit hardware/dependent hardware initiators

A dependent hardware iSCSI adapter still uses the networking, iSCSI configuration, and management interfaces provided by VMware. The device presents two components to VMware: a NIC and an iSCSI engine. The iSCSI engine shows up under Storage Adapters (vmhba). For it to work, though, you still need to create a VMkernel port for it and bind that port to the matching physical network port. Here is a picture of how it looks underneath your storage adapters for a host.

There are a few things to be aware of while using a dependent initiator.

  1. When you are using the TCP offload engine, you may see no activity, or very little, on the NIC associated with the adapter. This is because the host passes all the iSCSI traffic to the engine, bypassing the regular network stack
  2. Since the TCP offload engine has to reassemble packets in hardware, and there is a finite amount of buffer space, you should enable flow control to better manage the traffic (pause frames, anyone?)
  3. Dependent adapters support IPv4 and IPv6

To setup and configure them you will need to do the following:

  1. You can change the alias or the IQN if you want by going to the host, then Manage > Storage > Storage Adapters, highlighting the adapter, and clicking Edit
  2. I am assuming you have already created a VMkernel port by this point. The next thing to do is bind the card to that VMkernel port
  3. You do this by clicking on the iSCSI adapter in the list and then clicking on Network Port Binding below
  4. Now click on the add icon to associate the NIC, and it will give you this window
  5. Click on the VMkernel adapter you created and click OK
  6. Go back to the Targets section now and add your iSCSI target. Then rescan, and voila (the same flow via esxcli is sketched below)
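
The binding and target work can also be done from the host CLI. A sketch, assuming vmhba33 is the dependent adapter and vmk1 is the VMkernel port you created (both placeholders):

# bind the VMkernel port to the iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
# add a dynamic (send targets) discovery address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.20.50:3260
# rescan just that adapter to pick up the new LUNs
esxcli storage core adapter rescan --adapter=vmhba33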

Enable/Disable software iSCSI initiator

For a lot of reasons you might want to use the software iSCSI initiator instead of a hardware one. For instance, you might want to maintain a simpler configuration where the NIC doesn't matter, or you might just not have dependent cards available. Either way, you can use the software initiator to move your bits. "But wait," you say, "software will be much slower than hardware!" You would perhaps have been correct with older revisions of ESX. However, the software initiator is so fast at this point that there is not much difference between them. By default, however, the software initiator is not enabled; you will need to add it manually. To do this, go to the same place we were before, under Storage Adapters. While there, click on the add icon (+) and click Add Software iSCSI Adapter. Once you do that, it will show up in the adapter list and let you add targets and bind NICs to it just like the hardware iSCSI does. To disable it, just click on it and then, under Properties down below, click Disable.
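
From the host CLI, enabling and disabling the software initiator is a one-liner each way (a sketch):

# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true
# confirm its state
esxcli iscsi software get
# disable it again
esxcli iscsi software set --enabled=false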

Configure/Edit software iSCSI initiator settings
Configure iSCSI port binding
Enable/Configure/Disable iSCSI CHAP

I'm going to cover all of these at the same time, since they flow together pretty well. To configure your iSCSI initiator settings, navigate to the iSCSI adapter. Once there, all your options are down at the bottom under Adapter Details. If you want to edit one, click on the tab that holds the setting and click Edit. Your network port bindings are under there, configured the same way we did before, and the targets are there as well. Finally, CHAP is something we haven't talked about yet, but it lives underneath the Properties tab under Authentication. The type of adapter you have determines which CHAP levels are available to you. Click Edit under the Authentication section and you will get this window

As you probably noticed, this is done per storage adapter, so you can configure it differently for different initiators. Keep in mind that CHAP only authenticates the session; the iSCSI traffic itself is not encrypted, so don't lean on it as strong security. It is better used to keep LUNs masked off from certain hosts.
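
For the CLI-inclined, CHAP can also be set per adapter with esxcli. A sketch with example names and an example secret; check esxcli iscsi adapter auth chap set --help on your build, since the exact flags are the part I would verify first:

# require unidirectional CHAP on the adapter
esxcli iscsi adapter auth chap set --adapter=vmhba65 --direction=uni --level=required --authname=iscsi-host01 --secret=ExampleSecret1
# review what is configured
esxcli iscsi adapter auth chap get --adapter=vmhba65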

We have already pretty much covered why we would use certain initiators over others, and thin provisioning as well, so I will leave those be and sign off this post (which keeps getting more and more loquacious).

Objective 2.2 – Configure Network I/O Control

After the last objective this one should be a piece of cake. There are only 4 bullet points that we need to cover for this objective. They are:

  • Identify Network I/O Control Requirements
  • Identify Network I/O Control Capabilities
  • Enable / Disable Network I/O Control
  • Monitor Network I/O Control

So first off, what is Network I/O Control? NIOC is a tool that lets you reserve and divide bandwidth in whatever manner you see fit for the VMs you deem important. You can choose to reserve a certain amount of bandwidth, or a larger share of the network resources, for an important VM for when there is contention. You can only do this on a vDS. With vSphere 6 we get a new version of NIOC: NIOC v3. This new version of Network I/O Control allows us to reserve a specific amount of bandwidth for an individual VM. It also still uses the old standbys of reservations, limits, and shares. It works in conjunction with DRS, admission control, and HA to make sure that wherever the VM is moved, it can keep those guarantees. So let's get in a little deeper.

vSphere 6.0 can use both NIOC version 2 and version 3 at the same time, on different switches. One of the big differences is that in v2 you set up bandwidth for the VM at the physical adapter level; version 3, on the other hand, lets you set bandwidth allocation at the level of the entire distributed switch. Version 2 is compatible with all versions from 5.1 to 6.0. Version 3, though, is only compatible with vSphere 6.0. You can upgrade a distributed switch to version 6.0 without upgrading NIOC to v3.

Identify NIOC Control Requirements

As mentioned before, you need at least vSphere 5.1 for NIOC v2 and vSphere 6.0 for NIOC v3. You also need a distributed switch. The rest is as expected: you need a vCenter Server to manage the whole thing, and, rather importantly, you need a plan. Know what you want to do with your traffic before you rush headlong into it, so you don't end up "redesigning" it ten times.

Identify NIOC control Capabilities

Using NIOC you can control and shape traffic using shares, reservations, and limits, and you can apply them to specific types of traffic. Using the built-in traffic types, you can adjust network bandwidth and priorities. The types of traffic are as follows:

  • Management
  • Fault Tolerance
  • iSCSI
  • NFS
  • Virtual SAN
  • vMotion
  • vSphere Replication
  • vSphere Data Protection Backup
  • Virtual Machine

So we keep mentioning shares, reservations, and limits. Let’s go and define these now so we know how to apply them.

  • Shares = this is a number from 1-100 reflecting the priority of a system traffic type against the other types active on the same physical adapter. For example, say you have three types of traffic: iSCSI, FT, and replication. You assign iSCSI and FT 100 shares each and replication 50 shares. If the link is saturated, iSCSI gets 40% of the link, FT gets 40%, and replication gets 20% (100+100+50 = 250 total shares).
  • Reservation = this is the guaranteed bandwidth on a single physical adapter, measured in Mbps. The total reservations cannot exceed 75% of the bandwidth of the physical adapter with the smallest capacity. For example, if you have 2x 10Gbps NICs and 1x 1Gbps NIC, the maximum amount of bandwidth you can reserve is 750Mbps. If a traffic type doesn't use all of its reserved bandwidth, the host frees it up for other things to use; however, that unused reservation does not count toward allowing new VMs to be placed on the host, since the system may actually need it for the reserved type.
  • Limit = this is the maximum amount of bandwidth, in Mbps or Gbps, that a system traffic type can consume on a single physical adapter.

So what has changed? The following functionality is removed if you upgrade from v2 to v3:

  • All user-defined network resource pools, including the associations between them and existing port groups.
  • Existing associations between ports and user-defined network resource pools. Version 3 doesn't support overriding resource allocation at the port level
  • CoS tagging of traffic associated with a network resource pool. NIOC v3 doesn't support marking traffic with CoS tags. In v2 you could apply a QoS tag (which applied a CoS tag) to signify that one type of traffic is more important than the others. If you keep NIOC v2 on the distributed switch, you can still apply this.

Also be aware that changing a distributed switch from NIOC v2 to NIOC v3 is disruptive: your ports will go down.

Another new thing in NIOC v3 is the ability to configure bandwidth for individual virtual machines. You apply this using a network resource pool and a bandwidth allocation on the physical adapter that carries the traffic for the virtual machine.

Bandwidth reservation integrates tightly with admission control. A physical adapter must be able to supply the minimum bandwidth to the VM's network adapters, and the reservation for a new VM must be less than the free quota in the pool. If these conditions aren't met, the VM won't power on. Likewise, DRS will not move a VM unless the target host can satisfy the above, and DRS will migrate a VM in order to satisfy bandwidth reservations in certain situations.

Enable / Disable Network I/O Control

This is simple enough to do:

  1. Navigate to the distributed switch via Networking
  2. Right Click on the distributed switch and click on Edit Settings
  3. From the Network I/O Control drop down menu, select Enable
  4. Click OK.

To disable do the above but click, wait for it……Disable.

Monitor Network I/O Control

There are many ways to monitor your networking. You can go as deep as packet captures, or as light as checking the performance graphs in the web client. This is all up to you, of course. I will list a few ways and what to expect from them here.

  1. Packet Capture – VMware includes a packet capture tool, pktcap-uw. You can use it to write .pcap and .pcapng files, then use a tool like Wireshark to analyze the data (see the sketch after this list)
  2. NetFlow – You can configure a distributed switch to send reports to a NetFlow collector. Version 5.1 and later support IPFIX (NetFlow version 10)
  3. Port Mirroring – This lets you take one port and send all the traffic that flows across it to another. This also requires a distributed switch
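
Here is a minimal pktcap-uw sketch, assuming vmnic0 is the uplink you care about; the output path is a placeholder:

# capture 100 frames from uplink vmnic0 into a pcap file
pktcap-uw --uplink vmnic0 -c 100 -o /tmp/vmnic0.pcap
# the file can then be copied off and opened in Wireshark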

Objective 2.1 Configure Advanced Policies/Features and Verify Network Virtualization Implementation (Part 2)

Wrapping up this objective we are going to cover the following:

  • Configure LACP on Uplink Port Groups
  • Describe vDS Security Policies / Settings
  • Configure dvPort group blocking policies
  • Configure Load Balancing and failover policies
  • Configure VLAN / PVLAN settings
  • Configure traffic shaping policies
  • Enable TCP Segmentation Offload Support for a Virtual Machine
  • Enable Jumbo Frames support on appropriate components
  • Determine appropriate VLAN configuration for a vSphere implementation

Configure LACP on Uplink Port Groups

You most likely already know what LACP is. For those of you that don't, however, a brief definition. LACP stands for Link Aggregation Control Protocol and is part of the IEEE 802.3ad specification. It allows you to take multiple physical links and bind them into a single logical link, with both ends negotiating and monitoring the bundle dynamically. "But wait!" you might say, isn't that basically what load balancing does? Sort of. With or without LACP, each individual traffic flow is still hashed onto one physical link, so a single data stream will never exceed the bandwidth of a single link; with 2x 1Gb connections, any one stream still tops out at 1Gb. What the LAG buys you is aggregate bandwidth across many flows, plus automatic negotiation and failure detection on the bundle. LACP is often lumped in with EtherChannel or bonding, or even occasionally trunking (I don't like using "trunking" because it can mean a number of things).

LACP is only supported on a vDS, and you must configure your uplink port group a specific way. There are also a couple of other restrictions: LACP does not support port mirroring, does not exist in host profiles, and you can't set one up between two nested hosts. One important thing to note: although ESXi 5.1 only supports LACP with IP hash load balancing, starting with 5.5 all of the load balancing methods are supported. Now, without further ado, let's see how we create one.

  1. Go ahead to the Networking view from the Home screen
  2. Click on the vDS you are going to add the LAG to
  3. Then click on Manage and then LACP as in the picture
  4. Click on the plus symbol to add one (+)
  5. On the new screen that pops up now you will need to enter a name for the LAG
  6. You will also need to set up how many ports will be in it (these are going to be associated with your physical NICs) and the mode of the LACP (Active or Passive)
  7. You also have the ability to setup the Load balancing mode and if you need to attach to a VLAN or trunk it if there will be multiple VLANs going over this link – Here is the picture
  8. When all is said and done, it will look like this
  9. In order to delete the LAG, highlight the one you wish to get rid of and click the red ‘X’

Describe vDS Security Policies / Settings

vDS security policies were already covered in a previous blog post, so I won't go too far in depth on them here. But a basic listing is as follows:

  1. Promiscuous Mode – when set to Accept, this allows a VM to see all frames passed on the vDS that are allowed under the VLANs it is connected to
  2. MAC Address Changes – if set to Reject and the effective MAC address doesn't match what is in the .vmx file, inbound frames are dropped
  3. Forged Transmits – if set to Reject, any outbound frame whose source MAC doesn't match the adapter's MAC is dropped

Configure dvPort group blocking policies

dvPort group blocking is the ability to shut down all the ports on a port group, or to block all traffic from a single port on a dvPort group. Why would you want to do this? In my opinion it is meant for when a VM may have a virus on it and you need to shut it down quickly, but you can also use it for troubleshooting or testing purposes. It will obviously disrupt network flow for whatever you apply the policy to. I won't go too far into it since it's not a difficult concept: at the port group level, right-click the port group you want to block, click Settings, go to Miscellaneous, set the drop-down for Block All Ports to Yes, and click OK. Here is your picture.

You can also navigate to an individual port, right click on it, go to Settings, and block just that port.

Configure Load Balancing and failover policies

On the load balancing menu, we can choose from the following wonderful items:

  1. Route based on the originating virtual port – VMware assigns each virtual port to a physical network card, and traffic from that virtual machine will always be forwarded to that physical card unless the adapter fails. Traffic for that virtual machine is also received on that same physical card.
  2. Route based on IP Hash – This takes the source and destination IP addresses of each packet sent from the VM and hashes them to pick an uplink. This creates CPU overhead.
  3. Route based on Source MAC Hash – In this policy the virtual machine's MAC address is used to map it to a physical uplink. Traffic once again uses that same uplink for incoming and outgoing traffic unless something goes kaboom.
  4. Route based on physical NIC load – This is only available on a vDS. If an uplink sits at 75% load or more for over 30 seconds, some of the traffic is moved to another uplink that has available bandwidth.

These can be accessed at the switch level or the port group level (on a vDS in the web client you configure them at the port group level). On a standard switch you can choose your load balancing policy on the switch itself or on the port group – here is the needed picture on the vDS side

For failover policies you have the following options: Network Failure Detection, Notify Switches, Failback, and the ability to choose your Active, Standby, and Unused uplinks. We will go over each of these to get a good description of what they are.

  1. Network Failure Detection – you have the options of Link Status Only and Beacon Probing. Link status just relies on what the physical NIC reports to ESXi; this can detect failures such as removed cables and physical switch power failures. Beacon probing sends out beacon probes and receives them back on the other NICs, and uses that information to determine link failures. NICs must be in Active/Standby or Active/Active mode, not Unused
  2. Notify Switches – this option simply notifies the physical switch when there is a failover, allowing faster convergence when the switch has to move traffic to a different uplink
  3. Failback – this determines whether a NIC returns to active status after recovering from a failure. If set to Yes, the adapter that had been promoted from standby goes back to standby, and the recovered NIC goes active again
  4. Failover Order – determines the order in which the NICs fail over, and whether they are used at all (see the CLI sketch after this list)
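
Incidentally, on a standard switch you can inspect and set the teaming and failover policy from the host CLI as well. A sketch, with vSwitch0 as a placeholder (note that "route based on physical NIC load" is a vDS-only option, so it doesn't appear here):

# show the current teaming/failover policy for vSwitch0
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
# set load balancing back to the default of originating virtual port ID
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid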

Configure VLAN / PVLAN settings

VLANs and PVLANs are both tools to do the same thing: segregate your network. Using them, you can separate your network into multiple pieces, just like partitioning your hard drive. And just like chopping a hard drive into multiple pieces, you run into the same limitation: you have not increased the size or speed of the underlying structure. You still have one physical hard drive to serve your I/O needs. Likewise, with VLANs you have sectioned your network into multiple pieces, but you have not increased your bandwidth or made it any faster. So make sure when you use these tools that you are using them for the proper purpose. VLAN tagging can be applied in three places:

  1. External Switch Tagging (EST) – all tagging is performed by the physical switch; the ESXi host is oblivious to what is going on
  2. Virtual Switch Tagging (VST) – tagging is done by the virtual switch before traffic leaves the host. You will need to set the VLAN ID on the port group
  3. Virtual Guest Tagging (VGT) – this is where the vNIC inside the VM does the tagging. The port group will need an ID of 4095 to set it to trunking

OK, so that covers VLANs; what are PVLANs? PVLANs are an extension of VLANs. They require a physical switch that supports them, and they can only live on a vDS. They are used to further segment the network. That seems a bit redundant, but hang in with me as I go through the types, and it will perhaps become clearer why you might want to use them.

  1. Primary PVLAN – this is the original VLAN that is being divided. All other groups exist in the secondaries. The only group in the primary PVLAN is the promiscuous one
  2. Secondary PVLAN – this exists only inside a primary. Each secondary has a specific PVLAN ID associated with it, and each packet traveling through it is tagged with that ID as if it were a normal VLAN. The physical switch applies behavior based on the VLAN ID found in each packet
  3. Promiscuous PVLAN – exists in the primary and can communicate with any of the secondary PVLANs and the outside network. Routers are typically placed here to route traffic
  4. Isolated – a type of secondary PVLAN whose members can only send packets to and from the promiscuous PVLAN. They can't send packets to other computers in the isolated PVLAN, or to other PVLANs
  5. Community – the last sub-type of PVLAN. Members can communicate with any virtual machine in the same community PVLAN and also with the promiscuous PVLAN

Alright, we got the wall of text out of the way, so how do we configure these? To configure VLANs on a standard switch, go to the host whose networking we want to change, then to Manage and Networking. Then click on the pencil to edit the port group we want to add a VLAN to. When we do, we get this screen here:

We can type directly into the VLAN ID box and away we go. We can also set it to trunk (4095) if we are going to hand VLAN duty to the virtual machine's NIC.

For distributed switches it is roughly the same; the place where we access the port group is just a little different. Navigate to the distributed switch, click on the port group you need to tag, then click Manage, then Settings, and click Edit. This window will now manifest itself:

And since we are already here, you can set the PVLANs here as well; clicking on the VLAN type exposes the additional types. The PVLAN table must be configured on the distributed switch first, however. You do that by right-clicking the distributed switch and then clicking Edit Private VLAN Settings. Here is that picture:

As you can see above, you can choose the primary VLAN ID and also the IDs of the secondary PVLANs, along with which type goes with which ID. After this is done, you can associate the port group with the PVLAN; here is that picture:

The VLAN ID drop-down now lets you choose which PVLAN you are associating with the port group.

Configure traffic shaping policies

Traffic shaping policies are a fancy way of saying we are going to direct traffic to do our bidding. Depending on the switch you are working on, you can shape ingress (incoming) traffic only, or both ingress and egress (outgoing) traffic. I shouldn't need to tell you at this point which switch has the expanded capability.

On standard switches you can work with it on the switch or on the port group; to get there, just edit the settings of either. Obligatory picture inserted here:

It gives you the options for Average Bandwidth, Peak Bandwidth, and Burst Size. I will first give you the really dry definitions of each and then try to simplify it a little bit.

  1. Average Bandwidth – the number of bits per second, averaged over time, to allow across a port
  2. Peak Bandwidth – the number of bits per second to allow across a port when it is sending a burst of traffic. This is not allowed to be smaller than the average
  3. Burst Size – the maximum number of bytes to allow in a burst. If a port needs more bandwidth than the average allows, it may be permitted to temporarily transmit at a higher speed, up to this many bytes

To put this another way: a port is restricted to the average bandwidth unless the peak bandwidth is higher, in which case the virtual machine may run at up to the peak rate until it has transmitted the number of bytes allowed by the burst size. And now for the picture of the dvS port group traffic shaping:

You can find this by navigating to the dVS under Networking and then choosing the dvPort Group you are interested in applying this to and editing the settings. You notice on the above that you have both incoming and outgoing traffic.

Enable TCP Segmentation Offload support for a virtual machine

What is TCP segmentation offload? In normal TCP operation, the CPU takes large data chunks and splits them into TCP segments. This is one more job for the CPU, and it adds up over time. If TSO is enabled on the host and along the transmission path, however, this job is handed off to the NIC, freeing CPU cycles for more important things. Obviously your NICs will need to support this technology. By default, VMware turns it on if your NICs support it, but occasionally you may want to toggle it. To do that, go to Manage for the host, click on Advanced System Settings, and set the Net.UseHwTSO parameter to 1 to enable or 0 to disable.
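
The same toggle is reachable from the host CLI (a sketch):

# check the current hardware TSO setting
esxcli system settings advanced list -o /Net/UseHwTSO
# disable hardware TSO (set -i 1 to enable it again)
esxcli system settings advanced set -o /Net/UseHwTSO -i 0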

Enable Jumbo Frames support on appropriate component

A very quick definition of jumbo frames: an MTU larger than 1500. Yes, that means an MTU of 1501 is technically a jumbo frame. Why do we want jumbo frames? When we increase the size of each frame we send, the CPU has fewer of them to deal with for the same amount of data, which lets the CPU give more attention to our VMs. This is a good thing.

There are a few things we need to make sure of when we enable jumbo frames. We can't just enable them on one particular device; they need to be enabled end-to-end on every device or it won't work. You can also run into problems with things like WAN accelerators and similar devices, because a lot of them like to fragment packets. You will also need to know the particular settings for your network and storage devices: occasionally some of them need a larger frame size than you are pushing through ESXi in order to accommodate the frame. For example, in general you would set the MTU on a Force10 to a 12K frame size while setting your ESXi host to 9K, to accommodate the overhead on the frames.

OK, so where can we enable them on the ESXi side? We can actually enable them in three places: on a switch (standard or distributed), on a VMkernel adapter, and on a virtual machine's NIC. So let's start posting pictures.

Distributed Switch
1. Navigate to the Networking and click on the Distributed Switch you want to modify
2. Right Click on the switch and click on settings and then Edit Settings
3. Click on Advanced and change MTU to size desired

Standard Switch
1. From the Home screen click on Hosts and Clusters and then navigate to the host you want to modify
2. Click on the Manage and then Networking sub-tab. Then click on the Virtual switches on the left
3. In the middle, click on the switch you wish to modify and then click on the pencil for it

4. Change MTU as desired

VMkernel Adapter
1. From where we were just a minute ago, take a small step down to VMkernel adapters (Hosts&Clusters ->Host->Manage->Networking)
2. Click on the VMkernel adapter and then click on the pencil
3. On the screen that pops up, choose NIC settings and change to the desired MTU

Virtual Machine
1. You will need to make sure the VM has a VMXNET2 or VMXNET3 adapter.
2. You will need to set the MTU inside the guest OS.
3. If you currently have an e1000 adapter, copy its MAC address, create a new VMXNET3 adapter with that MAC, and disable the old one.
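
For the standard switch and VMkernel adapter steps, the host CLI works too. A sketch, with vSwitch0 and vmk1 as placeholders:

# set a 9000-byte MTU on a standard switch
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000
# set a 9000-byte MTU on a VMkernel adapter
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# verify
esxcli network ip interface list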

Determine appropriate VLAN configuration for a vSphere implementation

A lot of this is going to be based on your existing configuration (if you have one), and your network administrator will need to be involved. You know what the purposes of VLANs and PVLANs are and should be able to figure out whether they are needed. Is there too much broadcast traffic, so you need to create a separate broadcast domain to cut down on the chatter? Then sure, but keep their limitations in mind as well: you are not adding bandwidth or capacity to your network.


Objective 2.1 Configure Advanced Policies/Features and Verify Network Virtualization Implementation (Part 1)

Welcome once again. We are going to go over the following points under this objective.

  • Identify vSphere Distributed Switch capabilities
  • Create / Delete a vSphere Distributed Switch
  • Add / Remove ESXi hosts from a vSphere Distributed Switch
  • Add / Configure / Remove dvPort Groups
  • Add /Remove uplink adapters to dvUplink groups
  • Configure vSphere Distributed Switch general and dvPort group Settings
  • Create / Configure / Remove virtual adapters
  • Migrate virtual machines to/from a vSphere Distributed Switch
  • Configure LACP on Uplink Port Groups
  • Describe vDS Security Policies / Settings
  • Configure dvPort group blocking policies
  • Configure Load Balancing and failover policies
  • Configure VLAN / PVLAN settings
  • Configure traffic shaping policies
  • Enable TCP Segmentation Offload Support for a Virtual Machine
  • Enable Jumbo Frames support on appropriate components
  • Determine appropriate VLAN configuration for a vSphere implementation

So most of what we are going to go over will be pictures (yay!!). Most of the above will stick with you better if you go through it a few times in the client; I know it does for me. Following along with my screenshots should give you a better and faster experience. So without further ado,

Identify vSphere Distributed Switch Capabilities

So I will first bore you with the long-winded explanation of what a vDS is. With a standard switch, the management plane and the data plane exist together, and you have to control the configuration on every host individually. The distributed switch, on the other hand, separates the management plane from the data plane. What does this mean for you? It means you can create the configuration just once and push it down to every host attached to that switch. The data plane still exists on each host; this piece is called the host proxy switch.

The Distributed Switch is made up of two abstractions that you use to create your configuration. These are:

  • Uplink Port Group: This defines the physical connections for each host. You create the number of uplinks you want each host to have; for example, if you create 2 uplinks in this group, you can map 2 physical NICs on each host to the distributed switch. You can set failover and load balancing on it and have that apply to all the hosts.
  • Distributed Port Group: This provides the network connectivity for your VMs. You can configure teaming, load balancing, failover, VLANs, security, traffic shaping, and more on them. These settings get pushed to every host that is part of the distributed switch.

So as far as the abilities of a vDS vs a standard switch, here is a quick list of things that vDS can do.

  • Inbound Traffic Shaping= this allows you to throttle bandwidth coming into the switch.
  • VM Port Blocking= You can block VM ports in case of viruses or troubleshooting
  • PVLANS= You can use these to further segregate your traffic and increase security
  • Load-Based Teaming= An additional load balancing that works off the amount of traffic a queue is sending
  • Central Management= As mentioned before you can create the config once and push it to all attached hosts
  • Per Port Policy Settings= You can override policies at a port level giving you fine grained control
  • Port State Monitoring= Each port can be monitored separate from other ports
  • LLDP= Supports Link Layer Discovery Protocol
  • Network IO Control= Allows you the ability to set priority on port groups and now VMs even reserving bandwidth per VM
  • NetFlow= Used for troubleshooting, grabs a configurable number of samples of network traffic for monitoring
  • LACP= The ability to aggregate links together into a single link (must be used in conjunction with the physical switch)
  • Backing up and Restoring of Network Configuration= You can save and restore the switch configuration
  • Port Mirroring= Also used for monitoring you can send all traffic from one port to another
  • Statistics move with the Machine= Even after vMotioning, your statistics can stay with the VM

So those are all the reasons why you would want to use a vDS. There are a lot of cool features and capabilities that it makes available, and if you want to go even further, NSX is built on top of the vDS as well. So it would behoove anyone who wants to get into software-defined networking with VMware to get cozy with vDS tech. Let's go ahead and move on to the next point!

Create / Delete a vSphere Distributed Switch

So the easiest way to create a Distributed Switch is to do the following:

  1. From the Home Screen click on Networking in the Middle Pane, or you can also click on Networking in the Object Navigator
  2. Right Click on the Datacenter and this will be the menu that pops up
  3. Click on Distributed Switch and then click on New Distributed Switch
  4. You are now presented with the following Box
  5. Choose a name for your Distributed Switch
  6. You are now asked which version of distributed switch you want to create. Each version corresponds to an ESXi version, which also determines whether certain features are available. For example, on a version 6.0 switch NIOC v3 is available, but it wouldn't be if you chose version 5.5
  7. The next screen presents you with some options, among them the number of uplinks, enabling or disabling Network I/O Control, and whether to create a default port group and what its name will be
  8. We already mentioned what each of those options does, so I won't go over them again here. The next screen is just a recap of what you have chosen
  9. When it is all done it will show up on your screen like this
  10. The Distributed Switch has two groups underneath it. The first is the Port Group, the second is the Uplink group
  11. To Delete the Distributed Switch, you just need to right click on the switch and click Delete. Pretty simple huh?

Add / Remove ESXi hosts from a vSphere Distributed Switch

In order to add or remove hosts to your Distributed Switch, follow these directions:

  1. Click on Networking from the Home Screen
  2. Right Click on your Distributed Switch and see the following menu
  3. Click on Add and Manage Hosts – You are now given this menu
  4. Click the action you wish to perform, and then click “Next”
  5. You can now either add or remove hosts as you need
  6. You also have the ability to migrate Virtual Machines and VMKernel adapters on the next screens
  7. The last screen you have that is relevant to this objective is “Analyzing Impact” and then “Ready to complete”
  8. Click Finish and you have now accomplished your task

Add / Configure / Remove dvPort Groups

So after you click on Networking from the Home screen (which you should be quite familiar with at this point), you are presented with your distributed switch. If you chose to create a default port group when you created the dvSwitch, you should see it on the networking screen underneath your vDS. For example:

Now, if you need to configure a port group you already have, just click on that port group and then click on Manage. This gives you all sorts of options; choose the one you want and then click on Edit.

To add or remove a port group, you step one level back up.

To Add:

  1. Right click on your vDS and then click on Distributed Port Group or hover over it, and then you are presented with the following options
  2. Click on New Distributed Port Group and you are then asked to provide a name for it
  3. Click next and the next screen you are asked to configure the port group
  4. Next screen is your “Ready to complete” and click finish

To remove a port group:

  1. Right Click on the port group you wish to remove and then …….wait for it, delete it –That’s all there is to that

Add /Remove uplink adapters to dvUplink groups

There are a number of ways to assign or remove adapters on a distributed switch. I think the easiest is to right-click the distributed switch and then Add and Manage Hosts. You will need to assign the hosts' vmnics to uplinks. To do that, do the following:

  1. Right Click on the Distributed switch and click on Add and Manage Hosts
  2. You will now need to select the host or hosts whose adapters you want to assign to uplinks. You do that on this screen by clicking on the plus sign (+)
  3. Once the host is selected, it will look like the screenshot above
  4. Click on next and then you will be presented with this screen
  5. Manage Physical Adapters is the important thing we are looking for here – Go ahead and click next
  6. We now have the following screen
  7. Now we can click on one of the vmnics shown here to assign an uplink to a physical adapter
  8. Click on the uplink you are interested in assigning and then click Assign Uplink on the top- that will bring up this screen
  9. Choose the uplink you want to assign and click OK
  10. It will now show on your screen like this
  11. Go ahead through the remaining screens if there is anything else you need to change, do so
  12. Click Finish and you have now assigned the uplink.
  13. To remove, go through the above but instead of assigning uplink, choose the uplink and then “Unassign adapter”
  14. That’s all there is to it

Migrate virtual machines to / from a vSphere Distributed Switch

We are going to stay in the same place a while longer, but this is getting long, so I have unilaterally decided to split this objective into two parts. The last point we are going to cover in this part is migrating virtual machines in and out of our distributed switch. We should be able to accomplish this without any packet drops or loss of connectivity on the part of the virtual machine. We do this in the same place as before: under Networking, right-click on our vDS, but this time choose "Migrate Virtual Machine Networking". This is the screen you will now be presented with.

From this point it’s relatively straightforward. You choose the network you are coming from, if any, and choose the destination network you want to go to. Then go ahead and click next. This is the next screen.

You can pick the VMs you want to move here. It will only let you select a virtual machine if it can actually be moved; in this case, the rest of my virtual machines can't be moved because they are on hosts that haven't been added to the vDS. Click Next and then Finish, and you are done.

Good Lord this took me a while to write up between case load and correcting 5th Grade homework (not mine of course). Next up on Part 2 we will go ahead and cover the rest of the points under this objective.