Objective 3.1: Manage vSphere Storage Virtualization

Wait, wait, wait…. Where did Objective 2.2 go? I know… I didn't include it since I have already covered everything it asks about in previous objectives. So moving on to storage.

So we are going to cover the following objective points.

  • Identify storage adapters and devices
  • Identify storage naming conventions
  • Identify hardware/dependent hardware/software iSCSI initiator requirements
  • Compare and contrast array thin provisioning and virtual disk thin provisioning
  • Describe zoning and LUN masking practices
  • Scan/Rescan storage
  • Configure FC/iSCSI LUNs as ESXi boot devices
  • Create an NFS share for use with vSphere
  • Enable/Configure/Disable vCenter Server storage filters
  • Configure/Edit hardware/dependent hardware initiators
  • Enable/Disable software iSCSI initiator
  • Configure/Edit software iSCSI initiator settings
  • Configure iSCSI port binding
  • Enable/Configure/Disable iSCSI CHAP
  • Determine use case for hardware/dependent hardware/software iSCSI initiator
  • Determine use case for and configure array thin provisioning

So let’s get started

Identify storage adapters and devices

Identifying storage adapters is easy. You have a ready-made list to refer to: the Storage Adapters view. To navigate to it do the following:

  1. Browse to the Host in the navigation pane
  2. Click on the Manage tab and click Storage
  3. Click Storage Adapters
    This is what you will see

As you can see, identification is relatively easy. Each adapter is assigned a vmhbaXX address. Adapters are grouped by controller type, and you are also given a description of the hardware, e.g. Broadcom iSCSI Adapter. You can find out a number of details about an adapter by looking down below and going through the tabs. One of those tabs, Devices, lists the devices behind that particular adapter, which brings us to our next item: storage devices.

Storage Devices is one more selection down from Storage Adapters, so you navigate to it the same way and just click Storage Devices instead. Now that we can see the devices, it's time to move on to naming conventions to understand why they are named the way they are.
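Before we move on, it is worth noting you can pull up the same adapter and device lists from the ESXi Shell if you prefer the command line. A quick sketch (the vmhba numbers will of course differ on your hosts):

esxcli storage core adapter list – this will list every storage adapter (vmhba) along with its driver, link state, and description
esxcli storage core device list – this will list the storage devices sitting behind those adapters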

Identify Storage Naming Conventions

You are going to have multiple names for each type of storage and device. Depending on the type of storage, ESXi will use a different convention or method to name each device. The first type is SCSI Inquiry identifiers. The host uses a command to query the device and uses the response to generate a unique name. They are unique, persistent, and will have one of the following formats:

  • naa.number = naa stands for Network Address Authority, and it is followed by a string of hex digits that identify the vendor, device, and LUN
  • t10.number = T10 is the technical committee responsible for SCSI storage interface standards (and plenty of other disk standards as well)
  • eui.number = eui stands for Extended Unique Identifier

You also have the path-based identifier. This is created if the device doesn't provide the information needed to generate the identifiers above. It will look like the following: mpx.vmhbaXX:C0:T0:L0 – this can be used just the same as the identifiers above. The C is for the channel, the T is for the target, and the L is for the LUN. It should be noted that this identifier type is neither unique nor persistent and could change on every reboot.

There is also a legacy identifier that is created. This is in the format vml.number

You can see these identifiers on the pages mentioned above and also at the command line by typing the following.

  • “esxcli storage core device list”
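If you want to see the path-based runtime names (the vmhbaXX:C0:T0:L0 form) side by side with the naa/t10/eui identifiers they map to, the path listing is handy. A quick sketch:

esxcli storage core path list – this will show each path's runtime name along with the device identifier it points to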

Identify hardware/dependent hardware/software iSCSI initiator requirements

You can use three types of iSCSI initiators in vSphere: independent hardware, dependent hardware, and software initiators. The differences are as follows:

  • Software iSCSI = this is code built into the VMkernel. It allows your host to connect to iSCSI targets without any special hardware; a standard NIC works fine. This requires a VMkernel adapter. It is also able to use all CHAP levels
  • Dependent hardware iSCSI initiator = this device still depends on VMware for networking, iSCSI configuration, and management interfaces. This is basically iSCSI offloading. An example is the Broadcom 5709 NIC. This requires a VMkernel adapter (it will show up as both a NIC and a storage adapter). It is able to use all CHAP levels
  • Independent hardware iSCSI initiator = this device implements its own networking, iSCSI configuration, and management interfaces. An example is the QLogic QLA4052 adapter. This does not require a VMkernel adapter (it will show up as a storage adapter only). It is only able to use the unidirectional CHAP and "unidirectional CHAP unless prohibited by target" levels
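A quick way to check which of these initiator types a host actually has is from the command line. A small sketch:

esxcli iscsi adapter list – this will list every iSCSI adapter on the host (software, dependent, or independent) along with its driver and state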

Compare and contrast array thin provisioning and virtual disk thin provisioning

You have two types of thin provisioning. The biggest difference between them is where the provisioning happens. The array can thinly provision the LUN. In that case it presents the total logical size to the ESXi host, which may be more than the real physical capacity. If so, there is really no way for your ESXi host to know when you are running out of space. This obviously can be a problem. Because of this, Storage APIs – Array Integration was created. Using this feature with a SAN that supports it, your hosts are aware of the underlying storage and can tell how your LUNs are configured. The requirements are simply ESXi 5.0 or later and a SAN that supports the Storage APIs for VMware.

Virtual disk thin provisioning is the same concept but done for the virtual hard disk of the virtual machine. You create a disk and tell the VM it has more space than is actually allocated. Because of this, you will need to monitor the status of that disk in case the guest operating system starts trying to use all of that space.
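As a small illustration of the virtual disk side, you can create a thin provisioned VMDK straight from the ESXi Shell (the size and datastore path below are made-up examples):

vmkfstools -c 40G -d thin /vmfs/volumes/datastore1/testvm/testvm_1.vmdk – this creates a 40 GB virtual disk that only consumes datastore space as the guest writes to it

On the array side, esxcli storage core device list will show a Thin Provisioning Status for each device, assuming your array supports the Storage APIs.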

Describe zoning and LUN masking practices

Zoning is a Fibre Channel concept meant to restrict which servers can see which storage arrays. Zones define which HBAs (the cards in the server) can connect to which storage processors on the SAN. LUN masking, on the other hand, only allows certain hosts to see certain LUNs.

With ESXi hosts you want to use single initiator zoning or single initiator-single target zoning. The latter is preferred. This can help prevent misconfigurations and access problems.
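Most masking is done on the array, but you can also mask a LUN from the ESXi side using a MASK_PATH claim rule. A rough sketch (the rule number, adapter, and LUN values here are placeholders you would swap for your own):

esxcli storage core claimrule add --rule 500 --type location --adapter vmhba1 --channel 0 --target 0 --lun 20 --plugin MASK_PATH – this adds a rule that hides LUN 20 on that path
esxcli storage core claimrule load – this loads the new rule into the running configuration
esxcli storage core claimrule run – this applies the loaded rules so the LUN disappears from the host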

Rescan Storage

This one is pretty simple. In order to pick up new storage or see changes to existing storage, you may want to rescan your storage. You can do two different operations from the GUI client: 1) scan for new storage devices, and 2) scan for new VMFS volumes. The first will take longer than the second. You can also rescan for storage at the command line with the following commands:

esxcli storage core adapter rescan --all – this will rescan all adapters for new storage devices
vmkfstools -V – this will scan for new VMFS volumes

I have included a picture of the Web Client with the Rescan button circled in red.

Configure FC/iSCSI LUNs as ESXi boot devices

ESXi supports booting from Fibre Channel or FCoE LUNs as well as iSCSI. First we will go over Fibre Channel.

Why would you want to do this in the first place? There are a number of reasons. Among them, you remove the need to have storage in the servers themselves. This makes them cheaper and less prone to failure, since hard drives are the most likely component to fail. It also makes servers easier to replace: one server could die, you drop a new one in its place, change zoning, and away you go. You also access the boot volume through multiple paths, whereas a local boot disk generally has a single path, and if that fails, you have no backup.

You do have to be aware of the requirements though. The biggest one is to have a separate boot LUN for each server; you can't share one LUN across all servers. You also can't multipath to a boot LUN on an active-passive array. OK, so how do you do it? On an FC setup:

  1. Configure zoning on the fabric, then create the LUNs on the array and present them to the proper servers
  2. Then, using the FC card's BIOS, point the card at the boot LUN (the target WWN and LUN ID)
  3. Boot to the install media and install ESXi to the LUN

With iSCSI, the setup differs a little depending on what kind of initiator you are using. If you are using an independent hardware iSCSI initiator, you will need to go into the card's BIOS to configure booting from the SAN. With a software or dependent hardware initiator, you will need to use a network adapter that supports iBFT. Good recommendations from VMware include:

  1. Follow Storage Vendor Recommendations (yes I got a sensible chuckle out of that too)
  2. Use Static IPs to reduce the chances of DHCP conflicts
  3. Use different LUNs for VMFS and boot partitions
  4. Configure proper ACLs. Make sure each boot LUN is visible only to the host that boots from it
  5. Configure a diagnostic partition – with independent you can set this up on the boot LUN. If iBFT, you cannot
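On that last point about the diagnostic partition, you can check and set it from the command line as well. A quick sketch:

esxcli system coredump partition list – this shows the available diagnostic partitions and which one is active
esxcli system coredump partition set --enable=true --smart – this tells the host to automatically pick an accessible partition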

Create an NFS share for use with vSphere

Back in 5.5 and before, you were restricted to NFS v3. Starting with vSphere 6, you can now use NFS 4.1 as well. VMware has some recommendations for you about this, too.

  1. Make sure the NFS servers you use are listed in the HCL
  2. Follow recommendations of your storage vendor
  3. You can export a share as v3 or v4.1 but you can’t do both
  4. Ensure it’s exported using NFS over TCP/IP
  5. Ensure you have root access to the volume
  6. If you are exporting a read-only share, make sure it is consistent. Export it as RO and make sure when you add it to the ESXi host, you add it as Read Only.

To create a share do the following:

  1. On each host that is going to access the storage, you will need to create a VMkernel Network port for NFS traffic
  2. If you are going to use Kerberos authentication, make sure your host is set up for it
  3. In the Web Client navigator, select vCenter Inventory Lists and then Datastores
  4. Click the Create a New Datastore icon
  5. Select Placement for the datastore
  6. Type the datastore name
  7. Select NFS as the datastore type
  8. Specify an NFS version (3 or 4.1)
  9. Type the server name or IP address and the mount point folder name (or multiple IPs if using v4.1)
  10. Select Mount NFS read only – if you exported it that way
  11. Select which hosts will mount it
  12. Click Finish
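The wizard is the usual route, but if you just want to mount a share on a single host quickly, the same thing can be done from the ESXi Shell. A sketch (the server name, export path, and datastore names below are made up):

esxcli storage nfs add --host=nas01.lab.local --share=/vol/vsphere_ds --volume-name=NFS-DS1 – this mounts the export as an NFS v3 datastore
esxcli storage nfs41 add --hosts=nas01.lab.local --share=/vol/vsphere_ds --volume-name=NFS41-DS1 – this mounts it as an NFS 4.1 datastore instead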

Enable/Configure/Disable vCenter Server storage filters

When you look at your storage, add more, or do other similar operations, vCenter Server by default employs a set of storage filters. Why? Well, there are four filters. The filters and the explanation for each are as follows:

  1. config.vpxd.filter.vmfsFilter = this filters out LUNs that are already used by a VMFS datastore on any host managed by the vCenter Server. This is to keep you from reformatting them by mistake
  2. config.vpxd.filter.rdmFilter = this filters out LUNs already referenced by an RDM on any host managed by the vCenter Server. Again, this is protection so that you don't reformat them by mistake
  3. config.vpxd.filter.SameHostAndTransportsFilter = this filters out LUNs that are ineligible for use as extents because of storage type or host incompatibility. For instance, you can't add a Fibre Channel extent to a datastore that sits on an iSCSI LUN
  4. config.vpxd.filter.hostRescanFilter = this automatically rescans your hosts any time you perform datastore management operations. This tries to make sure you maintain a consistent view of your storage

And in order to turn them off, you will need to do it on the vCenter Server (makes sense, huh?). You will need to navigate to the vCenter Server object, then to the Manage tab and Settings > Advanced Settings.

You will then need to add these settings, since they are not there by default for you to change willy-nilly. So click Edit, type in the appropriate filter key as the name, and type false for the value. Like so
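For example, to turn off the host rescan filter you would add the following name and value pair (the other three filters work exactly the same way, just with their own keys from the list above):

config.vpxd.filter.hostRescanFilter = false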

Configure/Edit hardware/dependent hardware initiators

A dependent hardware iSCSI adapter still uses VMware networking and the iSCSI configuration and management interfaces provided by VMware. It presents two devices to VMware: a NIC and an iSCSI engine. The iSCSI engine shows up under Storage Adapters (vmhba). In order for it to work, though, you still need to create a VMkernel port for it and bind that port to the adapter's physical network port. Here is a picture of how it looks underneath your storage adapters for a host.

There are a few things to be aware of while using a dependent initiator.

  1. When you are using the TCP offload engine, you may see little or no activity on the NIC associated with the adapter. This is because the host passes all of the iSCSI traffic to the engine, bypassing the regular network stack
  2. The TCP offload engine has to reassemble packets in hardware, and there is a finite amount of buffer space. You should enable flow control in order to better manage the traffic (pause frames, anyone?)
  3. Dependent adapters support IPv4 and IPv6

To setup and configure them you will need to do the following:

  1. You can change the alias or the IQN if you want by going to the host, then Manage > Storage > Storage Adapters, highlighting the adapter, and clicking Edit
  2. I am assuming you have already created a VMkernel port by this point. The next thing to do is bind the adapter to that VMkernel port
  3. You do this by clicking on the iSCSI adapter in the list and then clicking Network Port Binding below
  4. Now click on the add icon to associate the NIC, and it will give you this window
  5. Click on the VMkernel port you created and click OK
  6. Go back to the Targets section now and add your iSCSI target. Then rescan and voila (a command line version of these last few steps follows below).
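For reference, here is roughly what those last few steps look like from the ESXi Shell (the adapter name, VMkernel port, and target IP are placeholders):

esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1 – this binds the VMkernel port to the iSCSI adapter
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260 – this adds a dynamic discovery (Send Targets) address
esxcli storage core adapter rescan --adapter=vmhba33 – this rescans just that adapter so the new LUNs show up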

Enable/Disable software iSCSI initiator

For a lot of reasons you might want to use the software iSCSI initiator instead of a hardware one. For instance, you might want to maintain a simpler configuration where the NIC doesn't matter, or you might just not have dependent-capable cards available. Either way, you can use the software initiator to work your bits. "But wait," you say, "software will be much slower than hardware!" You would perhaps have been correct with older revisions of ESX; however, the software initiator is so fast at this point that there is not much difference between them. By default, though, the software initiator is not enabled; you will need to add it manually. To do this, go to the same place we were before, under Storage Adapters. While there, click on the add icon (+) and click Add Software iSCSI Adapter. Once you do that, it will show up in the adapter list and allow you to add targets and bind NICs to it just like the hardware iSCSI adapters do. To disable it, just click on it and then, under Properties down below, click Disable.
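If you would rather do it from the command line, the software initiator can be flipped on and off there too. A quick sketch:

esxcli iscsi software set --enabled=true – this enables the software iSCSI initiator
esxcli iscsi software get – this confirms whether it is currently enabled
esxcli iscsi software set --enabled=false – this disables it again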

Configure/Edit software iSCSI initiator settings
Configure iSCSI port binding
Enable/Configure/Disable iSCSI CHAP

I'm going to cover all of these at the same time since they flow together pretty well. To configure your iSCSI initiator settings, you navigate to the iSCSI adapter. Once there, all your options are down at the bottom under Adapter Details. If you want to edit one, you click on the tab that has the setting and click Edit. Your Network Port Bindings are under there and are configured the same way we did before. The Targets are there as well. Finally, CHAP is something we haven't talked about yet, but it is there under the Properties tab, under Authentication. The type of adapter you have determines which CHAP options are available to you. Click Edit under the Authentication section and you will get this window

As you probably noticed, this is done at a per-storage-adapter level, so you can change it for different initiators. Keep in mind that iSCSI traffic itself is not encrypted, so the security is not really that great; CHAP is better thought of as a way to keep hosts from connecting to LUNs they shouldn't see.
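For completeness, CHAP can also be set per adapter from the command line. This is just a sketch with placeholder values, and it is worth double checking the parameter names against the --help output on your own build:

esxcli iscsi adapter auth chap set --adapter=vmhba33 --authname=chapuser --secret=SomeLongSecret --level=required --direction=uni – this requires unidirectional CHAP on that adapter
esxcli iscsi adapter auth chap get --adapter=vmhba33 – this shows the current CHAP settings for the adapter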

We have already pretty much covered why you would use certain initiators over others, and also thin provisioning, so I will leave those be and sign off this post (which keeps getting more and more loquacious).