Objective 2.2 – Configure Network I/O Control

After the last objective this one should be a piece of cake. There are only 4 bullet points that we need to cover for this objective. They are:

  • Identify Network I/O Control Requirements
  • Identify Network I/O Control Capabilities
  • Enable / Disable Network I/O Control
  • Monitor Network I/O Control

So first off, what is Network I/O Control? NIOC is a tool that lets you reserve and divide up bandwidth among the VMs and traffic types you choose. You can reserve a specific amount of bandwidth, or a larger share of the network resources, for an important VM so it is protected when there is contention. You can only do this on a vDS. vSphere 6 introduces a new version of NIOC, version 3. This new version of Network I/O Control allows us to reserve a specific amount of bandwidth for an individual VM. It also still uses the old standbys of reservations, limits, and shares. NIOC works in conjunction with DRS, HA, and admission control to be sure that wherever the VM is moved, it is able to maintain those guarantees. So let's get in a little deeper.

vSphere 6.0 is able to run both NIOC version 2 and version 3 at the same time (on different distributed switches). One of the big differences is that in v2 you set up bandwidth for the VM at the physical adapter level. Version 3, on the other hand, lets you set bandwidth allocation at the level of the entire distributed switch. Version 2 is compatible with all versions from 5.1 to 6.0; version 3 is only compatible with vSphere 6.0. You can upgrade a distributed switch to version 6.0 without upgrading NIOC to v3.

Identify Network I/O Control Requirements

As mentioned before, you need at least vSphere 5.1 for NIOC v2 and vSphere 6.0 for NIOC v3. You also need a distributed switch, and a vCenter Server to manage the whole thing. Just as important, you need a plan. You should know what you want to do with your traffic before you rush headlong into it, so you don't end up "redesigning" it 10 times.

Identify Network I/O Control Capabilities

Using NIOC you can control and shape traffic using shares, reservations, and limits. You can also allocate bandwidth based on traffic type. Using the built-in system traffic types, you can adjust network bandwidth and priorities. The types of traffic are as follows:

  • Management
  • Fault Tolerance
  • iSCSI
  • NFS
  • Virtual SAN
  • vMotion
  • vSphere Replication
  • vSphere Data Protection Backup
  • Virtual Machine

So we keep mentioning shares, reservations, and limits. Let’s go and define these now so we know how to apply them.

  • Shares = a number from 1-100 that reflects the priority of a system traffic type against the other types active on the same physical adapter. For example, say you have three types of traffic: iSCSI, FT, and Replication. You assign iSCSI and FT 100 shares each and Replication 50 shares. If the link is saturated, iSCSI gets 40% of the link, FT gets 40%, and Replication gets 20% (100/250, 100/250, and 50/250 of the total shares, respectively).
  • Reservation = the guaranteed bandwidth on a single physical adapter, measured in Mbps. Total reservations cannot exceed 75% of the bandwidth of the physical adapter with the smallest capacity. For example, if you have 2x 10Gbps NICs and 1x 1Gbps NIC, the maximum amount of bandwidth you can reserve is 750Mbps. If the traffic type doesn't use all of its reserved bandwidth, the host frees it up for other system traffic to use – but it does not count as free capacity for placing new VMs on that host. That way the bandwidth is still available if the reserved type actually does need it.
  • Limit = the maximum amount of bandwidth, in Mbps or Gbps, that a system traffic type can consume on a single physical adapter.
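
The arithmetic behind shares and reservations can be sketched in a few lines of Python. This is purely illustrative – the function names are made up and are not part of any VMware API – but it reproduces the two worked examples above:

```python
def split_by_shares(link_mbps, shares):
    """Divide a saturated link among traffic types in proportion to their shares."""
    total = sum(shares.values())
    return {name: link_mbps * s / total for name, s in shares.items()}

def max_reservation_mbps(nic_speeds_mbps, cap=0.75):
    """NIOC caps total reservations at 75% of the slowest physical adapter."""
    return min(nic_speeds_mbps) * cap

# The shares example: iSCSI 100, FT 100, Replication 50 on a saturated 10Gbps link
alloc = split_by_shares(10_000, {"iSCSI": 100, "FT": 100, "Replication": 50})
# iSCSI and FT each get 4000 Mbps (40%), Replication gets 2000 Mbps (20%)

# The reservation example: 2x 10Gbps NICs plus 1x 1Gbps NIC
cap = max_reservation_mbps([10_000, 10_000, 1_000])  # 750.0 Mbps
```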

So what has changed? The following functionality is removed when you upgrade from v2 to v3:

  • All user-defined network resource pools, including the associations between them and existing port groups.
  • Existing associations between ports and user-defined network resource pools. Version 3 doesn't support overriding resource allocation at the port level.
  • CoS tagging of the traffic that is associated with a network resource pool. NIOC v3 doesn't support marking traffic with CoS tags. In v2 you could apply a QoS tag (which applied a CoS tag) signifying that one type of traffic is more important than the others. If you keep NIOC v2 on the distributed switch, you can still apply this.

Also be aware that upgrading a distributed switch from NIOC v2 to NIOC v3 is disruptive. Your ports will go down.

Another new thing in NIOC v3 is the ability to configure bandwidth for individual virtual machines. You apply this through a network resource pool, and the allocation is enforced on the physical adapter that carries the virtual machine's traffic.

Bandwidth reservations integrate tightly with admission control. A physical adapter must be able to supply the minimum bandwidth to the VM's network adapters, and the reservation for the new VM must fit within the free quota of its network resource pool. If these conditions aren't met, the VM won't power on. Likewise, DRS will not migrate a VM to a host that cannot satisfy its reservation, and in certain situations DRS will migrate a VM specifically in order to satisfy its bandwidth reservation.
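
The power-on check described above boils down to a simple predicate. This is only a sketch of the logic – the names are invented for illustration, and the real checks live inside vCenter's admission control – but it captures the two conditions:

```python
def can_power_on(vm_reservation_mbps, pool_quota_mbps, pool_reserved_mbps,
                 adapter_free_mbps):
    """A VM powers on only if its bandwidth reservation fits both the
    network resource pool's free quota and the physical adapter's
    unreserved bandwidth."""
    pool_free_mbps = pool_quota_mbps - pool_reserved_mbps
    return (vm_reservation_mbps <= pool_free_mbps and
            vm_reservation_mbps <= adapter_free_mbps)

# Pool quota 2000 Mbps with 1700 Mbps already reserved leaves 300 Mbps free:
ok = can_power_on(200, 2000, 1700, 1000)   # fits -> VM powers on
blocked = can_power_on(500, 2000, 1700, 1000)  # exceeds pool quota -> blocked
```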

Enable / Disable Network I/O Control

This is simple enough to do:

  1. Navigate to the distributed switch via Networking
  2. Right Click on the distributed switch and click on Edit Settings
  3. From the Network I/O Control drop down menu, select Enable
  4. Click OK.

To disable, do the above but select, wait for it… Disable.

Monitor Network I/O Control

There are many ways to monitor your networking. You can go as deep as getting packet captures, or you can go as light as just checking out the performance graphs in the web client. This is all up to you of course. I will list a few ways and what to expect from them here.

  1. Packet Capture – ESXi includes a packet capture tool, pktcap-uw. You can use it to write .pcap and .pcapng files, then analyze the data with a tool like Wireshark.
  2. NetFlow – You can configure a distributed switch to send flow reports to a NetFlow collector. Version 5.1 and later support IPFIX (NetFlow version 10).
  3. Port Mirroring – This allows you to take one port and send all the traffic that flows across it to another port for analysis. This also requires a distributed switch.