Wrapping up this objective, we are going to cover the following:
Configure LACP on Uplink Port Groups
You most likely already know what LACP is. For those of you that don’t, however, here is a brief definition. LACP stands for Link Aggregation Control Protocol and is part of the IEEE 802.3ad specification. This protocol allows you to take multiple physical links and bind them into a single logical link. “But wait!” you might say, “isn’t that basically what load balancing does?” Not quite. The links used by the vSwitch load balancing policies are still separate links, meaning at any one time your data stream will never get more than the bandwidth of a single link. So if you had 2x 1Gb connections, any one stream is only ever using 1Gb of speed. To be fair, that part is still true under LACP, since any given flow is hashed to one physical link. But LACP, also known in some groups as EtherChannel or bonding, or even occasionally trunking (I don’t like using “trunking” because it can mean a number of things), distributes your flows across all the links in the group, and adds dynamic negotiation with the physical switch, so misconfigured or failed links are detected and traffic is moved over automatically.
LACP is only supported on a vDS, and you must configure your uplink port group a specific way. There are also a couple of other restrictions: LACP does not support Port Mirroring, does not exist in Host Profiles, and you can’t set one up between two nested hosts. One other important thing to note: although under ESXi 5.1 you can only use LACP with IP hash load balancing, starting with 5.5 all of the load balancing methods are supported. Now without further ado, let’s see how we create one.
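Once a LAG is configured, you can sanity-check it from the ESXi shell. A quick sketch, assuming a 5.5+ host attached to a vDS with LACP enabled (these commands only run on an ESXi host, so treat this as a reference snippet):

```shell
# Run on the ESXi host (assumes vSphere 5.5+ enhanced LACP on the vDS).
# Show the LACP configuration for the host's distributed switches:
esxcli network vswitch dvs vmware lacp config get

# Show negotiation status for each LAG and its member uplinks --
# handy for confirming the physical switch actually agreed to the bundle:
esxcli network vswitch dvs vmware lacp status get
```

If the status output shows uplinks stuck out of sync, the physical switch side of the channel is usually the first place to look.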
Describe vDS Security Policies / Settings
vDS Security Policies were already covered in a previous blog post, so I won’t go too far in depth on them here. But a basic listing is as follows:
Configure dvPort group blocking policies
dvPort group blocking is the ability to shut down all the ports on a port group. You can also block all traffic from a single port on a dvPort group. Why would you want to do this? In my opinion this is meant for when a VM possibly has a virus on it and you need to shut it down quickly, or you can use it for troubleshooting or testing purposes. This is obviously going to disrupt network flow to whatever you apply this policy to. I won’t go too far into it since it’s not really a difficult concept. At the port group level, right-click on the port group you are looking to block, then click on Settings. Next go to Miscellaneous, click on the drop-down for Block All Ports, choose ‘Yes’, and then click OK. Here is your picture.
You can also navigate to an individual port, right-click on it, go to Settings, and block just that port.
Configure Load Balancing and failover policies
On the load balancing menu, we can choose from the following wonderful items:
These can be accessed at a switch level or at a port group level (on a vDS in the Web Client you will need to configure this at the port group level). On a standard switch you can choose your load balancing policy on the switch itself or on the port group. Here is the needed picture on the vDS side:
For failover policies you have the following options: Network Failure Detection, Notify Switches, and Failback, and you also have the ability to choose your Active, Standby, and Unused uplinks. We will go over each of those to get a good description of what they are.
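On the standard switch side, the same load balancing and failover settings are visible from the ESXi shell. A reference sketch, assuming example names `vSwitch0` and `VM Network` (host-only commands, shown here for illustration):

```shell
# Run on the ESXi host; vSwitch0 / "VM Network" are example names.
# Show load balancing, failure detection, notify switches, failback,
# and the active/standby uplink order for a standard switch:
esxcli network vswitch standard policy failover get -v vSwitch0

# The same policy at the port group level, which overrides the switch:
esxcli network vswitch standard portgroup policy failover get -p "VM Network"
```

This is a quick way to confirm that a port group really is overriding (or inheriting) the switch-level policy.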
Configure VLAN / PVLAN settings
VLANs and PVLANs are both tools to do the same thing: segregate your network. Using them, you can separate your network into multiple pieces, just like partitioning your hard drive. And just like chopping your hard drive up into multiple pieces, you run into the same limitations. You have not increased the capacity or speed of the underlying structure. You still have one physical hard drive to serve your I/O needs. Likewise with your VLAN: you have sectioned your network into multiple pieces, but you have not increased your bandwidth or made it any faster. So make sure when you use these tools that you are using them for the proper purpose. VLANs can be applied at multiple places. The three places are as follows:
Ok, so that covers VLANs; what are PVLANs? PVLANs are an extension of VLANs. They require a physical switch that supports them, and they can only reside on a vDS. They are used to further segment the network. That seems a bit redundant, but hang in with me as I go into the types and it will perhaps explain a bit more why you might want to use them.
Alright, we got the wall of text out of the way. So how do we configure these? To configure VLANs on a standard switch, we will need to go to the host whose networking we want to change and then go to Manage and Networking. Then we will click on the pencil to edit the port group we want to add VLANs to. When we do, we get this screen here:
We can type directly in the VLAN ID box and away we go. We can also set it to trunk (4095) if we are going to hand VLAN tagging duty to the virtual machine’s NIC (Virtual Guest Tagging).
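The same change can be made from the ESXi shell. A sketch using example names and IDs (run on the host itself):

```shell
# Run on the ESXi host; "VM Network" and VLAN 100 are just examples.
# Tag the port group with VLAN 100:
esxcli network vswitch standard portgroup set -p "VM Network" -v 100

# Or set 4095 to trunk all VLANs through to the guest (VGT):
esxcli network vswitch standard portgroup set -p "VM Network" -v 4095

# Verify the VLAN column in the port group listing:
esxcli network vswitch standard portgroup list
```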
For distributed switches it is roughly the same way; where we need to access the port group is just a little different. We need to navigate to the distributed switch and then click on the port group that we need to tag. Then click on Manage, then Settings, and click Edit. This window will now manifest itself:
And since we are already here, you can also set the PVLANs. By clicking on the VLAN type you can get to the additional types. The PVLANs must be configured on the distributed switch first, however. You do that by right-clicking on the distributed switch and then clicking on Edit Private VLAN Settings. Here is that picture:
As you can see above, you can choose the primary VLAN ID and also the IDs of the secondary PVLANs, and which type goes with which ID. After this is done, you can associate the port group with the PVLAN; here is that picture:
The VLAN ID drop-down now lets you choose which PVLAN you are associating with the port group.
Configure traffic shaping policies
Traffic shaping policies are a fancy way of saying we are going to direct traffic to do our bidding. Depending on the switch you are working on, you can shape ingress (incoming) traffic only, or both ingress and egress (outgoing) traffic. I shouldn’t need to tell you at this point which switch has the expanded capability.
On Standard switches you can work with it on the switch, or the port group. To get there you would just need to edit the settings for either. Obligatory picture inserted here:
It gives you the options for Average Bandwidth, Peak Bandwidth, and Burst Size. I will first give you the really dry definitions of each and then try to simplify it a little bit.
So to put this in other words: you are restricting a port to your average bandwidth unless you have set a higher peak bandwidth. If you have, the virtual machine is allowed to hit the peak bandwidth until it has used up the number of kilobytes allowed by the burst size. And now for the picture of the dVS port group traffic shaping:
You can find this by navigating to the dVS under Networking and then choosing the dvPort Group you are interested in applying this to and editing the settings. You notice on the above that you have both incoming and outgoing traffic.
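To get a feel for how the three numbers interact, here is a rough back-of-the-envelope sketch. The numbers are made up for illustration, and this treats the shaper as a simple token bucket that refills at the average rate, which is an approximation rather than VMware’s exact algorithm:

```shell
# Hypothetical policy: average 100,000 Kbps, peak 200,000 Kbps,
# burst size 102,400 KB. These are example values, not recommendations.
avg_kbps=100000
peak_kbps=200000
burst_kb=102400

# While bursting, the port drains its burst bonus at (peak - average):
extra_kbps=$((peak_kbps - avg_kbps))

# Burst size is in kilobytes, bandwidth is in kilobits: convert KB -> Kb.
burst_kbits=$((burst_kb * 8))

# Approximate seconds the port can sustain peak before falling back
# to the average rate (integer math, so this rounds down):
burst_seconds=$((burst_kbits / extra_kbps))
echo "$burst_seconds"
```

With these example numbers the port could stay at peak for roughly eight seconds, which shows why a tiny burst size makes the peak setting almost meaningless.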
Enable TCP Segmentation Offload support for a virtual machine
What is TCP Segmentation Offloading? In normal TCP operation on your machine, the CPU takes large data chunks and splits them into TCP segments. This is one more job for the CPU to do, and it can add up over time. If TSO is enabled on the host and along the transmission path, however, this job is handed off to the NIC, freeing CPU cycles to work on more important things. Obviously your NICs will need to support this technology. By default, VMware has this on if your NICs support it. Occasionally you may want to toggle it. In order to do that, go to Manage for the host, then click on Advanced System Settings and set the Net.UseHwTSO parameter to 1 to enable it or 0 to disable it.
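The same toggle is available from the ESXi shell. A reference sketch (run on the host; the option path below matches recent ESXi builds, but double-check it on yours):

```shell
# Run on the ESXi host.
# Check whether hardware TSO is currently enabled (IntValue 1 = on):
esxcli system settings advanced list -o /Net/UseHwTSO

# Disable (0) or re-enable (1) hardware TSO:
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
```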
Enable Jumbo Frames support on appropriate component
A very quick definition of jumbo frames: a jumbo frame is any frame with an MTU larger than 1500. Yes, that means an MTU of 1501 is technically a jumbo frame. Why do we want jumbo frames? When we increase the size of each frame we send, the CPU has fewer of them to deal with for the same amount of data. This of course allows our CPU to give more attention to our VMs. This is a good thing.
There are a few things we need to make sure of when we enable jumbo frames. We can’t just have them enabled on one particular device; we need them enabled end-to-end on every device or it won’t work. You will also run into problems with things like WAN accelerators and other such devices, because a lot of them like to fragment packets. You will also need to know the particular settings for your network and storage devices. Occasionally some of them will need a larger frame size than you are pushing through ESXi in order to accommodate the frame. For example, in general you would set the MTU on a Force10 to a 12K frame size while setting your ESXi host to 9K, to accommodate overhead on the frames.
Ok, so where can we enable them on the ESXi side? We can actually enable them in three places: on a switch (standard or distributed), on a VMkernel adapter, and on a virtual machine’s NIC. So let’s start posting pictures.
1. Navigate to the Networking and click on the Distributed Switch you want to modify
2. Right Click on the switch and click on settings and then Edit Settings
3. Click on Advanced and change MTU to size desired
1. From the Home screen click on Hosts and Clusters and then navigate to the host you want to modify
2. Click on the Manage and then Networking sub-tab. Then click on the Virtual switches on the left
3. In the middle, click on the switch you wish to modify and then click on the pencil for it
4. Change MTU as desired
1. From where we were just a minute ago, take a small step down to VMkernel adapters (Hosts&Clusters ->Host->Manage->Networking)
2. Click on the VMkernel adapter and then click on the pencil
3. On the screen that pops up, choose NIC settings and change to the desired MTU
1. You will need to make sure that you have a VMXNet2 or 3 adapter set on the VM.
2. You will need to set the MTU inside the VM.
3. If you currently have an e1000 adapter, you will need to copy its MAC address, create a new VMXNet3 adapter with that MAC, and disable the old adapter.
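The switch and VMkernel steps above can also be done from the ESXi shell. A sketch with example names (host-only commands):

```shell
# Run on the ESXi host; vSwitch0 and vmk1 are example names.
# Set a 9000-byte MTU on a standard switch:
esxcli network vswitch standard set -v vSwitch0 -m 9000

# Set the same MTU on a VMkernel adapter:
esxcli network ip interface set -i vmk1 -m 9000

# Verify end to end with a don't-fragment ping sized for a 9000 MTU
# (8972 bytes of payload + 28 bytes of IP/ICMP headers); the target
# IP is an example and must also be jumbo-enabled:
vmkping -d -s 8972 10.0.0.1
```

If the `vmkping` fails while a standard ping succeeds, some device in the path is still at 1500.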
Determine appropriate VLAN configuration for a vSphere implementation
A lot of this is going to be based on an existing configuration (if you have one), and your network administrator will need to be involved. You know what the purpose of VLANs and PVLANs is and should be able to figure out if they are needed. Is there too much broadcast traffic, and do you need to create a separate broadcast domain to cut down on that chatter? Then sure, but keep in mind their limitations as well. You are not adding bandwidth or capacity to your network.