SonicOS 7 NSv Getting Started Guide for ESXi

Configuring SR-IOV

For high performance requirements in a virtual environment, VMware ESXi provides two options for exposing a hardware NIC as a PCI device directly to the virtual machine guest OS. The first option is "pass-through" mode. The other option is "SR-IOV." In pass-through mode, the hardware NIC is exposed directly as a PCI device to the virtual machine guest OS; you add a "PCI device" in the virtual machine configuration settings. A pass-through NIC can be used by only one virtual machine and cannot be shared with other virtual machines on the same host. In SR-IOV mode, if the NIC supports it, the NIC exposes virtualized PCI devices called Virtual Functions (VFs) to guest virtual machines as network adapters, so multiple virtual machines can use different VF NICs backed by the same hardware Physical Function (PF) NIC.
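
Before you start, it can help to confirm which NICs and drivers the host has. The following is a minimal check from the ESXi shell, assuming a recent ESXi release (on older builds the sriovnic namespace might not be available):

esxcli network nic list
esxcli network sriovnic list

The first command lists all physical NICs with their drivers; the second lists the NICs that currently have SR-IOV enabled (it is empty until SR-IOV is configured).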

Prerequisites

This document (particularly the screenshots) is based on a Dell R740 server with an Intel X520 NIC. For other servers and NICs, the settings might differ.

  • Get iDRAC access to your host server (for enabling the SR-IOV settings in the BIOS).

    You might need to use an older browser such as Internet Explorer because the iDRAC virtual console runs as a Java SE applet, which might not open in some modern browsers.

  • Get vCenter access to configure the host server and the virtual machines on it.

Procedures

To enable SR-IOV in BIOS

  1. Go to System BIOS Settings > Integrated Devices.

  2. Enable the SR-IOV Global Enable option.

    If the NIC has separate SR-IOV settings, you might also need to enable them in the BIOS. For example, for Intel 710 NICs, you need to enable SR-IOV for each NIC in the BIOS settings.

To enable SR-IOV in VMware Host NIC settings

  1. Go to the host's Configuration > Networking > Physical adapters, find the NIC that supports SR-IOV, and click Edit.

  2. In the SR-IOV section, set the Status to Enabled and set Number of virtual functions to a value greater than 0.

    Different NICs have different maximum VF counts; check the NIC specifications or BIOS settings for the maximum.

  3. After configuring the SR-IOV settings for all the NICs you want to use, reboot the host and then check the SR-IOV status of those NICs to make sure they are all available (see the example commands below).
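
As a hedged example of that check (vmnic4 is a placeholder for one of your SR-IOV NICs, and the sriovnic namespace might not exist on older ESXi builds), you can verify from the ESXi shell that the VFs were created after the reboot:

esxcli network sriovnic list
esxcli network sriovnic vf list -n vmnic4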

Now that the Host settings are established, configure the NSv virtual machine to add the SR-IOV interfaces.

If the vCenter GUI reports errors and does not work as expected, you can configure the SR-IOV VF number from the ESXi SSH command line instead:

  1. Use esxcli network nic list to locate the driver names of your NICs.

  2. Use esxcfg-module ixgben -s max_vfs=4,4,4,4. In this example, "ixgben" is the driver name and "4,4,4,4" configures a maximum of four VFs on each of the four ports.
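
To confirm that the option was applied, you can query the module's current options and parameters (a hedged example using the ixgben driver from the step above):

esxcfg-module -g ixgben
esxcli system module parameters list -m ixgben | grep max_vfs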

To add SR-IOV Network Adapters into your virtual machine

  1. Set the "VM compatibility" of your NSv virtual machine (right-click the virtual machine and see the "Compatibility" option). Note, this is the very "key" step to be able to add the SR-IOV network adapter in your virtual machine. See the "Prerequisites" in https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.networking.doc/GUID-898A3D66-9415-4854-8413-B40F2CB6FF8D.html.

  2. According to VMware's guide, the compatibility should be "ESXi 5.5 or later." It is suggested to use the latest version that the host supports, so select the default "ESXi 6.7 Update 2 and later" for this host.

  3. You might also want to add new virtual networks (port groups) to the vSwitches that use your physical adapters.

  4. Make sure you select the vSwitch of your SR-IOV physical adapter.

  5. To allow multiple SR-IOV VFs to be used by different virtual machines, set different VLAN IDs for the different networks. You can then select different networks for different virtual machines.
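
If you prefer the ESXi shell to the GUI, the VLAN ID of a standard vSwitch port group can also be set with esxcli. This is a sketch only; "SRIOV-Net-100" and VLAN 100 are hypothetical values, and distributed switches are configured differently:

esxcli network vswitch standard portgroup set --portgroup-name="SRIOV-Net-100" --vlan-id=100
esxcli network vswitch standard portgroup list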

To configure the virtual machine to add the SR-IOV Network Adapters

  1. Open Edit Settings for your NSv virtual machine, click ADD NEW DEVICE, and select Network Adapter.

  2. Edit the newly added network adapter: change the Adapter Type to SR-IOV passthrough, set the Physical Function to the physical NIC, and select your virtual network.

    You can add multiple SR-IOV adapters to the same virtual machine as long as the total number of NICs does not exceed the maximum number of physical interfaces supported by NSv. This completes the SR-IOV settings in VMware. You might also need to configure the physical switch connected to the physical function NIC port to add the VLANs so that different VFs can send traffic with different VLAN IDs.

  3. Enable the Reserve all guest memory (All locked) option in the virtual machine's Memory settings.

    Otherwise, the virtual machine with SR-IOV devices cannot boot up because of a memory error.
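
For reference, this GUI option typically corresponds to entries like the following in the virtual machine's .vmx file (a hedged sketch; the 8192 MB value is hypothetical and should match the VM's configured memory, and the GUI is the recommended way to set it):

sched.mem.min = "8192"
sched.mem.pin = "TRUE"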

Performance Enhancement Configurations

The previous configuration sections use the Intel 82599 (or X520) NIC as an example. Because of limitations of these NICs, RSS can only be configured on the PF driver side. After testing and investigation, neither the "ixgben" nor the "ixgbe" driver from VMware can fully enable the multi-queue RSS feature on the NSv virtual machine side, so all packets go to only one RX queue per NIC port. This can cause multicore contention on the RX side (which might show up as more CPU time in the ODP scheduler during performance profiling).

To achieve the best performance for NSv, make sure the RSS feature on the VF side inside NSv works as expected (multiple RX queues all receive packets evenly when multiple traffic flows run through NSv). Currently, only the i40e driver (Intel 7xx NICs) works as expected and delivers the best performance.

Replace the default VMware native driver (name ends with "n") with the original driver

Before going into the steps for enabling RSS on the PF driver side, enable the original Intel NIC drivers (such as i40e for Intel 7xx NICs) and disable the native VMware drivers (such as i40en for Intel 7xx NICs).

The main reason for replacing the driver is that the "native" driver does NOT work with DPDK's VF driver and always causes SonicOS to fail at the early stages of configuring VF drivers.

You can use the following commands to check which drivers are in use.

[root@ESXi-10D7D100D252:~] esxcfg-nics -l | grep i40e
vmnic0 0000:18:00.0 i40en Up 10000Mbps Full 24:6e:96:d1:24:7c 1500 Intel Corporation Ethernet Controller X710 for 10GbE SFP+
vmnic1 0000:18:00.1 i40en Up 10000Mbps Full 24:6e:96:d1:24:7e 1500 Intel Corporation Ethernet Controller X710 for 10GbE SFP+
vmnic4 0000:3b:00.0 i40en Up 10000Mbps Full f8:f2:1e:21:98:60 1500 Intel Corporation Ethernet Controller X710 for 10GbE SFP+
vmnic5 0000:3b:00.1 i40en Up 10000Mbps Full f8:f2:1e:21:98:62 1500 Intel Corporation Ethernet Controller X710 for 10GbE SFP+

If the third column shows "i40en," you need to replace it with "i40e."

Then check if the "i40e" drivers are available on your system. If not, you might need to search for and download them from VMware's website.

[root@ESXi-10D7D100D252:~] esxcli system module list | grep i40e

i40en_ens true true
i40e true true
i40en true true

As you can see, both the "i40e" and "i40en" drivers are present and are enabled and loaded by default. Now disable the "i40en" module and make sure the "i40e" driver module is enabled.

esxcli system module set -e=true -m=i40e

esxcli system module set -e=false -m=i40en

Reboot the host server to apply this change. After the system boots up, check with "esxcfg-nics -l | grep i40e" to verify that all the X710 NICs are using the "i40e" driver module instead of "i40en."
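
As a quick check after the reboot (both commands were already used above):

esxcli system module list | grep i40e
esxcfg-nics -l | grep i40e

The module list should now show i40en as disabled, and the third column of the esxcfg-nics output should show i40e for the X710 NICs.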

Set the RSS and max_vfs parameters for the i40e driver

There are several other parameters that can be set for the "i40e" driver. Use the following command to list them with brief descriptions.

[root@ESXi-10D7D100D252:~] esxcli system module parameters list --module i40e
Name Type Value Description
RSS array of int 4,4,4,4 Number of Receive-Side Scaling Descriptor Queues: 0 = disable/default, 1-4 = enable (number of cpus)
VMDQ array of int Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default = 8)
debug int Debug level (0=none,...,16=all)
heap_initial int Initial heap size allocated for the driver.
heap_max int Maximum attainable heap size for the driver.
max_vfs array of int 4,4,4,4 Number of Virtual Functions: 0 = disable (default), 1-128 = enable this many virtual machines
skb_mpool_initial int Driver's minimum private socket buffer memory pool size.
skb_mpool_max int Maximum attainable private socket buffer memory pool size for the driver.

Only two parameters need to be set to enable the SR-IOV and RSS features: "max_vfs" and "RSS." Because the current i40e driver supports a maximum of four RSS queues, and we set the maximum number of VFs to four as an example, you can use the following command to set both values.

esxcli system module parameters set --module i40e -p "RSS=4,4,4,4 max_vfs=4,4,4,4"

Note that we set four values for each parameter because there are four NIC ports in the "esxcfg-nics" output and we want to enable these features on all four.

After running this command, reboot the host again to apply these changes.
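
After the reboot, you can verify that the values were applied (a minimal check using the parameters list command already shown above):

esxcli system module parameters list -m i40e | grep RSS
esxcli system module parameters list -m i40e | grep max_vfs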

After the system boots up, you can change your NSv's NIC settings to set up the SR-IOV interfaces on the X710 physical NIC and run the performance tests.

Note on Test Methods

  • Always use multiple flows to test the performance

    Because of NSv's multicore processing design, always use multiple traffic flows when testing throughput.

    For these flows, make sure only one of the four tuple fields (srcIP/dstIP/srcPort/dstPort) changes from flow to flow; for example, keep srcIP, dstIP, and dstPort fixed and vary only srcPort. This ensures that the RSS hash and the connection tag hash distribute the flows evenly across different cores.

  • Disable the Use 4 Byte Signature feature in IXIA

    In the IxNetwork RFC2544 test settings, the Use 4 Byte Signature option can affect the result. This option should only be used when testing 64-byte packets; otherwise, disable it.
