VMware ESXi - Intel and Qlogic NIC throughput difference
ABSTRACT:
We are observing different network throughput on the Intel X710 NIC and the QLogic FastLinQ QL41xxx NIC.
ESXi supports NIC hardware offloading and queueing on 10Gb, 25Gb, 40Gb and 100Gb network
adapters. Multiple hardware queues per NIC interface (vmnic) and multiple software threads in the ESXi
VMkernel are depicted and documented in this paper, which may or may not be related to the root cause of the
observed problem. The key objective of this document is to clearly document and collect NIC
information on two specific network adapters and compare them to find the difference, or at least a
root-cause hypothesis for further troubleshooting.
Date: 2020-10-28
Author: David Pasek, VMware TAM, dpasek@vmware.com
RESOURCES:
Author | Resource Name | Resource Locator
Niels Hagoort, Frank Denneman | VMware vSphere 6.5 Host Resource Deep Dive (book) |
Niels Hagoort | Virtual Machine Tx threads explained | https://nielshagoort.com/2017/07/18/virtual-machine-tx-threads-explained/
Lenin Singaravelu, Haoqiang Zheng | VMworld 2013: Extreme Performance Series: Network Speed Ahead | https://www.slideshare.net/VMworld/vmworld-2013-extreme-performance-series-network-speed-ahead
Rishi Mehta | Leveraging NIC Technology to Improve Network Performance in VMware vSphere | https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/whitepaper/vmware-vsphere-pnics-performance-white-paper.pdf
VMware KB | RSS and multiqueue support in Linux driver for VMXNET3 | https://kb.vmware.com/s/article/2020567
Niels Hagoort, Frank Denneman | VMworld 2016: vSphere 6.x Host Resource Deep Dive | https://www.slideshare.net/VMworld/vmworld-2016-vsphere-6x-host-resource-deep-dive
Dmitry Mishchenko | Enable RSS in the ESXi host | https://austit.com/faq/323-enable-pnic-rss
Rishi Mehta | Network Improvements in vSphere 6 Boost Performance for 40G NICs | https://blogs.vmware.com/performance/2015/04/network-improvements-vsphere-6-boost-performance-40g-nics.html
Marvell | Marvell® Converged Network Adapters 41000 Series Adapters | https://www.marvell.com/content/dam/marvell/en/public-collateral/dell/dell-marvell-converged-network-adapter-41xxx-user-guide.pdf
VMware | Performance Best Practices for VMware vSphere 6.5 | https://learnvmware.online/wp-content/uploads/2018/02/perf_best_practices_vsphere65.pdf
VMware KB | Intermittent network issues during vSAN/vMotion traffic with qedentv driver (68147) | https://kb.vmware.com/s/article/68147?lang=en_US
All credits for detailed information about the ESXi network stack go to Niels Hagoort and his book
VMware vSphere 6.5 Host Resource Deep Dive.
Problem description
When we run VMs on an ESXi 6.7 host with the Intel(R) Ethernet Controller X710 for 10GbE SFP+ NIC, we see
a total network throughput of 300 MB/s (~50% transmit, ~50% receive), which is approximately ~1.5
Gbps Tx / ~1.5 Gbps Rx.
When we run VMs on an ESXi 6.7 host with the QLogic FastLinQ QL41xxx 1/10/25 GbE Ethernet Adapter,
we see a total network throughput of 100 MB/s (~50% transmit, ~50% receive), which is
approximately ~0.5 Gbps Tx / ~0.5 Gbps Rx.
The following screenshot, provided by the application owner from their application latency monitoring, shows the
spike at about 1:18 – 1:20 PM, when we were testing moving (via vMotion) a single VM to a host with the
QLogic network card where no other VM was running, and then migrating it back.
The latency increased from the typical 20 ms to almost 2000 ms, making the application unresponsive and
useless.
The performance problem is observed only in the OpenShift environment, where the majority of network
traffic goes through the internal OpenShift (K8s) load balancer. OpenShift Container Platform provides
methods for communicating from outside the cluster with services running in the cluster; this
method uses a load balancer. This means that network traffic goes to/from a single VM, a single vNIC and a
single MAC address. The architecture is visualized in the figure below.
Now, the question is why the same VM, running on identical server hardware except for the NIC, is able to easily
handle 300 MB/s (~3 Gbps) on the Intel NIC but only 100 MB/s (~1 Gbps) on the QLogic NIC.
NIC Capabilities Comparison - Intel versus QLogic
The table below compares the key network capabilities and differences between the ESXi hosts with the
Intel and the QLogic network adapter.
Capability | Intel X710 | QLogic QL41xxx | Difference
Driver type (INBOX/ASYNC) | ASYNC | ASYNC |
TSO/LSO | Enabled, supported | Enabled, supported |
LRO | Enabled | Enabled |
CSO | On | On |
# of VMDq (hw queues) | Up to 256 | Up to 208 |
Net Queue Count / vmnic | 8 Rx Netqueues, 8 Tx Netqueues | 20 Rx Netqueues, 20 Tx Netqueues | More software queues observed on QLogic, but they are created dynamically based on demand.
Net Filter Classes | MacOnly = true, VlanOnly = true, VlanMac = true, Vxlan = true, Geneve = true, GRE = false | MacOnly = true, VlanOnly = false, VlanMac = false, Vxlan = true, Geneve = true, GRE = false | QLogic does not support VlanOnly and VlanMac, but it should not be a big deal.
Net Queue load balancer settings * | RxQPair = SA, RxQNoFeature = ND, PreEmptibleQ = UA, RxQLatency = UA, RxDynamicLB = NA, DynamicQPool = SA, MacLearnLB = NA, RSS = UA, LRO = UA, GeneveOAM = UA | RxQPair = SA, RxQNoFeature = ND, PreEmptibleQ = UA, RxQLatency = UA, RxDynamicLB = NA, DynamicQPool = SA, MacLearnLB = NA, RSS = UA, LRO = SA, GeneveOAM = UA | LRO difference: unsupported on Intel, supported on QLogic.
Net Queue balancer state | Enabled | Enabled |
RX/TX ring buffer current parameters | RX: 1024, RX Mini: 0, RX Jumbo: 0, TX: 1024 | RX: 8192, RX Mini: 0, RX Jumbo: 0, TX: 8192 | QLogic has deeper buffers; it should be an advantage.
RX/TX ring buffer max values | RX: 4096, RX Mini: 0, RX Jumbo: 0, TX: 4096 | RX: 8192, RX Mini: 0, RX Jumbo: 0, TX: 8192 | QLogic supports deeper buffers; it should be an advantage.
RSS | NO (not available in driver version 1.9.5, see VMware HCL) | YES (RSS not set explicitly) | RSS difference: ESXi with the Intel NIC does not support RSS; ESXi with the QLogic NIC uses RSS.
# of NetPoll (RX) Threads ** | 16 | 32 | More software threads are observed in the VMkernel for QLogic; this should be an advantage.
# of NetPoll TX Threads / vmnic *** | 1 | TBD, most probably 1 |
Legend:
* S: supported, U: unsupported, N: not-applicable, A: allowed, D: disallowed
** NetPoll (RX) Threads are software threads between the VMNIC and the vSwitch handling receive traffic
*** NetPoll TX Threads are software threads between the VMNIC and the vSwitch handling transmit traffic
Questions and answers
Q: How does the NetQueue Net Filter Class MacOnly work? Is the load balancing based on source, destination
or both src-dst MAC addresses?
A: It is based on the destination MAC address.
Q: How to get and validate RSS information from the Intel driver module?
A: This information is publicly documented in the VMware HCL (VMware Compatibility Guide, aka VCG)
on the particular NIC record. Technically, you can list the driver module parameters with the command esxcli
system module parameters list -m <DRIVER-MODULE-NAME> and check whether there are any RSS-relevant
parameters.
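For illustration, with the Intel driver module used in this environment (i40en, as seen later in the esxcli software vib list output), such a check might look like the sketch below; the grep filter is only an assumption about how RSS-related parameters would be named:
esxcli system module parameters list -m i40en | grep -i rss
# no output suggests the driver exposes no RSS-related parameters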
Q: How to validate that RSS is really active in the NIC driver?
A: You can use the following ESXi shell command (for further details see section "How to validate RSS is enabled in
VMkernel"):
vsish -e cat /net/pNics/vmnic1/rxqueues/info
Suspicions and hypotheses
Based on the observed behavior, it seems that the culprit is the QLogic NIC driver/firmware.
Suspicion #1: The problem might or might not be caused by some known or unknown bug in the NIC
driver/firmware. Such a bug could be related to TSO, LRO, CSO, VMDq/NetQueue, etc.
Hypothesis #2: The Intel X710 with the driver in use (1.9.5) does not support RSS. However, QLogic supports
RSS, which is enabled by default. RSS can be leveraged for advanced queueing, multi-threading and
load balancing, potentially improving ingress and egress network throughput for a single VM across
NIC hardware queues and software threads within the ESXi VMkernel, because more CPU cycles
from physical CPU cores are used for network operations. However, NetQueue and NetQueue RSS bring
additional software complexity into the NIC driver, with some potential for a bug. See KB
https://kb.vmware.com/s/article/68147 for an example of such a bug.
Hypothesis #3: Another software bug in the QLogic driver/firmware affecting network throughput.
However, the problem can be something totally different.
The root cause cannot be fully proven, nor the problem successfully resolved, without a test environment
(2x ESXi hosts with Intel X710, 2x ESXi hosts with QLogic FastLinQ QL41xxx) and real performance
testing.
Theory - NIC offload capabilities, queueing and multitasking
In this section, we describe how the different technologies work. Specific information from the
environment about the Intel X710 and QLogic QL41xxx can be found in section "Diagnostic commands".
ESXi hardware inventory
Before any troubleshooting or design validation, it is a very good idea to collect hardware and ESXi
inventory details. The following commands can be used for inventory of the tested system.
esxcli system version get
esxcli hardware platform get
esxcli hardware cpu global get
smbiosDump
Web browser: https://192.168.4.121/cgi-bin/esxcfg-info.cgi
NIC Driver
Driver and firmware identification
Another important step is driver and firmware identification. The NIC details, firmware and driver
versions, and the driver module name can be identified with the command
esxcli network nic get -n vmnic1
where the output looks like this:
[root@esx21:~] esxcli network nic get -n vmnic0
Advertised Auto Negotiation: true
Advertised Link Modes: Auto, 1000BaseT/Full, 100BaseT/Half, 100BaseT/Full,
10BaseT/Half, 10BaseT/Full
Auto Negotiation: true
Cable Type: Twisted Pair
Current Message Level: 7
Driver Info:
Bus Info: 0000:01:00:0
Driver: ntg3
Firmware Version: bc 1.39 ncsi 1.5.12.0
Version: 4.1.3.2
Link Detected: true
Link Status: Up
Name: vmnic0
PHYAddress: 0
Pause Autonegotiate: true
Pause RX: true
Pause TX: true
Supported Ports: TP
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: true
Transceiver: internal
Virtual Address: 00:50:56:51:8a:31
Wakeon: Disabled
It may come in handy to have more information available about the pNIC and the driver used if the
HCL has a lot of listings. In that case, you also need the hardware ID properties to make sure you are
looking at the correct driver in the HCL:
o Vendor ID (VID)
o Device ID (DID)
o Sub-Vendor ID (SVID)
o Sub-Device ID (SDID)
To extract that information from your ESXi host, you can use the command
vmkchdev -l | grep vmnic
This will list additional hardware ID information about the vmnics, like:
[root@esx21:~] vmkchdev -l | grep vmnic
0000:01:00.0 14e4:165f 1028:1f5b vmkernel vmnic0
0000:01:00.1 14e4:165f 1028:1f5b vmkernel vmnic1
0000:02:00.0 14e4:165f 1028:1f5b vmkernel vmnic2
0000:02:00.1 14e4:165f 1028:1f5b vmkernel vmnic3
To understand which drivers are "Inbox" (aka native VMware) and which are "Async" (from partners like Intel or
Marvell/QLogic), you have to list the VIBs and check the vendor.
esxcli software vib list
[root@czchoes595:~] esxcli software vib list
Name Version Vendor Acceptance Level Install Date
----------------------------- ---------------------------------- --------- ---------------- ------------
lsi-mr3 7.706.08.00-1OEM.670.0.0.8169922 Avago VMwareCertified 2020-09-15
bnxtnet 214.0.230.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15
bnxtroce 214.0.187.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15
elx-esx-libelxima-8169922.so 12.0.1188.0-03 ELX VMwareCertified 2020-09-15
brcmfcoe 12.0.1278.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
elxiscsi 12.0.1188.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
elxnet 12.0.1216.4-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
lpfc 12.4.270.6-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
amsd 670.11.5.0-16.7535516 HPE PartnerSupported 2020-09-15
bootcfg 6.7.0.02-06.00.14.7535516 HPE PartnerSupported 2020-09-15
conrep 6.7.0.03-04.00.34.7535516 HPE PartnerSupported 2020-09-15
cru 670.6.7.10.14-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15
fc-enablement 670.3.50.16-7535516 HPE PartnerSupported 2020-09-15
hponcfg 6.7.0.5.5-0.18.7535516 HPE PartnerSupported 2020-09-15
ilo 670.10.2.0.2-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15
oem-build 670.U3.10.5.5-7535516 HPE PartnerSupported 2020-09-15
scsi-hpdsa 5.5.0.68-1OEM.550.0.0.1331820 HPE PartnerSupported 2020-09-15
smx-provider 670.03.16.00.3-7535516 HPE VMwareAccepted 2020-09-15
ssacli 4.17.6.0-6.7.0.7535516.hpe HPE PartnerSupported 2020-09-15
sut 6.7.0.2.5.0.0-83 HPE PartnerSupported 2020-09-15
testevent 6.7.0.02-01.00.12.7535516 HPE PartnerSupported 2020-09-15
i40en 1.9.5-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15
The command below can help you list the driver module parameters used for advanced settings.
esxcli system module parameters list -m <DRIVER-MODULE-NAME>
An equivalent command with a little more detail is
vmkload_mod -s <DRIVER-MODULE-NAME>
Historically, there is another command (esxcfg-module) to work with VMkernel modules. For
example, to show detailed module information, you can use
esxcfg-module -i ntg3
NIC Driver update
Run this command to install a driver from the VIB file:
esxcli software vib install -v /path/async-driver.vib
For more information about installing async drivers in ESXi 5.x and 6.x using esxcli and an async driver
VIB file, see VMware KB 2137854 at https://kb.vmware.com/s/article/2137854?lang=en_us
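A cautious update workflow might look like the sketch below; the VIB path is illustrative only, and the optional --dry-run pre-check should be verified with esxcli software vib install --help on your build:
esxcli software vib install --dry-run -v /path/async-driver.vib # validate the VIB without changing the system
esxcli software vib install -v /path/async-driver.vib
esxcli software vib list | grep -i i40en # confirm the new driver version (module name is an example)
A reboot is typically required before the new driver module is actually loaded.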
TSO (aka LSO) and LRO
TCP Segmentation Offload (TSO) is the equivalent of the TCP/IP Offload Engine (TOE) but modeled more
for virtual environments, whereas TOE is the actual NIC vendor hardware enhancement. TSO is also known
as Large Segment Offload (LSO).
To fully benefit from the performance enhancement, you must enable TSO along the complete data
path on an ESXi host. If TSO is supported on the NIC it is enabled by default. The same goes for TSO
in the VMkernel layer and for the VMXNET3 VM adapter but not per se for the TSO configuration
within the guest OS.
Large Receive Offload (LRO) can be seen as the exact opposite of TSO/LSO. It is a technique
that aggregates multiple inbound network packets from a single stream into larger packets and
transfers the resulting larger, but fewer, packets to the TCP stack of the host or the VM guest OS.
This results in less CPU overhead because the CPU has fewer packets to process
compared to LRO being disabled.
The important trade-off with LRO is that it lowers CPU overhead and potentially improves network
throughput, but adds latency to the network stack. The potential higher latency introduced by LRO is
a result of the time spent aggregating smaller TCP segments into a larger segment.
LRO in the ESXi host
By default, a host is configured to use hardware LRO if its NICs support the feature.
To check the LRO configuration for the default TCP/IP stack on the ESXi host, execute the following
command to display the current LRO configuration values:
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
You can check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
The LRO features are functional for the guest OS when the VMXNET3 virtual adapter is used. To
check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO, software
LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
You can disable LRO for all VMkernel adapters on a host with the command
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
and enable LRO again with
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1
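To review all LRO-related advanced settings in one step, a simple filter over the full list can be used (a convenience sketch; the grep pattern just matches any line mentioning LRO):
esxcli system settings advanced list | grep -i lro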
LRO in VMXNET3
The Large Receive Offload (LRO) feature of VMXNET3 helps deliver high throughput with lower CPU
utilization by aggregating multiple received TCP segments into a
larger TCP segment before delivering it up to the guest TCP stack. However, for latency-sensitive
applications that rely on TCP, the time spent aggregating smaller TCP segments into a larger one
adds latency. It can also affect TCP algorithms like delayed ACK, which now cause the TCP stack to
delay an ACK until the two larger TCP segments are received, also adding to the end-to-end latency of
the application. Therefore, you should also consider disabling LRO if your latency-sensitive
application relies on TCP.
To do so for Linux guests, you need to reload the VMXNET3 driver in the guest:
shell# modprobe -r vmxnet3
Add the following line in /etc/modprobe.conf (Linux version dependent):
options vmxnet3 disable_lro=1
Then reload the driver using:
shell# modprobe vmxnet3
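On newer Linux distributions /etc/modprobe.conf may not exist; the same option can be placed in a file under /etc/modprobe.d/ instead, or LRO can often be toggled at runtime with ethtool. This is a sketch; the file name is an example, and whether the vmxnet3 driver honors the runtime toggle depends on the driver version:
# persistent option on modprobe.d-based distributions
echo "options vmxnet3 disable_lro=1" > /etc/modprobe.d/vmxnet3.conf
# runtime toggle, if supported by the installed vmxnet3 driver
ethtool -K ens192 lro off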
CSO (Checksum Offload)
Checksum Offload (CSO), or TCP Checksum Offloading (TCO), eliminates the host overhead introduced
by checksumming TCP packets. With checksum offloading enabled, the checksum calculations are
offloaded to the NIC chipset.
The following command provides information about the checksum offload settings on your ESXi
host:
esxcli network nic cso get
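If checksum offload needs to be disabled for troubleshooting, there is a corresponding set sub-command; treat the flag names below as an assumption and verify them with esxcli network nic cso set --help on your ESXi build:
esxcli network nic cso set -n vmnic1 -e false # -e/--enable and -n/--nic-name assumed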
VMDq / NetQueue
VMDq (Virtual Machine Device Queues) is the hardware feature; NetQueue is the feature baked
into vSphere. Intel's Virtual Machine Device Queues (VMDq) is the hardware component used by
VMware's NetQueue software feature since ESX 3.5. VMDq is a silicon-level technology that can
offload the network I/O management burden from ESXi.
Dynamic NetQueue was introduced with the release of ESXi 5.5. Multiple queues and sorting
intelligence in the chipset support enhanced network traffic flow in the virtual environment and, by
doing so, free processor cycles for application workloads. This improves the efficiency of data
transactions toward the destined VM and increases overall system performance.
The VMDq feature, in collaboration with VMware Dynamic NetQueue, allows network packets to be
distributed over different queues. Each queue gets its own ESXi thread for packet processing. One
ESXi thread represents a CPU core. When data packets are received on the network adapter, a layer-
2 classifier in the VMDq-enabled network controller sorts them and determines which VM each packet is
destined for. After sorting, the classifier places the packet in the receive queue assigned to that
VM. ESXi is then only responsible for transferring the packets to the respective VM rather than doing
the heavy lifting of sorting the packets on the incoming network streams. That is how VMDq and
NetQueue manage to deliver efficiency in CPU utilization on your ESXi host.
NetQueue is enabled by default when supported by the underlying network adapter.
Net Filter Classes
The following esxcli command gives information about the filters supported per vmnic and used by
NetQueue.
esxcli network nic queue filterclass list
Net Queue Support
Get the NetQueue support status in the VMkernel:
esxcli system settings kernel list | grep netNetqueueEnabled
[root@esx21:~] esxcli system settings kernel list | grep netNetqueueEnabled
netNetqueueEnabled Bool TRUE TRUE TRUE Enable/Disable NetQueue support.
The output contains Name, Type, Configured, Runtime, Default, Description.
Net Queue Count
The following command is used to get the NetQueue count on the NICs:
esxcli network nic queue count get
This will output the current queues for all vmnics in your ESXi host.
List the load balancer settings
List the load balancer settings of all the installed and loaded physical NICs. (S:supported,
U:unsupported, N:not-applicable, A:allowed, D:disallowed).
esxcli network nic queue loadbalancer list
Details of netqueue balancer plugins
Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the
system
esxcli network nic queue loadbalancer plugin list
Net Queue balancer state
Netqueue balancer state of all physical NICs currently installed and loaded on the system
esxcli network nic queue loadbalancer state list
RX/TX ring buffer current parameters
The ring is the representation of the device RX/TX queue. It is used for data transfer between the
kernel stack and the device.
Get the current RX/TX ring buffer parameters of a NIC:
esxcli network nic ring current get -n vmnic0
RX/TX ring buffer parameters max values
The ring is the representation of the device RX/TX queue. It is used for data transfer between the
kernel stack and the device.
Get preset maximums for RX/TX ring buffer parameters of a NIC.
esxcli network nic ring preset get -n vmnic0
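If deeper buffers are desired on the Intel NIC (up to its preset maximum of 4096 shown in the comparison table), the ring sizes can be changed with the set sub-command. This is a sketch to be validated in a maintenance window, because resizing the ring may briefly disrupt traffic on that uplink:
esxcli network nic ring current set -n vmnic0 -r 4096 -t 4096 # -r/--rx and -t/--tx set the RX/TX ring sizes
esxcli network nic ring current get -n vmnic0 # verify the new values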
SG (Scatter and Gather)
Scatter and Gather (Vectored I/O) is a concept that was primarily used with hard disks; it enhances
large I/O request performance if supported by the hardware. The best explanation I have found
about Scatter and Gather is available at https://stackoverflow.com/questions/9770125/zero-copy-with-and-without-scatter-gather-operations.
In general, it is about minimizing CPU cycles for I/O
traffic.
esxcli network nic sg get
List software simulation settings
List software simulation settings of physical NICs currently installed and loaded on the system.
esxcli network nic software list
RSS – 5-tuple hash queue load balancing
What is RSS?
Receive Side Scaling (RSS) has the same basic functionality that (Dynamic) NetQueue provides: it
load-balances the processing of received network packets. RSS resolves the single-thread
bottleneck by allowing the received network packets from a pNIC to be processed across multiple
CPU cores.
The big difference between VMDq and RSS is that RSS uses more sophisticated filters to balance
network I/O load over multiple threads. Depending on the pNIC and its RSS support, RSS can use up
to a 5-tuple hash to determine the queues to create and to distribute network I/Os over.
A 5-tuple hash consists of the following data:
o Source IP
o Destination IP
o Source port
o Destination port
o Protocol
Why use RSS?
Receive Side Scaling (RSS) is a feature that allows network packets from a single NIC to be scheduled
in parallel on multiple CPUs by creating multiple hardware queues. While this might increase
network throughput for a NIC that receives packets at a high rate, it can also increase CPU overhead.
When using certain 10Gb/s or 40Gb/s Ethernet physical NICs, ESXi allows the RSS capabilities of the
physical NICs to be used by the virtual NICs. This can be especially useful in environments where a
single MAC address gets large amounts of network traffic (for example VXLAN or network-intensive
virtual appliances). Because of the potential for increased CPU overhead, this feature is disabled by
default.
RSS Design Considerations
Does it make sense to enable RSS?
Usually not, because a single-VM throughput between 3 and 6 Gbps is typically good enough, even if you
use multiple 25 Gb or even 100 Gb NICs per ESXi host.
What is the impact of enabling RSS end to end for all VMs in ESXi host?
Well, it would unlock the bandwidth for all VMs, but at the cost of additional physical CPU cycles and
software threads in the VMkernel. It all depends on your particular requirements, but more often you
should enable end-to-end RSS only for the particular VMs where huge network throughput is required, such
as
• Virtual machines used for NFV (Network Function Virtualization), such as
o NSX-T Edge Nodes, where VMware provides NSX network functions like North-
South routing, load balancing, NAT, etc.
o Virtualized load balancers
o Other vendors' virtual appliances for NFV
• Virtual machines used for software-defined storage, where huge data transfers over the
network are required
• Container hosts
• Virtualized big data engines
• Etc.
RSS Implementation details
The pNIC chipset matters because of the way the RSS feature scales its queues. Depending on the
vendor and model, there are differences in how RSS is supported and how queues are scaled.
Whether RSS is enabled in the pNIC driver and VMkernel depends on the specific NIC driver. RSS needs to be enabled
in the driver module, and it depends on the driver module whether RSS parameters have to be set
explicitly or the driver enables RSS by default.
The problem with the driver module settings is that it is not always clear what values to use in the
configuration. The description of the driver module parameters differs a lot among the various driver
modules. That would not be a problem if the value of choice were either zero or one, but it is when you are
expected to configure a certain number of queues. The RSS driver module settings are a perfect
example of this.
The driver details are given when executing the command
esxcli system module parameters list -m ixgbe | grep "RSS"
The number of queues configured determines how many CPU cycles can be consumed
by incoming network traffic; therefore, RSS enablement should be decided based on specific
requirements. If a single VM (vNIC) needs to receive large network throughput requiring more CPU
cycles, RSS should be enabled on the ESXi host. If a single thread is enough even for the largest required
throughput, RSS can be disabled. Now the obvious question is how to check whether RSS is supported and
enabled.
How to validate RSS is supported
There are two ways to validate that RSS is supported by a particular NIC driver.
1. Check the NIC adapter on the VMware HCL (http://vmware.com/go/hcl) and look at the particular driver
version. See the two examples below.
Intel i40en
Marvell - QLogic qedentv
2. List the driver module parameters and check whether there are any RSS-related parameters.
The Intel i40en driver does not list anything RSS-related, probably because RSS is not supported.
I have been assured by VMware Engineering that the i40en 1.9.5 driver does not
support RSS.
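The same check for the QLogic driver in this environment (module name qedentv, as shown in the HCL example above; the grep pattern is just a convenience) would be:
esxcli system module parameters list -m qedentv | grep -i -E "rss|num_queues"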
How to validate RSS is enabled in VMkernel
If you have a running system, you can check the status of RSS with the following command from the ESXi shell:
vsish -e cat /net/pNics/vmnic1/rxqueues/info
In the figure below, you can see the command output for a 1Gb Intel NIC that does not support NetQueue;
therefore RSS is logically not supported either, because it would not make any sense.
Figure 1 Command to validate if RSS is enabled in VMkernel
It seems that some drivers enable RSS by default and others do not.
How to explicitly enable RSS in the NIC driver
The procedure to enable RSS is always dependent on the specific driver, because specific parameters
have to be passed to the driver module. The information on how to enable RSS for a particular driver should
be found in the specific NIC vendor documentation.
Example for the Intel ixgbe driver:
vmkload_mod ixgbe RSS="4"
To enable the feature on multiple Intel 82599EB SFI/SFP+ 10Gb/s NICs, include another comma-
separated 4 for each additional NIC (for example, to enable the feature on three such NICs, you'd
run vmkload_mod ixgbe RSS="4,4,4").
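Note that options passed via vmkload_mod do not persist across reboots. A persistent alternative, sketched here with the same esxcfg-module syntax used later in this document for the qedentv driver, is:
esxcfg-module -s "RSS=4,4,4" ixgbe # one comma-separated value per ixgbe NIC, as in the example above
esxcfg-module -g ixgbe # verify the configured options
A reboot (or driver reload) is then required for the options to take effect.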
Example for the Mellanox nmlx4 driver:
For Mellanox adapters, the RSS feature can be turned on by reloading the driver with
num_rings_per_rss_queue=4.
vmkload_mod nmlx4_en num_rings_per_rss_queue=4
NOTE: After loading the driver with vmkload_mod, you should make vmkdevmgr rediscover the NICs
with the following command:
kill -HUP <ID> … where <ID> is the process ID of the vmkdevmgr process
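A minimal sketch of that step from the ESXi shell, assuming the busybox ps utility (the exact column layout differs between ESXi versions):
ps | grep vmkdevmgr # note the process/world ID in the first column
kill -HUP <ID> # replace <ID> with the ID found above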
RSS in Virtual Machine settings
Additional advanced settings must be added into the .vmx file or the advanced configuration of the particular VM to enable
multi-queue support. These settings are
• ethernetX.pnicFeatures
• ethernetX.ctxPerDev
• ethernetX.udpRSS
Let's explain the purpose of each advanced setting above.
To enable multi-queue (NetQueue RSS) on a particular VM vNIC:
ethernetX.pnicFeatures = "4"
To allow multiple (2 to 8) TX threads for a particular VM vNIC:
ethernetX.ctxPerDev = "3"
To boost RSS performance, the vSphere 6.7 release includes vmxnet3 version 4, which supports
some new features, including Receive Side Scaling (RSS) for UDP, RSS for ESP, and offload for
Geneve/VXLAN. Performance tests reveal significant improvement in throughput.
ethernetX.udpRSS = "1"
These improvements are beneficial for HPC and financial services workloads that are sensitive to
networking latency and bandwidth.
The vSphere 6.7 release includes vmxnet3 version 4, which supports the following new features:
• RSS for UDP - Receive side scaling (RSS) for the User Datagram Protocol (UDP) is now available in
the vmxnet3 v4 driver. Performance testing of this feature showed a 28% improvement in
receive packets per second. The test used 64-byte packets and four receive queues.
• RSS for ESP – RSS for encapsulating security payloads (ESP) is now available in the vmxnet3
v4 driver. Performance testing of this feature showed a 146% improvement in receive
packets per second during a test that used IPSec and four receive queues.
• Offload for Geneve/VXLAN – Generic network virtualization encapsulation (Geneve) and
VXLAN offload is now available in the vmxnet3 v4 driver. Performance testing of this feature
showed a 415% improvement in throughput in a test that used a packet size of 64 K with
eight flows.
Source:
https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance
/whats-new-vsphere67-perf.pdf
RSS in the Guest OS (vRSS)
To fully make use of the RSS mechanism, an end-to-end implementation is recommended. That
means you will need to enable and configure RSS in the guest OS in addition to the VMkernel driver
module. Multi-queuing is enabled by default in a Linux guest OS when the latest VMware Tools
version (version 1.0.24.0 or later) is installed or when the Linux VMXNET3 driver version 1.0.16.0-k
or later is used. Prior to these versions, you were required to manually enable multi-queue or RSS
support. Be sure to check the driver and version used to verify whether your Linux OS has RSS support
enabled by default. The driver version within the Linux OS can be checked with the following command:
# modinfo vmxnet3
You can determine the number of Tx and Rx queues allocated for the VMXNET3 driver by running
the ethtool console command in the Linux guest operating system:
ethtool -S ens192
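An alternative, driver-independent way to see the allocated queues inside a Linux guest is the standard sysfs view (the interface name ens192 is reused from the example above):
ls /sys/class/net/ens192/queues/
# rx-0 rx-1 ... tx-0 tx-1 ... - one directory per RX/TX queue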
How to disable Netqueue RSS for particular driver
Disabling NetQueue RSS is also driver specific. It can be done using driver module parameters as
shown below. The example assumes there are four qedentv (QLogic NIC) instances.
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = ''
[root@host:~] esxcfg-module -s "num_queues=0,0,0,0 RSS=0,0,0,0" qedentv
[root@host:~] esxcfg-module -g qedentv
qedentv enabled = 1 options = 'num_queues=0,0,0,0 RSS=0,0,0,0'
Reboot the system for the settings to take effect; they will apply to all NICs managed by the qedentv
driver.
Source: https://kb.vmware.com/s/article/68147
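To revert the change later, the custom driver options can be cleared again; a sketch, and as above a reboot is required afterwards:
esxcfg-module -s "" qedentv # clear all custom options for the qedentv module
esxcfg-module -g qedentv # should show options = ''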
How to disable RSS load balancing plugin in VMkernel
A particular NetQueue load balancing plugin can be completely disabled in the VMkernel. Here is an example
of how to disable RSS load balancing:
esxcli network nic queue loadbalancer set --rsslb=false -n vmnicX
You can do this, for example, when troubleshooting a software issue with the load-based NetQueue
balancer module, as described in VMware KB https://kb.vmware.com/s/article/58874
How to disable Netqueue in VMkernel
NetQueue can be completely disabled in the VMkernel:
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"
It should be noted that disabling netqueue will result in some performance impact. The magnitude
of the impact will depend on individual workloads and should be characterized before deploying the
workaround in production.
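To verify the setting, and to revert it later, the same kernel-settings commands can be reused; note that a change of this kernel setting typically takes effect only after a reboot:
esxcli system settings kernel list | grep netNetqueueEnabled
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="TRUE"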
VMKernel multi-threading
System performance is not only about hardware capabilities but also about software
capabilities. To achieve higher throughput, parallel operations are required, which is achieved in software
by multi-threading.
RX Threads
Each pNIC in an ESXi host is equipped with one RX (NetPoll) thread by default. ESXi VMkernel threads
and queues without VMDq and NetQueue are depicted in the figure below.
Figure 2 ESXi VMkernel threads and queues without VMDq and NetQueue
However, when the pNIC supports the NetQueue or RSS feature, the NetPoll threads are scaled out.
Dynamic NetQueue starts some number of RX (NetPoll) threads and shares them across all VMD
queues until more CPU resources are required. If more resources are required, additional RX
NetQueue (NetPoll) threads are automatically created. The reason for such behavior is to avoid
resource waste (CPU, RAM) unless it is necessary.
In the figure below, you see the ESXi VMkernel with NetQueue and RSS enabled. When RSS is enabled end-
to-end, multiple RX threads are leveraged per VMD/RSS queue. Such multi-threading can boost
receive network throughput.
Figure 3 VMkernel network threads on NICs supporting VMDq and RSS is enabled
RSS is all about received network traffic. However, the transmit part of the story should be considered as
well. This is where TX threads come into play.
TX Threads
As you can see in "Figure 3 VMkernel network threads on NICs supporting VMDq", the ESXi
VMkernel uses only one transmit (TX) thread per VM by default. Advanced VM settings have to be
used to scale up TX threads and leverage more physical CPU cores to generate higher transmit
network throughput from a particular VM.
Figure 4 VMkernel network threads on NICs supporting VMDq /w RSS and one VM (VM2) configured for TX Thread scaling
Without the advanced VM settings for TX thread scaling, you can see one transmit (TX) thread per VM
vNIC (sys - NetWorld), even if the VM has, for example, 24 vCPUs, as depicted in the output below.
{"name": "vmname-example-01.eth1", "switch": "DvsPortset-0", "id": 50331663, "mac": "00:50:56:9a:41:6e", "rxmode": 0,
"tunemode": 0, "uplink": "false", "ens": false,
"txpps": 7872, "txmbps": 30.7, "txsize": 488, "txeps": 0.00, "rxpps": 7161, "rxmbps": 23.2, "rxsize": 404, "rxeps": 0.00,
"vnic": { "type": "vmxnet3", "ring1sz": 1024, "ring2sz": 256, "tsopct": 0.1, "tsotputpct": 1.4, "txucastpct": 100.0, "txeps":
0.0,
"lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 100.0, "rxeps": 0.0,
"maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0,
"txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0},
"rxqueue": { "count": 8},
"txqueue": { "count": 8},
"intr": { "count": 9 },
"sys": [ "2106431" ],
"vcpu": [ "2106668", "2106670", "2106671", "2106672", "2106673", "2106674", "2106675", "2106676", "2106677",
"2106678", "2106679", "2106680", "2106681", "2106682", "2106683", "2106684", "2106685", "2106686", "2106687",
"2106688", "2106689", "2106690", "2106691", "2106692" ]},
Multi-threading configuration procedure
To take full advantage of the network adapter's RSS capabilities, you must enable Receive Side
Scaling (RSS) end-to-end:
• in the ESXi host (NIC driver setting), to balance the CPU load across multiple receive (RX)
threads leveraging multiple cores
• in the particular VM where RSS is required
You may ask why such a two-step configuration is required. The reason for such explicit
configuration is to avoid unwanted system overhead. Usually, RSS is not required for all VMs, but
only for demanding VMs.
Here is the procedure to enable RSS multi-threading.
Prerequisite: RSS must be enabled on the ESXi NIC driver module. For details, see section "How to validate
RSS is enabled in VMkernel".
1. Enable RSS in the NIC driver, as described in section "How to explicitly enable RSS in the NIC driver".
2. Additional advanced settings must be added into the .vmx file or the advanced configuration of the particular VM:
# Enable multi-queue (NetQueue RSS)
ethernetX.pnicFeatures = "4"
# Allow multiple TX threads for a single VM vNIC
ethernetX.ctxPerDev = "3"
Acceptable values for ethernetX.ctxPerDev are 1, 2, or 3, where:
• setting it to 1 results in 1 CPU thread per vNIC device
• 2 is the default value, which means 1 transmit CPU thread per VM
• setting it to 3 results in 2 to 8 CPU threads per vNIC device
The default ethernetX.ctxPerDev value is 2, which means only one transmit (TX) thread for the whole VM.
For a VM with RSS, we want to use the value 3, because even though we have a single vNIC object in the virtual
machine from the vSphere administrator perspective, we want more TX threads (up to 8). The real
number of TX threads depends on the link speed; for a 100Gb link, the number of TX queues is 8.
VMkernel software threads per VMNIC can be identified with the following two commands:
net-stats -A -t vW
vsish
/> cat /world/<WORLD-ID-1-IN-VMNIC>/name
/> cat /world/<WORLD-ID-2-IN-VMNIC>/name
/> cat /world/<WORLD-ID-3-IN-VMNIC>/name
…
/> cat /world/<WORLD-ID-n-IN-VMNIC>/name
The net-stats -A -t vW command displays the VMkernel network threads.
Below is a sample net-stats -A -t vW output from an ESXi host, for educational purposes.
To check whether multiple software threads are used for vmnic0, look at the "sys" information highlighted below.
[root@esx21:~] net-stats -A -t vW
{ "sysinfo": { "hostname": "esx21.home.uw.cz" },
"stats": [
{ "time": 1602715000, "interval": 10, "iteration": 0,
"ports":[
{"name": "vmnic0", "switch": "DvsPortset-1", "id": 33554434, "mac": "90:b1:1c:13:fc:14", "rxmode": 0, "tunemode": 2, "uplink": "true",
"ens": false,
"txpps": 79, "txmbps": 0.9, "txsize": 1446, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.2, "rxsize": 1881, "rxeps": 0.00,
"vmnic": {"devname": "vmnic0.ntg3",
"txpps": 79, "txmbps": 0.9, "txsize": 1500, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.3, "rxsize": 1959, "rxeps": 0.00 },
"sys": [ "2097643", "2097863", "2097864" ]},
{"name": "vmnic1", "switch": "DvsPortset-1", "id": 33554436, "mac": "90:b1:1c:13:fc:15", "rxmode": 0, "tunemode": 2, "uplink": "true",
"ens": false,
"txpps": 35, "txmbps": 1.5, "txsize": 5318, "txeps": 0.00, "rxpps": 54, "rxmbps": 0.2, "rxsize": 409, "rxeps": 0.00,
"vmnic": {"devname": "vmnic1.ntg3",
"txpps": 35, "txmbps": 1.6, "txsize": 5955, "txeps": 0.00, "rxpps": 54, "rxmbps": 0.2, "rxsize": 479, "rxeps": 0.00 },
"sys": [ "2097644", "2097867", "2097868" ]},
{"name": "vmk0", "switch": "DvsPortset-1", "id": 33554438, "mac": "90:b1:1c:13:fc:14", "rxmode": 0, "tunemode": 0, "uplink": "false",
"ens": false,
"txpps": 36, "txmbps": 1.5, "txsize": 5130, "txeps": 0.00, "rxpps": 46, "rxmbps": 0.2, "rxsize": 475, "rxeps": 0.00,
"ipv4": { "txpps": 119, "txeps": 0, "txfrags": 0, "rxpps": 118, "rxdeliv": 118, "rxeps": 0, "rxreass": 0 },
"ipv6": { "txpps": 0, "txeps": 0, "txfrags": 0, "rxpps": 0, "rxdeliv": 0, "rxeps": 0, "rxreass": 0 },
"tcptx": { "pps": 108, "datapct": 73.4, "mbps": 2.4, "size": 2753, "delackpct": 30.8, "rexmit": 0.0, "sackrexmit": 0.0, "winprb": 0.0,
"winupd": 0.8 },
"tcprx": { "pps": 115, "datapct": 70.6, "mbps": 1.4, "size": 1467, "dups": 0.0, "oo": 0.0, "winprb": 0.0, "winupd": 1.0, "othereps": 0.0},
"udp": {"txpps": 10, "rxpps": 3, "rxsockeps": 0.0, "rxothereps": 0.0},
"sys": [ "2097832", "2097833" ]},
{"name": "vmk1", "switch": "DvsPortset-1", "id": 33554439, "mac": "00:50:56:6e:d1:d0", "rxmode": 0, "tunemode": 0, "uplink": "false",
"ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"sys": [ "2097834", "2097835" ]},
{"name": "vmk2", "switch": "DvsPortset-1", "id": 33554440, "mac": "00:50:56:60:e9:3a", "rxmode": 0, "tunemode": 0, "uplink": "false",
"ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"sys": [ "2097836", "2097837" ]},
{"name": "vmk3", "switch": "DvsPortset-1", "id": 33554441, "mac": "00:50:56:65:4c:04", "rxmode": 0, "tunemode": 0, "uplink": "false",
"ens": false,
"txpps": 1, "txmbps": 0.0, "txsize": 678, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 114, "rxeps": 0.00,
"sys": [ "2097838", "2097839" ]},
{"name": "vmk4", "switch": "DvsPortset-1", "id": 33554442, "mac": "00:50:56:6e:e8:f2", "rxmode": 0, "tunemode": 0, "uplink": "false",
"ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"sys": [ "2097840", "2097841" ]},
{"name": "vmk5", "switch": "DvsPortset-1", "id": 33554443, "mac": "00:50:56:6c:28:c7", "rxmode": 0, "tunemode": 0, "uplink": "false",
"ens": false,
"txpps": 71, "txmbps": 0.9, "txsize": 1564, "txeps": 0.00, "rxpps": 64, "rxmbps": 1.2, "rxsize": 2278, "rxeps": 0.00,
"sys": [ "2097842", "2097843" ]},
{"name": "openmanage_enterprise.x86_64-0.0.1.eth0", "switch": "DvsPortset-1", "id": 33554444, "mac": "00:50:56:92:d9:c8", "rxmode":
0, "tunemode": 0, "uplink": "false", "ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 3, "rxmbps": 0.0, "rxsize": 60, "rxeps": 0.00,
"vnic": { "type": "vmxnet3", "ring1sz": 1024, "ring2sz": 256, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0,
"lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0,
"maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0,
"txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0},
"rxqueue": { "count": 4},
"txqueue": { "count": 4},
"intr": { "count": 5 },
"sys": [ "2101072" ],
"vcpu": [ "2101082", "2101084", "2101085", "2101086" ]},
{"name": "flb-mgr.eth0", "switch": "DvsPortset-1", "id": 33554445, "mac": "00:50:56:92:6f:3e", "rxmode": 0, "tunemode": 0, "uplink":
"false", "ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"vnic": { "type": "vmxnet3", "ring1sz": 256, "ring2sz": 256, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0,
"lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0,
"maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0,
"txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0},
"rxqueue": { "count": 1},
"txqueue": { "count": 1},
"intr": { "count": 2 },
"sys": [ "2101254" ],
"vcpu": [ "2101271", "2101273" ]},
{"name": "flb-node-001.eth0", "switch": "DvsPortset-1", "id": 33554446, "mac": "00:50:56:92:55:fa", "rxmode": 0, "tunemode": 0,
"uplink": "false", "ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"vnic": { "type": "vmxnet3", "ring1sz": 256, "ring2sz": 256, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0,
"lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0,
"maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0,
"txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0},
"rxqueue": { "count": 1},
"txqueue": { "count": 1},
"intr": { "count": 2 },
"sys": [ "2101257" ],
"vcpu": [ "2101274", "2101276" ]},
{"name": "W2K8R2-diag02.eth0", "switch": "DvsPortset-1", "id": 33554447, "mac": "00:50:56:92:f3:88", "rxmode": 0, "tunemode": 0,
"uplink": "false", "ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"sys": [ "2101283" ],
"vcpu": [ "2101290", "2101292" ]},
{"name": "Photon-01-ch01.home.uw.cz.eth0", "switch": "DvsPortset-1", "id": 33554448, "mac": "00:50:56:a8:49:0e", "rxmode": 0,
"tunemode": 0, "uplink": "false", "ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 3, "rxmbps": 0.0, "rxsize": 68, "rxeps": 0.00,
"vnic": { "type": "vmxnet3", "ring1sz": 256, "ring2sz": 128, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0,
"lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0,
"maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0,
"txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0},
"rxqueue": { "count": 1},
"txqueue": { "count": 1},
"intr": { "count": 2 },
"sys": [ "2101597" ],
"vcpu": [ "2101746" ]},
{"name": "FreeBSD-01-is02.home.uw.cz.eth0", "switch": "DvsPortset-1", "id": 33554449, "mac": "00:0c:29:fd:04:87", "rxmode": 0,
"tunemode": 0, "uplink": "false", "ens": false,
"txpps": 2, "txmbps": 0.0, "txsize": 124, "txeps": 0.00, "rxpps": 5, "rxmbps": 0.0, "rxsize": 74, "rxeps": 0.00,
"sys": [ "2101595" ],
"vcpu": [ "2101748" ]},
{"name": "FreeBSD-01-is02.home.uw.cz.eth1", "switch": "DvsPortset-1", "id": 33554450, "mac": "00:50:56:92:ad:99", "rxmode": 0,
"tunemode": 0, "uplink": "false", "ens": false,
"txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00,
"sys": [ "2101595" ],
"vcpu": [ "2101748" ]},
{"name": "vc01.home.uw.cz.eth0", "switch": "DvsPortset-1", "id": 33554453, "mac": "00:0c:29:8f:9e:1e", "rxmode": 0, "tunemode": 0,
"uplink": "false", "ens": false,
"txpps": 11, "txmbps": 0.0, "txsize": 267, "txeps": 0.00, "rxpps": 12, "rxmbps": 0.0, "rxsize": 505, "rxeps": 0.00,
"vnic": { "type": "vmxnet3", "ring1sz": 4096, "ring2sz": 4096, "tsopct": 0.9, "tsotputpct": 13.3, "txucastpct": 99.1, "txeps": 0.0,
"lropct": 9.9, "lrotputpct": 72.7, "rxucastpct": 71.9, "rxeps": 0.0,
"maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0,
"txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0},
"rxqueue": { "count": 4},
"txqueue": { "count": 4},
"intr": { "count": 5 },
"sys": [ "2104032" ],
"vcpu": [ "2104245", "2104248", "2104249", "2104250" ]}
],
"storage": {
},
"vcpus": {
"2101082": {"id": 2101082, "used": 2.29, "ready": 0.14, "cstp": 0.00, "name": "vmx-vcpu-0:openmanage_enterprise.x86_64-0.0.1",
"sys": 0.00, "sysoverlap": 0.02, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2101084": {"id": 2101084, "used": 1.43, "ready": 0.06, "cstp": 0.00, "name": "vmx-vcpu-1:openmanage_enterprise.x86_64-0.0.1",
"sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2101085": {"id": 2101085, "used": 2.06, "ready": 0.09, "cstp": 0.00, "name": "vmx-vcpu-2:openmanage_enterprise.x86_64-0.0.1",
"sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2101086": {"id": 2101086, "used": 1.54, "ready": 0.07, "cstp": 0.00, "name": "vmx-vcpu-3:openmanage_enterprise.x86_64-0.0.1",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2101271": {"id": 2101271, "used": 0.24, "ready": 0.04, "cstp": 0.00, "name": "vmx-vcpu-0:flb-mgr",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101273": {"id": 2101273, "used": 0.11, "ready": 0.02, "cstp": 0.00, "name": "vmx-vcpu-1:flb-mgr",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101274": {"id": 2101274, "used": 0.24, "ready": 0.04, "cstp": 0.00, "name": "vmx-vcpu-0:flb-node-001",
"sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101276": {"id": 2101276, "used": 0.21, "ready": 0.02, "cstp": 0.00, "name": "vmx-vcpu-1:flb-node-001",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101290": {"id": 2101290, "used": 0.76, "ready": 0.07, "cstp": 0.00, "name": "vmx-vcpu-0:W2K8R2-diag02",
"sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101292": {"id": 2101292, "used": 0.19, "ready": 0.09, "cstp": 0.00, "name": "vmx-vcpu-1:W2K8R2-diag02",
"sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101746": {"id": 2101746, "used": 0.55, "ready": 0.04, "cstp": 0.00, "name": "vmx-vcpu-0:Photon-01-ch01.home.uw.cz",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 4, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2101748": {"id": 2101748, "used": 0.51, "ready": 0.06, "cstp": 0.00, "name": "vmx-vcpu-0:FreeBSD-01-is02.home.uw.cz",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0,
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] },
"2104245": {"id": 2104245, "used": 11.60, "ready": 0.21, "cstp": 0.00, "name": "vmx-vcpu-0:vc01.home.uw.cz",
"sys": 0.00, "sysoverlap": 0.04, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 8, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2104248": {"id": 2104248, "used": 13.66, "ready": 0.16, "cstp": 0.00, "name": "vmx-vcpu-1:vc01.home.uw.cz",
"sys": 0.00, "sysoverlap": 0.05, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2104249": {"id": 2104249, "used": 12.54, "ready": 0.16, "cstp": 0.00, "name": "vmx-vcpu-2:vc01.home.uw.cz",
"sys": 0.00, "sysoverlap": 0.05, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2104250": {"id": 2104250, "used": 13.03, "ready": 0.18, "cstp": 0.00, "name": "vmx-vcpu-3:vc01.home.uw.cz",
"sys": 0.00, "sysoverlap": 0.05, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"3550389": {"id": 3550389, "used": 0.01, "ready": 0.00, "cstp": 0.00, "name": "net-stats",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0 } },
"sys": {
"2097643": {"id": 2097643, "used": 0.19, "ready": 0.05, "cstp": 0.00, "name": "vmnic0-pollWorld-0",
"sys": 0.00, "sysoverlap": 0.11, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097644": {"id": 2097644, "used": 0.09, "ready": 0.02, "cstp": 0.00, "name": "vmnic1-pollWorld-0",
"sys": 0.00, "sysoverlap": 0.04, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097832": {"id": 2097832, "used": 0.08, "ready": 0.01, "cstp": 0.00, "name": "vmk0-rx-0",
"sys": 0.00, "sysoverlap": 0.04, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097833": {"id": 2097833, "used": 0.05, "ready": 0.02, "cstp": 0.00, "name": "vmk0-tx",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097834": {"id": 2097834, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk1-rx-0",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097835": {"id": 2097835, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk1-tx",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097836": {"id": 2097836, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk2-rx-0",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097837": {"id": 2097837, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk2-tx",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097838": {"id": 2097838, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk3-rx-0",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097839": {"id": 2097839, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk3-tx",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097840": {"id": 2097840, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk4-rx-0",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097841": {"id": 2097841, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk4-tx",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097842": {"id": 2097842, "used": 0.11, "ready": 0.02, "cstp": 0.00, "name": "vmk5-rx-0",
"sys": 0.00, "sysoverlap": 0.07, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097843": {"id": 2097843, "used": 0.08, "ready": 0.03, "cstp": 0.00, "name": "vmk5-tx",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097863": {"id": 2097863, "used": 0.05, "ready": 0.04, "cstp": 0.00, "name": "hclk-sched-vmnic0",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097864": {"id": 2097864, "used": 0.00, "ready": 0.01, "cstp": 0.00, "name": "hclk-watchdog-vmnic0",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2097867": {"id": 2097867, "used": 0.02, "ready": 0.02, "cstp": 0.00, "name": "hclk-sched-vmnic1",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2097868": {"id": 2097868, "used": 0.00, "ready": 0.01, "cstp": 0.00, "name": "hclk-watchdog-vmnic1",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0,
"latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] },
"2101283": {"id": 2101283, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101282",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2101072": {"id": 2101072, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101071",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2101254": {"id": 2101254, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101253",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2101257": {"id": 2101257, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101256",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2101595": {"id": 2101595, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101594",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2101597": {"id": 2101597, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101596",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] },
"2104032": {"id": 2104032, "used": 0.02, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2104031",
"sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0,
"latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] } },
"cpu": { "topology": { "core": 2, "llc": 12, "package": 12},
"used": [ 9.26, 5.53, 3.48, 6.41, 2.46, 6.57, 5.46, 4.98, 7.04, 3.11, 4.47, 5.17, 63.92],
"wdt": [ 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
"sys": [ 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
"vcpu": [ 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00]},
"overhead": {"vcpu": [ "3550389" ] } }
] }
Now we have to check what kind of threads these are
{"name": "vmnic0", "switch": "DvsPortset-1", "id": 33554434, "mac": "90:b1:1c:13:fc:14", "rxmode": 0, "tunemode": 2, "uplink": "true",
"ens": false,
"txpps": 79, "txmbps": 0.9, "txsize": 1446, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.2, "rxsize": 1881, "rxeps": 0.00,
"vmnic": {"devname": "vmnic0.ntg3",
"txpps": 79, "txmbps": 0.9, "txsize": 1500, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.3, "rxsize": 1959, "rxeps": 0.00 },
"sys": [ "2097643", "2097863", "2097864" ]},
Use VSISH
[root@esx21:~] vsish
and display the names of the particular threads to identify whether they are network pools. In the
output below we see only one network pool used for vmnic0.
/> cat /world/2097643/name
vmnic0-pollWorld-0
/> cat /world/2097863/name
hclk-sched-vmnic0
/> cat /world/2097864/name
hclk-watchdog-vmnic0
Here we see only one pollWorld (Rx thread) because this physical NIC does not support
VMDq/NetQueue.
If the NetQueue feature works properly, we should see multiple software threads for a VMNIC that has
multiple hardware queues. The following is the output from an ESXi host with an Intel X710, where we
have 16 NetPools (Rx threads).
[root@czchoes595:~] vsish
/> cat /world/2098742/name
vmnic1-pollWorld-0
/> cat /world/2098743/name
vmnic1-pollWorld-1
/> cat /world/2098744/name
vmnic1-pollWorld-2
/> cat /world/2098745/name
vmnic1-pollWorld-3
/> cat /world/2098746/name
vmnic1-pollWorld-4
/> cat /world/2098747/name
vmnic1-pollWorld-5
/> cat /world/2098748/name
vmnic1-pollWorld-6
/> cat /world/2098749/name
vmnic1-pollWorld-7
/> cat /world/2098750/name
vmnic1-pollWorld-8
/> cat /world/2098751/name
vmnic1-pollWorld-9
/> cat /world/2098752/name
vmnic1-pollWorld-10
/> cat /world/2098753/name
vmnic1-pollWorld-11
/> cat /world/2098754/name
vmnic1-pollWorld-12
/> cat /world/2098755/name
vmnic1-pollWorld-13
/> cat /world/2098756/name
vmnic1-pollWorld-14
/> cat /world/2098757/name
vmnic1-pollWorld-15
/> cat /world/2099014/name
hclk-sched-vmnic1
/> cat /world/2099015/name
hclk-watchdog-vmnic1
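Rather than querying each world ID one by one, the same lookup can be scripted directly in the ESXi
shell. The following is only a minimal sketch; the world IDs are examples taken from the
net-stats -A -t vW output above and will be different on every host.
# Print the name of every VMkernel world associated with a vmnic
# (example world IDs - replace them with the "sys" IDs reported by net-stats -A -t vW)
for wid in 2098742 2098743 2099014 2099015; do
  echo -n "$wid: "
  vsish -e cat /world/$wid/name
done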
Diagnostic commands
In this section we document the diagnostic commands that should be run on each system to
understand the implementation details of NIC offload capabilities and network traffic queueing.
ESXCLI commands are documented in the ESXCLI command reference:
https://code.vmware.com/docs/11743/esxi-7-0-esxcli-command-
reference/namespace/esxcli_network.html
For further detail, you can watch the vmkernel log during execution of the commands below, as the
NIC driver can log interesting output there.
tail -f /var/log/vmkernel.log
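When hosts with both NIC vendors are being watched at the same time, it can help to filter the log for
the relevant driver modules. This is just a grep filter on the standard log file; the module names
match the drivers used in this environment.
tail -f /var/log/vmkernel.log | grep -i -E 'i40en|qedentv'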
Intel X710 diagnostic command outputs
ESXi Inventory - Intel details
Collect hardware and ESXi inventory details.
esxcli system version get
esxcli hardware platform get
esxcli hardware cpu global get
smbiosDump
WebBrowser https://192.168.4.121/cgi-bin/esxcfg-info.cgi
HPE ProLiant DL560 Gen10 | BIOS: U34 | Date (ISO-8601): 2020-04-08
VMware ESXi 6.7.0 build-16075168 (6.7 U3)
NIC Model:
Intel(R) Ethernet Controller X710 for 10GbE SFP+
2 NICs (vmnic1, vmnic3) in UP state
Driver information
NIC inventory
esxcli network nic get -n <VMNIC>
[root@czchoes595:~] esxcli network nic get -n vmnic1
Advertised Auto Negotiation: true
Advertised Link Modes: Auto, 10000BaseSR/Full
Auto Negotiation: true
Cable Type: FIBRE
Current Message Level: 0
Driver Info:
Bus Info: 0000:11:00:0
Driver: i40en
Firmware Version: 10.51.5
Version: 1.9.5
Link Detected: true
Link Status: Up
Name: vmnic1
PHYAddress: 0
Pause Autonegotiate: false
Pause RX: false
Pause TX: false
Supported Ports: FIBRE
Supports Auto Negotiation: true
Supports Pause: true
Supports Wakeon: false
Transceiver:
Virtual Address: 00:50:56:57:f7:b4
Wakeon: None
NIC device info
vmkchdev -l | grep vmnic
VID DID SVID SDID
8086 1572 103c 22fd
To list all VIB modules and understand which drivers are “Inbox” (native VMware drivers) and which
are “Async” (from partners such as Intel or Marvell/QLogic)
esxcli software vib list
[root@czchoes595:~] esxcli software vib list
Name Version Vendor Acceptance Level Install Date
----------------------------- ---------------------------------- --------- ---------------- ------------
lsi-mr3 7.706.08.00-1OEM.670.0.0.8169922 Avago VMwareCertified 2020-09-15
bnxtnet 214.0.230.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15
bnxtroce 214.0.187.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15
elx-esx-libelxima-8169922.so 12.0.1188.0-03 ELX VMwareCertified 2020-09-15
brcmfcoe 12.0.1278.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
elxiscsi 12.0.1188.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
elxnet 12.0.1216.4-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
lpfc 12.4.270.6-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15
amsd 670.11.5.0-16.7535516 HPE PartnerSupported 2020-09-15
bootcfg 6.7.0.02-06.00.14.7535516 HPE PartnerSupported 2020-09-15
conrep 6.7.0.03-04.00.34.7535516 HPE PartnerSupported 2020-09-15
cru 670.6.7.10.14-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15
fc-enablement 670.3.50.16-7535516 HPE PartnerSupported 2020-09-15
hponcfg 6.7.0.5.5-0.18.7535516 HPE PartnerSupported 2020-09-15
ilo 670.10.2.0.2-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15
oem-build 670.U3.10.5.5-7535516 HPE PartnerSupported 2020-09-15
scsi-hpdsa 5.5.0.68-1OEM.550.0.0.1331820 HPE PartnerSupported 2020-09-15
smx-provider 670.03.16.00.3-7535516 HPE VMwareAccepted 2020-09-15
ssacli 4.17.6.0-6.7.0.7535516.hpe HPE PartnerSupported 2020-09-15
sut 6.7.0.2.5.0.0-83 HPE PartnerSupported 2020-09-15
testevent 6.7.0.02-01.00.12.7535516 HPE PartnerSupported 2020-09-15
i40en 1.9.5-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15
igbn 1.4.10-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15
ixgben 1.7.20-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15
nmlx5-core 4.17.15.16-1OEM.670.0.0.8169922 MEL VMwareCertified 2020-09-15
nmlx5-rdma 4.17.15.16-1OEM.670.0.0.8169922 MEL VMwareCertified 2020-09-15
nmst 4.12.0.105-1OEM.650.0.0.4598673 MEL PartnerSupported 2020-09-15
smartpqi 1.0.4.3008-1OEM.670.0.0.8169922 MSCC VMwareCertified 2020-09-15
nhpsa 2.0.44-1OEM.670.0.0.8169922 Microsemi VMwareCertified 2020-09-15
qcnic 1.0.27.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qedentv 3.11.16.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qedf 1.3.41.0-1OEM.600.0.0.2768847 QLC VMwareCertified 2020-09-15
qedi 2.10.19.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qedrntv 3.11.16.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qfle3 1.0.87.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qfle3f 1.0.75.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qfle3i 1.0.25.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15
qlnativefc 2.1.81.0-1OEM.600.0.0.2768847 QLogic VMwareCertified 2020-09-16
ata-libata-92 3.00.9.2-16vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-amd 0.3.10-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-atiixp 0.4.6-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-cmd64x 0.2.5-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-hpt3x2n 0.3.4-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-pdc2027x 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-serverworks 0.4.3-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-sil680 0.4.8-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ata-pata-via 0.3.3-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
block-cciss 3.6.14-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
char-random 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ehci-ehci-hcd 1.0-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
hid-hid 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
iavmd 1.2.0.1011-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ima-qla4xxx 2.02.18-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
ipmi-ipmi-devintf 39.1-5vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
ipmi-ipmi-msghandler 39.1-5vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
ipmi-ipmi-si-drv 39.1-5vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
iser 1.0.0.0-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
lpnic 11.4.59.0-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
lsi-msgpt2 20.00.06.00-2vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15
lsi-msgpt35 09.00.00.00-5vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15
lsi-msgpt3 17.00.02.00-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15
misc-drivers 6.7.0-2.48.13006603 VMW VMwareCertified 2020-09-15
mtip32xx-native 3.9.8-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
ne1000 0.8.4-2vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15
nenic 1.0.29.0-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15
net-cdc-ether 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-e1000 8.0.3.1-5vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-e1000e 3.2.2.1-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-enic 2.1.2.38-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-fcoe 1.0.29.9.3-7vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-forcedeth 0.61-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-libfcoe-92 1.0.24.9.4-8vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-mlx4-core 1.9.7.0-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-mlx4-en 1.9.7.0-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-nx-nic 5.0.621-5vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-tg3 3.131d.v60.4-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-usbnet 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
net-vmxnet3 1.1.3.0-3vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16
nfnic 4.0.0.44-0vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16
nmlx4-core 3.17.13.1-1vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15
nmlx4-en 3.17.13.1-1vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15
nmlx4-rdma 3.17.13.1-1vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15
ntg3 4.1.3.2-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
nvme 1.2.2.28-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15
nvmxnet3-ens 2.0.0.21-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
nvmxnet3 2.0.0.29-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
ohci-usb-ohci 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
pvscsi 0.1-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
qflge 1.1.0.11-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-ahci 3.0-26vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-ata-piix 2.12-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-sata-nv 3.5-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-sata-promise 2.12-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-sata-sil24 1.1-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-sata-sil 2.3-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sata-sata-svw 2.3-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-aacraid 1.1.5.1-9vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-adp94xx 1.0.8.12-6vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-aic79xx 3.1-6vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-fnic 1.5.0.45-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-ips 7.12.05-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-iscsi-linux-92 1.0.0.2-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-libfc-92 1.0.40.9.3-5vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-megaraid-mbox 2.20.5.1-6vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-megaraid-sas 6.603.55.00-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-megaraid2 2.00.4-9vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-mpt2sas 19.00.00.00-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-mptsas 4.23.01.00-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-mptspi 4.23.01.00-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
scsi-qla4xxx 5.01.03.2-7vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
sfvmk 1.0.0.1003-7vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16
shim-iscsi-linux-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-iscsi-linux-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-libata-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-libata-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-libfc-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-libfc-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-libfcoe-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-libfcoe-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-vmklinux-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-vmklinux-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
shim-vmklinux-9-2-3-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
uhci-usb-uhci 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
usb-storage-usb-storage 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
usbcore-usb 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
vmkata 0.1-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
vmkfcoe 1.0.0.1-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15
vmkplexer-vmkplexer 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15
vmkusb 0.1-1vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16
vmw-ahci 1.2.8-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15
xhci-xhci 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
cpu-microcode 6.7.0-3.77.15018017 VMware VMwareCertified 2020-09-15
esx-base 6.7.0-3.104.16075168 VMware VMwareCertified 2020-09-16
esx-dvfilter-generic-fastpath 6.7.0-0.0.8169922 VMware VMwareCertified 2020-09-15
esx-ui 1.33.7-15803439 VMware VMwareCertified 2020-09-16
esx-update 6.7.0-3.104.16075168 VMware VMwareCertified 2020-09-16
esx-xserver 6.7.0-3.73.14320388 VMware VMwareCertified 2020-09-15
lsu-hp-hpsa-plugin 2.0.0-16vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15
lsu-intel-vmd-plugin 1.0.0-2vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15
lsu-lsi-drivers-plugin 1.0.0-1vmw.670.2.48.13006603 VMware VMwareCertified 2020-09-15
lsu-lsi-lsi-mr3-plugin 1.0.0-13vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15
lsu-lsi-lsi-msgpt3-plugin 1.0.0-9vmw.670.2.48.13006603 VMware VMwareCertified 2020-09-15
lsu-lsi-megaraid-sas-plugin 1.0.0-9vmw.670.0.0.8169922 VMware VMwareCertified 2020-09-15
lsu-lsi-mpt2sas-plugin 2.0.0-7vmw.670.0.0.8169922 VMware VMwareCertified 2020-09-15
lsu-smartpqi-plugin 1.0.0-3vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15
native-misc-drivers 6.7.0-3.89.15160138 VMware VMwareCertified 2020-09-15
rste 2.0.2.0088-7vmw.670.0.0.8169922 VMware VMwareCertified 2020-09-15
vmware-esx-esxcli-nvme-plugin 1.2.0.36-2.48.13006603 VMware VMwareCertified 2020-09-15
vmware-fdm 6.7.0-16708996 VMware VMwareCertified 2020-09-20
vsan 6.7.0-3.104.15985001 VMware VMwareCertified 2020-09-16
vsanhealth 6.7.0-3.104.15984994 VMware VMwareCertified 2020-09-16
tools-light 11.0.5.15389592-15999342 VMware VMwareCertified 2020-09-16
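The VIB listing above is long. To narrow it down to only the network drivers relevant for this
comparison, a simple filter can be applied (the pattern below just matches the Intel and QLogic
driver names listed above).
esxcli software vib list | grep -i -E 'i40en|qed|qfle'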
Driver module settings
Identify NIC driver module name
esxcli network nic get -n vmnic0
Show driver module parameters
esxcli system module parameters list -m <DRIVER-MODULE-NAME>
[root@czchoes595:~] esxcli system module parameters list -m i40en
Name Type Value Description
------------- ------------ ----- ----------------------------------------------------------------------------------
EEE array of int Energy Efficient Ethernet feature (EEE): 0 = disable, 1 = enable, (default = 1)
LLDP array of int Link Layer Discovery Protocol (LLDP) agent: 0 = disable, 1 = enable, (default = 1)
RxITR int Default RX interrupt interval (0..0xFFF), in microseconds (default = 50)
TxITR int Default TX interrupt interval (0..0xFFF), in microseconds, (default = 100)
VMDQ array of int Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default =8)
max_vfs array of int Maximum number of VFs to be enabled (0..128)
trust_all_vfs array of int Always set all VFs to trusted mode 0 = disable (default), other = enable
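Driver module parameters can be changed with esxcli system module parameters set, followed by a host
reboot. The example below is only a sketch of the syntax; the exact VMDQ value format (one value per
NIC port) is driver-specific and should be verified against the i40en documentation before use.
# Example only - request 8 VMDq queues per i40en port (value format is driver-specific)
esxcli system module parameters set -m i40en -p "VMDQ=8,8,8,8"
# Verify the configured value (takes effect after a host reboot)
esxcli system module parameters list -m i40en | grep VMDQ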
TSO
To verify that your pNIC supports TSO and whether it is enabled on your ESXi host
esxcli network nic tso get
[root@czchoes595:~] esxcli network nic tso get
NIC Value
------ -----
vmnic0 on
vmnic1 on
vmnic2 on
vmnic3 on
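If TSO needs to be ruled out during troubleshooting, hardware TSO can be toggled host-wide via the
/Net/UseHwTSO advanced setting. This is shown only as a sketch of a temporary test; offloads should
be re-enabled afterwards.
# Disable hardware TSO for IPv4 (temporary troubleshooting step)
esxcli system settings advanced set -o /Net/UseHwTSO -i 0
# Re-enable it after the test
esxcli system settings advanced set -o /Net/UseHwTSO -i 1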
LRO
To display the current LRO configuration values
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
[root@czchoes595:~] esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
Path: /Net/TcpipDefLROEnabled
Type: integer
Int Value: 1
Default Int Value: 1
Min Value: 0
Max Value: 1
String Value:
Default String Value:
Valid Characters:
Description: LRO enabled for TCP/IP
Check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
[root@czchoes595:~] esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
Path: /Net/VmxnetLROMaxLength
Type: integer
Int Value: 32000
Default Int Value: 32000
Min Value: 1
Max Value: 65535
String Value:
Default String Value:
Valid Characters:
Description: LRO default max length for TCP/IP
To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO,
software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
[root@czchoes595:~] esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
Path: /Net/Vmxnet3HwLRO
Type: integer
Int Value: 1
Default Int Value: 1
Min Value: 0
Max Value: 1
String Value:
Default String Value:
Valid Characters:
Description: Whether to enable HW LRO on pkts going to a LPD capable vmxnet3
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
[root@czchoes595:~] esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
Path: /Net/Vmxnet3SwLRO
Type: integer
Int Value: 1
Default Int Value: 1
Min Value: 0
Max Value: 1
String Value:
Default String Value:
Valid Characters:
Description: Whether to perform SW LRO on pkts going to a LPD capable vmxnet3
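For troubleshooting, LRO can be disabled by switching the same advanced options listed above to 0.
Again, this is a temporary test only and the defaults should be restored afterwards.
# Disable LRO for the default TCP/IP stack and for VMXNET3 (HW and SW LRO)
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0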
CSO (Checksum Offload)
To verify that your pNIC supports Checksum Offload (CSO) on your ESXi host
esxcli network nic cso get
[root@czchoes595:~] esxcli network nic cso get
NIC RX Checksum Offload TX Checksum Offload
------ ------------------- -------------------
vmnic0 on on
vmnic1 on on
vmnic2 on on
vmnic3 on on
Net Queue Support
Get netqueue support on VMkernel
esxcli system settings kernel list | grep netNetqueueEnabled
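For completeness, NetQueue can also be disabled globally through the same kernel settings namespace
(a host reboot is required). This is only a sketch of the syntax for a controlled test; NetQueue
should normally stay enabled.
# Disable NetQueue globally (reboot required); re-enable with --value="TRUE"
esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE"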
Net Queue Count
Get netqueue count on a nic
esxcli network nic queue count get
[root@czchoes595:~] esxcli network nic queue count get
NIC Tx netqueue count Rx netqueue count
------ ----------------- -----------------
vmnic0 0 0
vmnic1 8 8
vmnic2 0 0
vmnic3 8 8
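More detail about the RX queues actually allocated on a vmnic can be read from vsish. The node paths
below reflect the layout observed on ESXi 6.x and may differ between releases, so treat them as an
assumption to be verified on the host.
# Show RX queue information for vmnic1 (vsish node layout may vary by ESXi release)
vsish -e get /net/pNics/vmnic1/rxqueues/info
# List the individual RX queues that have been allocated
vsish -e ls /net/pNics/vmnic1/rxqueues/queues/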
Net Filter Classes
List the netqueue supported filterclass of all physical NICs currently installed and loaded on the
system.
esxcli network nic queue filterclass list
[root@czchoes595:~] esxcli network nic queue filterclass list
NIC MacOnly VlanOnly VlanMac Vxlan Geneve GenericEncap
------ ------- -------- ------- ----- ------ ------------
vmnic0 false false false false false false
vmnic1 true true true true true false
vmnic2 false false false false false false
vmnic3 true true true true true false
List the load balancer settings
List the load balancer settings of all the installed and loaded physical NICs. (S:supported,
U:unsupported, N:not-applicable, A:allowed, D:disallowed).
esxcli network nic queue loadbalancer list
[root@czchoes595:~] esxcli network nic queue loadbalancer list
NIC RxQPair RxQNoFeature PreEmptibleQ RxQLatency RxDynamicLB DynamicQPool MacLearnLB RSS LRO GeneveOAM
------ ------- ------------ ------------ ---------- ----------- ------------ ---------- --- --- ---------
vmnic0 UA ND UA UA NA UA NA UA UA UA
vmnic1 SA ND UA UA NA SA NA UA UA UA
vmnic2 UA ND UA UA NA UA NA UA UA UA
vmnic3 SA ND UA UA NA SA NA UA UA UA
Details of netqueue balancer plugins
Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the
system
esxcli network nic queue loadbalancer plugin list
[root@czchoes595:~] esxcli network nic queue loadbalancer plugin list
NIC Module Name Plugin Name Enabled Description
------ -------------- --------------------- ------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------
vmnic0 load-based-bal filter-packer true Perform packing of filters from two queues
vmnic0 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter
vmnic0 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues
vmnic0 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness
vmnic0 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine
vmnic0 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter
vmnic0 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters
vmnic0 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa
vmnic1 load-based-bal filter-packer true Perform packing of filters from two queues
vmnic1 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter
vmnic1 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues
vmnic1 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness
vmnic1 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine
vmnic1 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter
vmnic1 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters
vmnic1 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa
vmnic2 load-based-bal filter-packer true Perform packing of filters from two queues
vmnic2 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter
vmnic2 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues
vmnic2 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness
vmnic2 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine
vmnic2 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter
vmnic2 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters
vmnic2 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa
vmnic3 load-based-bal filter-packer true Perform packing of filters from two queues
vmnic3 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter
vmnic3 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues
vmnic3 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness
vmnic3 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine
vmnic3 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter
vmnic3 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters
vmnic3 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa
Net Queue balancer state
Netqueue balancer state of all physical NICs currently installed and loaded on the system
esxcli network nic queue loadbalancer state list
[root@czchoes595:~] esxcli network nic queue loadbalancer state list
NIC Enabled
------ -------
vmnic0 true
vmnic1 true
vmnic2 true
vmnic3 true
RX/TX ring buffer current parameters
Get current RX/TX ring buffer parameters of a NIC
esxcli network nic ring current get -n <VMNIC>
[root@czchoes595:~] esxcli network nic ring current get -n vmnic0
RX: 1024
RX Mini: 0
RX Jumbo: 0
TX: 1024
RX/TX ring buffer parameters max values
Get preset maximums for RX/TX ring buffer parameters of a NIC.
esxcli network nic ring preset get -n vmnic0
[root@czchoes595:~] esxcli network nic ring preset get -n vmnic0
RX: 4096
RX Mini: 0
RX Jumbo: 0
TX: 4096
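Ring sizes can be increased up to the preset maximum reported above. A minimal example follows; the
change applies immediately, and the values should be kept identical on all hosts being compared.
# Increase the RX/TX ring sizes of vmnic0 to the advertised maximum of 4096
esxcli network nic ring current set -n vmnic0 -r 4096 -t 4096
# Confirm the change
esxcli network nic ring current get -n vmnic0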
SG (Scatter and Gather)
Scatter and Gather (vectored I/O) is a concept originally used with hard disks; it enhances the
performance of large I/O requests when supported by the hardware.
esxcli network nic sg get
[root@czchoes595:~] esxcli network nic sg get
NIC Value
------ -----
vmnic0 on
vmnic1 on
vmnic2 on
vmnic3 on
List software simulation settings
List software simulation settings of physical NICs currently installed and loaded on the system.
esxcli network nic software list
[root@czchoes595:~] esxcli network nic software list
NIC     IPv4 CSO  IPv4 TSO  Scatter Gather  Offset Based Offload  VXLAN Encap  Geneve Offload  IPv6 TSO  IPv6 TSO Ext  IPv6 CSO  IPv6 CSO Ext  High DMA  Scatter Gather MP  VLAN Tagging  VLAN Untagging
------  --------  --------  --------------  --------------------  -----------  --------------  --------  ------------  --------  ------------  --------  -----------------  ------------  --------------
vmnic0  off       off       off             off                   off          off             off       off           off       off           off       off                off           off
vmnic1  off       off       off             off                   off          off             off       off           off       off           off       off                off           off
vmnic2  off       off       off             off                   off          off             off       off           off       off           off       off                off           off
vmnic3  off       off       off             off                   off          off             off       off           off       off           off       off                off           off
[root@czchoes595:~]
RSS
We do not see any RSS-related driver parameters; therefore, driver i40en 1.9.5 does not support
RSS.
On top of that, we have been assured by VMware Engineering that the inbox driver i40en 1.9.5 does
not support RSS.
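A quick way to check whether any given driver version exposes an RSS parameter at all is to filter
its module parameter list; the same check can be repeated for the QLogic qedentv driver on the other
hosts.
# No output means the driver version does not expose an RSS-related module parameter
esxcli system module parameters list -m i40en | grep -i RSS
esxcli system module parameters list -m qedentv | grep -i RSS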
VMkernel software threads per VMNIC
Show the number of VMkernel software threads per VMNIC
net-stats -A -t vW
vsish
/> cat /world/<WORLD-ID-1-IN-VMNIC>/name
/> cat /world/<WORLD-ID-2-IN-VMNIC>/name
/> cat /world/<WORLD-ID-3-IN-VMNIC>/name
…
/> cat /world/<WORLD-ID-n-IN-VMNIC>/name
VMNIC1
[root@czchoes595:~] vsish
/> cat /world/2098742/name
vmnic1-pollWorld-0
/> cat /world/2098743/name
vmnic1-pollWorld-1
/> cat /world/2098744/name
vmnic1-pollWorld-2
/> cat /world/2098745/name
vmnic1-pollWorld-3
/> cat /world/2098746/name
vmnic1-pollWorld-4
/> cat /world/2098747/name
vmnic1-pollWorld-5
/> cat /world/2098748/name
vmnic1-pollWorld-6
/> cat /world/2098749/name
vmnic1-pollWorld-7
/> cat /world/2098750/name
vmnic1-pollWorld-8
/> cat /world/2098751/name
vmnic1-pollWorld-9
/> cat /world/2098752/name
vmnic1-pollWorld-10
/> cat /world/2098753/name
vmnic1-pollWorld-11
/> cat /world/2098754/name
vmnic1-pollWorld-12
/> cat /world/2098755/name
vmnic1-pollWorld-13
/> cat /world/2098756/name
vmnic1-pollWorld-14
/> cat /world/2098757/name
vmnic1-pollWorld-15
/> cat /world/2099014/name
hclk-sched-vmnic1
/> cat /world/2099015/name
hclk-watchdog-vmnic1
VMNIC3
/> cat /world/2098789/name
vmnic3-pollWorld-0
/> cat /world/2098790/name
vmnic3-pollWorld-1
/> cat /world/2098791/name
vmnic3-pollWorld-2
/> cat /world/2098792/name
vmnic3-pollWorld-3
/> cat /world/2098793/name
vmnic3-pollWorld-4
/> cat /world/2098794/name
vmnic3-pollWorld-5
/> cat /world/2098795/name
vmnic3-pollWorld-6
/> cat /world/2098796/name
vmnic3-pollWorld-7
/> cat /world/2098797/name
vmnic3-pollWorld-8
/> cat /world/2098798/name
vmnic3-pollWorld-9
/> cat /world/2098799/name
vmnic3-pollWorld-10
/> cat /world/2098800/name
vmnic3-pollWorld-11
/> cat /world/2098801/name
vmnic3-pollWorld-12
/> cat /world/2098802/name
vmnic3-pollWorld-13
/> cat /world/2098803/name
vmnic3-pollWorld-14
/> cat /world/2098804/name
vmnic3-pollWorld-15
/> cat /world/2099003/name
hclk-sched-vmnic3
/> cat /world/2099004/name
hclk-watchdog-vmnic3
Transcript: New from BookNet Canada for 2024: BNC BiblioShare - Tech Forum 2024
 

VMware ESXi - Intel and Qlogic NIC throughput difference v0.6

  • 3. Problem description When we run VMs on ESXi 6.7 host with NIC Intel(R) Ethernet Controller X710 for 10GbE SFP+ we see the total network throughput 300 MB/s (~50% transmit, ~50% receive) which is approximately ~1.5 Gbps Tx / ~1.5 Gbps Rx When we run VMs on ESXi 6.7 host with NIC Qlogic FastLinQ QL41xxx 1/10/25 GbE Ethernet Adapter we see the total network throughput 100 MB/s (~50% transmit, ~50% receive) which is approximately ~0.5 Gbps Tx / ~0.5 Gbps Rx Following is screenshot provided by application owner of their application latency monitoring, the spike at about 1:18 – 1:20PM when we were testing moving (via vMotion) a single VM to a host with qlogic network card where no other VM was running and then migrating it back. The latency increased from typical 20 ms to almost 2000 ms make it application unresponsive and useless. The performance problem is observed only on OpenShift environment where majority of network traffic goes through internal OpenShift (K8s) Load Balancer. OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. This method uses a load balancer. This means that network traffic is to/from single VM, single vNIC and single MAC address. The architecture is visualized in figure below.
  • 4. Now, the question is why the same VM, running on identical server hardware except for the NIC, is able to easily handle 300 MB/s (~3 Gbps) on the Intel NIC and only 100 MB/s (~1 Gbps) on the QLogic NIC?
NIC Capabilities Comparison - Intel versus QLogic
In the table below, we have the comparison of key network capabilities and differences between the ESXi hosts having the Intel and the QLogic network adapter.
Capability | Intel X710 | QLogic QL41xxx | Diff
Driver type (INBOX/ASYNC) | ASYNC | ASYNC | -
TSO/LSO | Enabled, supported | Enabled, supported | -
LRO | Enabled | Enabled | -
CSO | On | On | -
# of VMDq (hw queues) | Up to 256 | Up to 208 | -
Net Queue Count / vmnic | 8 Rx Netqueues, 8 Tx Netqueues | 20 Rx Netqueues, 20 Tx Netqueues | More software queues observed on QLogic, but they are created dynamically based on demand.
Net Filter Classes | MacOnly = true, VlanOnly = true, VlanMac = true, Vxlan = true, Geneve = true, GRE = false | MacOnly = true, VlanOnly = false, VlanMac = false, Vxlan = true, Geneve = true, GRE = false | QLogic does not support VlanOnly and VlanMac, but it should not be a big deal.
Net Queue load balancer settings * | RxQPair = SA, RxQNoFeature = ND, PreEmptibleQ = UA, RxQLatency = UA, RxDynamicLB = NA, DynamicQPool = SA, MacLearnLB = NA, RSS = UA, LRO = UA, GeneveOAM = UA | RxQPair = SA, RxQNoFeature = ND, PreEmptibleQ = UA, RxQLatency = UA, RxDynamicLB = NA, DynamicQPool = SA, MacLearnLB = NA, RSS = UA, LRO = SA, GeneveOAM = UA | LRO difference: unsupported on Intel, supported on QLogic.
Net Queue balancer state | Enabled | Enabled | -
RX/TX ring buffer current parameters | RX: 1024, RX Mini: 0, RX Jumbo: 0, TX: 1024 | RX: 8192, RX Mini: 0, RX Jumbo: 0, TX: 8192 | QLogic has deeper buffers; it should be an advantage.
RX/TX ring buffer max values | RX: 4096, RX Mini: 0, RX Jumbo: 0, TX: 4096 | RX: 8192, RX Mini: 0, RX Jumbo: 0, TX: 8192 | QLogic supports deeper buffers; it should be an advantage.
RSS | NO - not available in driver version 1.9.5, see VMware HCL | YES - RSS not set explicitly | RSS difference: ESXi with the Intel NIC does not support RSS, ESXi with the QLogic NIC uses RSS.
# of NetPoll (RX) Threads ** | 16 | 32 | More software threads are observed in the VMkernel for QLogic. This should be an advantage.
  • 5. # of NetPoll TX Threads / vmnic *** | 1 | TBD, most probably 1 | -
Legend:
* S: supported, U: unsupported, N: not-applicable, A: allowed, D: disallowed
** NetPoll threads are software threads between the VMNIC and the vSwitch for data transmit/receive
*** TXPool threads are software threads between the VMNIC and the vSwitch for data transmit/receive
Questions and answers
Q: How does the NetQueue Net Filter Class MacOnly work? Is it load balancing based on the source, the destination, or both src-dst MAC addresses?
A: Based on the destination MAC address.
Q: How to get and validate RSS information from the Intel driver module?
A: This information is publicly documented in the VMware HCL (VMware Compatibility Guide, aka VCG) on the particular NIC record. Technically, you can list the driver module parameters with the command
esxcli system module parameters list -m <DRIVER-MODULE-NAME>
and check if there are RSS-relevant parameters.
Q: How to validate that RSS is really active in the NIC driver?
A: You can use the following ESXi shell command (for further details see section "How to validate RSS is enabled in VMkernel"):
vsish -e cat /net/pNics/vmnic1/rxqueues/info
Suspicions and hypotheses
Based on the observed behavior, it seems that the culprit is the QLogic NIC driver/firmware.
Suspicion #1: The problem might or might not be caused by some known or unknown bug in the NIC driver/firmware. Such a bug can be related to TSO (LRO), CSO, VMDq/NetQueue, etc.
Hypothesis #2: Intel X710 with the used driver (1.9.5) does not support RSS. However, QLogic supports RSS, which is enabled by default. RSS can be leveraged for advanced queueing, multi-threading and load balancing, potentially improving ingress and egress network throughput for a single VM across NIC hardware queues and software threads within the ESXi VMkernel, because more CPU cycles from physical CPU cores are used for network operations. However, NetQueue and NetQueue RSS bring additional software complexity into the NIC driver, with some potential for a bug. See KB https://kb.vmware.com/s/article/68147 for an example of such a bug.
Hypothesis #3: Another software bug in the QLogic driver/firmware affecting network throughput.
However, the problem can be totally different. The root cause cannot be fully proven, and the problem cannot be successfully resolved, without a test environment (2x ESXi hosts with Intel X710, 2x ESXi hosts with QLogic FastLinQ QL41xxx) and real performance testing.
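As a quick way to substantiate Hypothesis #2, the module parameters of both drivers can be listed and filtered for RSS-related options, as suggested in the Q&A above. A minimal sketch, assuming the driver module names i40en (Intel X710) and qedentv (QLogic QL41xxx); parameter names differ between drivers and driver versions, so an empty result only means that nothing matched the filter:
esxcli system module parameters list -m i40en | grep -i rss
esxcli system module parameters list -m qedentv | grep -iE "rss|num_queues"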
  • 6. Theory - NIC offload capabilities, queueing and multitasking In this section, we are describing how different technologies works. Specific information from the environment about Intel X710 and QLogic QL41xxx can be found in section “Diagnostic commands”. ESXi hardware inventory Before any troubleshooting or design validation, it is very good idea to collect hardware and ESXi inventory details. Following commands can be used for inventory of tested system. esxcli system version get esxcli hardware platform get esxcli hardware cpu global get smbiosDump WebBrowser https://192.168.4.121/cgi-bin/esxcfg-info.cgi NIC Driver Driver and firmware identification Another important step is driver and firmware identification. The NIC details, firmware and driver versions, and driver module name can be identified by command esxcli network nic get -n vmnic1 where the output is like [root@esx21:~] esxcli network nic get -n vmnic0 Advertised Auto Negotiation: true Advertised Link Modes: Auto, 1000BaseT/Full, 100BaseT/Half, 100BaseT/Full, 10BaseT/Half, 10BaseT/Full Auto Negotiation: true Cable Type: Twisted Pair Current Message Level: 7 Driver Info: Bus Info: 0000:01:00:0 Driver: ntg3 Firmware Version: bc 1.39 ncsi 1.5.12.0 Version: 4.1.3.2 Link Detected: true Link Status: Up Name: vmnic0 PHYAddress: 0 Pause Autonegotiate: true Pause RX: true Pause TX: true Supported Ports: TP Supports Auto Negotiation: true Supports Pause: true Supports Wakeon: true Transceiver: internal Virtual Address: 00:50:56:51:8a:31 Wakeon: Disabled
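To collect the same NIC, driver, and firmware details for every uplink in one pass, the esxcli calls above can be wrapped in a small loop. A minimal sketch for the ESXi BusyBox shell (the grep pattern only trims the output and can be adjusted as needed):
for VMNIC in $(esxcli network nic list | awk 'NR>2 {print $1}'); do
  echo "=== ${VMNIC} ==="
  esxcli network nic get -n ${VMNIC} | grep -E "Driver:|Version:"
done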
  • 7. It may come in handy to have more information available about the pNIC and the driver used if the HCL has a lot of listings. In that case, you also need the hardware ID properties to make sure you are looking at the correct driver in the HCL: o Vendor-ID(VID) o Device-ID(DID) o Sub-Vendor-ID(SVID) o Sub-Device-ID(SDID) To extract that information from your ESXi host, you can use the command vmkchdev –l | grep vmnic This will list additional hardware ID information about the vmnics like … [root@esx21:~] vmkchdev -l | grep vmnic 0000:01:00.0 14e4:165f 1028:1f5b vmkernel vmnic0 0000:01:00.1 14e4:165f 1028:1f5b vmkernel vmnic1 0000:02:00.0 14e4:165f 1028:1f5b vmkernel vmnic2 0000:02:00.1 14e4:165f 1028:1f5b vmkernel vmnic3 To understand what drivers are “Inbox” (aka native VMware) or “Async” (from partners like Intel or Marvel/QLogic) you have to list vibs and check the vedor. esxcli software vib list [root@czchoes595:~] esxcli software vib list Name Version Vendor Acceptance Level Install Date ----------------------------- ---------------------------------- --------- ---------------- ------------ lsi-mr3 7.706.08.00-1OEM.670.0.0.8169922 Avago VMwareCertified 2020-09-15 bnxtnet 214.0.230.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15 bnxtroce 214.0.187.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15 elx-esx-libelxima-8169922.so 12.0.1188.0-03 ELX VMwareCertified 2020-09-15 brcmfcoe 12.0.1278.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 elxiscsi 12.0.1188.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 elxnet 12.0.1216.4-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 lpfc 12.4.270.6-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 amsd 670.11.5.0-16.7535516 HPE PartnerSupported 2020-09-15 bootcfg 6.7.0.02-06.00.14.7535516 HPE PartnerSupported 2020-09-15 conrep 6.7.0.03-04.00.34.7535516 HPE PartnerSupported 2020-09-15 cru 670.6.7.10.14-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15 fc-enablement 670.3.50.16-7535516 HPE PartnerSupported 2020-09-15 hponcfg 6.7.0.5.5-0.18.7535516 HPE PartnerSupported 2020-09-15 ilo 670.10.2.0.2-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15 oem-build 670.U3.10.5.5-7535516 HPE PartnerSupported 2020-09-15 scsi-hpdsa 5.5.0.68-1OEM.550.0.0.1331820 HPE PartnerSupported 2020-09-15 smx-provider 670.03.16.00.3-7535516 HPE VMwareAccepted 2020-09-15 ssacli 4.17.6.0-6.7.0.7535516.hpe HPE PartnerSupported 2020-09-15 sut 6.7.0.2.5.0.0-83 HPE PartnerSupported 2020-09-15 testevent 6.7.0.02-01.00.12.7535516 HPE PartnerSupported 2020-09-15 i40en 1.9.5-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15 The command below can help you to list driver module parameters used for advanced settings. esxcli system module parameters list -m <DRIVER-MODULE-NAME> Equivalent command with little bit more details vmkload_mod -s <DRIVER-MODULE-NAME> Command esxcfg-module can show more information for particular VMkernel module. Historically there is another command (esxcfg-module) to work with VMkernel modules. For example, to show detail module info, you can use esxcfg-module -i ntg3
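For the two adapters compared in this document, it is also worth capturing whether any module options are already set, because that is where RSS and queue parameters would live. A minimal sketch using the esxcfg-module query switch (module names as identified via the VIB list above):
esxcfg-module -g i40en
esxcfg-module -g qedentv
The output has the form "<module> enabled = 1 options = '...'", as shown for qedentv later in this document.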
  • 8. NIC Driver update
Run this command to install drivers from the VIB file:
esxcli software vib install -v /path/async-driver.vib
For more information about installing async drivers in ESXi 5.x and 6.x using esxcli and an async driver VIB file, see VMware KB 2137854 at https://kb.vmware.com/s/article/2137854?lang=en_us
TSO (aka LSO)
TCP Segmentation Offload (TSO) is the equivalent of the TCP/IP Offload Engine (TOE) but modeled more on the virtual environment, where TOE is the actual NIC vendor hardware enhancement. It is also known as Large Segment Offload (LSO). To fully benefit from the performance enhancement, you must enable TSO along the complete data path on an ESXi host. If TSO is supported on the NIC, it is enabled by default. The same goes for TSO in the VMkernel layer and for the VMXNET3 VM adapter, but not per se for the TSO configuration within the guest OS. Large Receive Offload (LRO) can be seen as the exact opposite feature to TSO/LSO. It is a technique that aggregates multiple inbound network packets from a single stream into larger packets and transfers the resulting larger, but fewer, packets to the TCP stack of the host or the VM guest OS. This process results in less CPU overhead because the CPU has fewer packets to process compared to LRO being disabled. The important trade-off with LRO is that it lowers CPU overhead and potentially improves network throughput, but adds latency to the network stack. The potentially higher latency introduced by LRO is a result of the time spent aggregating smaller TCP segments into a larger segment.
LRO in the ESXi host
By default, a host is configured to use hardware TSO if its NICs support the feature. To check the LRO configuration for the default TCP/IP stack on the ESXi host, execute the following command to display the current LRO configuration values:
esxcli system settings advanced list -o /Net/TcpipDefLROEnabled
You are able to check the length of the LRO buffer by using the following esxcli command:
esxcli system settings advanced list -o /Net/VmxnetLROMaxLength
The LRO features are functional for the guest OS when the VMXNET3 virtual adapter is used. To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO, software LRO) can be issued:
esxcli system settings advanced list -o /Net/Vmxnet3HwLRO
esxcli system settings advanced list -o /Net/Vmxnet3SwLRO
You can disable LRO for all VMkernel adapters on a host with:
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0
and enable LRO again with:
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 1
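The TSO side of the same data path can be verified with analogous commands. A minimal sketch (the guest interface name ens192 is a placeholder, and the ethtool check runs inside a Linux guest, not in the ESXi shell):
esxcli network nic tso get
esxcli system settings advanced list -o /Net/UseHwTSO
ethtool -k ens192 | grep -E "tcp-segmentation-offload|large-receive-offload"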
  • 9. LRO in VMXNET3
The Large Receive Offload (LRO) feature of VMXNET3 helps deliver high throughput with lower CPU utilization by aggregating multiple received TCP segments into a larger TCP segment before delivering it up to the guest TCP stack. However, for latency-sensitive applications that rely on TCP, the time spent aggregating smaller TCP segments into a larger one adds latency. It can also affect TCP algorithms like delayed ACK, which now cause the TCP stack to delay an ACK until the two larger TCP segments are received, also adding to the end-to-end latency of the application. Therefore, you should also consider disabling LRO if your latency-sensitive application relies on TCP. To do so for Linux guests, you need to reload the VMXNET3 driver in the guest:
shell# modprobe -r vmxnet3
Add the following line in /etc/modprobe.conf (Linux version dependent):
options vmxnet3 disable_lro=1
Then reload the driver using:
shell# modprobe vmxnet3
CSO (Checksum Offload)
Checksum Offload (CSO) or TCP Checksum Offloading (TCO) eliminates the host overhead introduced by check-summing for TCP packets. With checksum offloading enabled, checksum calculations are allowed on the NIC chipset. The following command provides information about the checksum offload settings on your ESXi host:
esxcli network nic cso get
VMDq / NetQueue
VMDq (Virtual Machine Device Queues) is the hardware feature; NetQueue is the feature baked into vSphere. Intel's Virtual Machine Device Queues (VMDq) is the hardware component used by VMware's NetQueue software feature since ESX 3.5. VMDq is a silicon-level technology that can offload the network I/O management burden from ESXi. Dynamic NetQueue was introduced with the release of ESXi 5.5. Multiple queues and sorting intelligence in the chipset support enhanced network traffic flow in the virtual environment and, by doing so, free processor cycles for application workloads. This improves efficiency in data transactions toward the destined VM and increases overall system performance. The VMDq feature, in collaboration with VMware Dynamic NetQueue, allows network packets to be distributed over different queues. Each queue gets its own ESXi thread for packet processing. One ESXi thread represents a CPU core. When data packets are received on the network adapter, a layer-2 classifier in the VMDq-enabled network controller sorts and determines which VM each packet is destined for. After setting the classifier, it places the packet in the receive queue assigned to that VM. ESXi is now only responsible for transferring the packets to the respective VM rather than doing the heavy lifting of sorting the packets on the incoming network streams. That is how VMDq and NetQueue manage to deliver efficiency for CPU utilization in your ESXi host. NetQueue is enabled by default when supported by the underlying network adapter.
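As an alternative to the modprobe.conf method above, on many current Linux distributions LRO can also be toggled at runtime with ethtool, provided the vmxnet3 driver in the guest exposes the feature. A minimal sketch, assuming the guest interface is named ens192:
ethtool -k ens192 | grep large-receive-offload
ethtool -K ens192 lro off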
  • 10. Net Filter Classes Following esxcli command gives information about the filters supported per vmnic and used by NetQueue. esxcli network nic queue filterclass list Net Queue Support Get netqueue support on VMkernel esxcli system settings kernel list | grep netNetqueueEnabled [root@esx21:~] esxcli system settings kernel list | grep netNetqueueEnabled netNetqueueEnabled Bool TRUE TRUE TRUE Enable/Disable NetQueue support. The output contains Name, Type, Configured, Runtime, Default, Description. Net Queue Count Following command is used to get netqueue count on a NICs esxcli network nic queue count get This will output the current queues for all vmnics in your ESXi host. List the load balancer settings List the load balancer settings of all the installed and loaded physical NICs. (S:supported, U:unsupported, N:not-applicable, A:allowed, D:disallowed). esxcli network nic queue loadbalancer list Details of netqueue balancer plugins Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the system esxcli network nic queue loadbalancer plugin list Net Queue balancer state Netqueue balancer state of all physical NICs currently installed and loaded on the system esxcli network nic queue loadbalancer state list RX/TX ring buffer current parameters The ring is the representation of the device RX/TX queue. It is used for data transfer between the kernel stack and the device. Get current RX/TX ring buffer parameters of a NIC esxcli network nic ring current get RX/TX ring buffer parameters max values The ring is the representation of the device RX/TX queue. It is used for data transfer between the kernel stack and the device. Get preset maximums for RX/TX ring buffer parameters of a NIC. esxcli network nic ring preset get -n vmnic0
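If the preset maximums are higher than the current values, the ring sizes can be raised per vmnic, which is one of the few areas where the two adapters in the comparison table differ. A minimal sketch, assuming ESXi 6.5/6.7 and vmnic0 as a placeholder (stay within the preset maximums reported by the previous command):
esxcli network nic ring preset get -n vmnic0
esxcli network nic ring current set -n vmnic0 -r 4096 -t 4096
esxcli network nic ring current get -n vmnic0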
  • 11. SG (Scatter and Gather) Scatter and Gather (Vectored I/O) is a concept that was primarily used in hard disks and it enhances large I/O request performance, if supported by the hardware. The best explanation I have found about Scatter and Gather is available at https://stackoverflow.com/questions/9770125/zero-copy- with-and-without-scatter-gather-operations. In general, it is about minimizing CPU cycles for I/O traffic. esxcli network nic sg get List software simulation settings List software simulation settings of physical NICs currently installed and loaded on the system. esxcli network nic software list RSS – 5-tuple hash queue load balancing What is RSS? Receive Side Scaling (RSS) has the same basic functionality that (Dynamic) NetQueue supports, it provides load balancing in processing received network packets. RSS resolves the single-thread bottleneck by allowing the receive side network packets from a pNIC to be shared across multiple CPU cores. The big difference between VMDq and RSS is that RSS uses more sophisticated filters to balance network I/O load over multiple threads. Depending on the pNIC and its RSS support, RSS can use up to a 5-tuple hash to determine the queues to create and distribute network IO’s over. A 5-tuple hash consists of the following data: o SourceIP o DestinationIP o Sourceport o Destinationport o Protocol Why use RSS? Receive Side Scaling (RSS) is a feature that allows network packets from a single NIC to be scheduled in parallel on multiple CPUs by creating multiple hardware queues. While this might increase network throughput for a NIC that receives packets at a high rate, it can also increase CPU overhead. When using certain 10Gb/s or 40Gb/s Ethernet physical NICs, ESXi allows the RSS capabilities of the physical NICs to be used by the virtual NICs. This can be especially useful in environments where a single MAC address gets large amounts of network traffic (for example VXLAN or network-intensive virtual appliances). Because of the potential for increased CPU overhead, this feature is disabled by default.
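As a practical note, if an offload shows up as enabled in the software simulation list above, the VMkernel is performing that offload in software instead of the NIC doing it in hardware, which costs CPU cycles. A quick side-by-side check of the hosts compared in this document might look like the following sketch (the vmnic names are placeholders and will differ per host):
esxcli network nic software list | grep -E "vmnic0|vmnic1"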
  • 12. RSS Design Considerations
Does it make sense to enable RSS? Usually not, because single-VM throughput between 3 and 6 Gbps is typically good enough, even if you use multiple 25 Gb or even 100 Gb NICs per ESXi host. What is the impact of enabling RSS end to end for all VMs in an ESXi host? Well, it would unlock the bandwidth for all VMs, but at the cost of additional physical CPU cycles and software threads in the VMkernel. It all depends on your particular requirements, but more often you should enable end-to-end RSS only for the particular VMs where huge network throughput is required, such as
• Virtual Machines used for NFV (Network Function Virtualization), like
o NSX-T Edge Nodes where VMware is providing NSX network functions like North-South routing, Load Balancing, NAT, etc.
o Virtualized Load Balancers
o Other vendors' virtual appliances for NFV
• Virtual Machines used for Software Defined Storage, where huge data transfers over the network are required
• Container hosts
• Virtualized Big Data engines
• Etc.
RSS Implementation details
The pNIC chipset matters because of the way the RSS feature scales its queues. Depending on the vendor and model, there are differences in how RSS is supported and how queues are scaled. Whether RSS is enabled in the pNIC driver and the VMkernel depends on the specific NIC driver. It needs to be enabled in the driver module, and it depends on the used driver module whether RSS parameters have to be set to enable it or the driver enables RSS by default. The problem with the driver module settings is that it is not always clear what values to use in the configuration. The description of the driver module parameters differs a lot among the various driver modules. That won't be a problem if the value of choice is either zero or one, but it is when you are expected to configure a certain number of queues. The RSS driver module settings are a perfect example of this. The driver details are given when executing the command
esxcli system module parameters list -m ixgbe | grep "RSS"
The number of queues configured determines how many CPU cycles can be consumed by incoming network traffic; therefore, RSS enablement should be decided based on specific requirements. If a single VM (vNIC) needs to receive large network throughput requiring more CPU cycles, RSS should be enabled on the ESXi host. If a single thread is enough even for the largest required throughput, RSS can be disabled. Now the obvious question is how to check that RSS is supported and enabled.
  • 13. 1. Check NIC adapter on VMware HCL (http://vmware.com/go/hcl) and look at particular driver version. See two examples below. Intel i40en Marvell - QLogic qedentv 2. List driver module parameters and check if there are some RSS related parameters Intel i40en driver do not list anything RSS related, probably because no support I have been assured by VMware Engineering that inbox driver i40en 1.9.5 does not support RSS How to validate RSS is enabled in VMkernel If you have running system, you can check the status of RSS by following command from ESXi shell vsish -e cat /net/pNics/vmnic1/rxqueues/info In figure below, you can see the command output for 1Gb Intel NIC not supporting NetQueue, therefore RSS is logically not supported as well, because it does not make any sense. Figure 1 Command to validate if RSS is enabled in VMkernel It seems, that some drivers enabling RSS by default and some others not. How to explicitly enable RSS in the NIC driver The procedure to enable RSS is always dependent on specific driver, because specific parameters have to be passed to driver module. The information how to enable RSS for particular driver should be written in specific NIC vendor documentation. Example for Intel ixgbe driver:
  • 14. vmkload_mod ixgbe RSS=”4″ To enable the feature on multiple Intel 82599EB SFI/SFP+ 10Gb/s NICs, include another comma- separated 4 for each additional NIC (for example, to enable the feature on three such NICs, you'd run vmkload_mod ixgbe RSS="4,4,4"). Example for Mellanox nmlx4driver: For Mellanox adapters, the RSS feature can be turned on by reloading the driver with num_rings_per_rss_queue=4. vmkload_mod nmlx4_en num_rings_per_rss_queue=4 NOTE: After loading the driver with vmkload_mod, you should make vmkdevmgr rediscover the NICs with the following command: kill -HUP ID … where ID is the process ID of the vmkdevmgr process RSS in Virtual Machine settings Additional advanced settings must be added into .vmx or advanced config of particular VM to enable mutlti-queue support. These settings are • ethernetX.pnicFeatures • ethernetX.ctxPerDev • ethernetX.udpRSS Let’s explain the purpose of each advanced setting above. To enable multi-queue (NetQueue RSS) in particular VM vNIC ethernetX.pnicFeatures = “4” To allow multiple (2 to 8 ) TX threads for particular VM vNIC ethernetX.ctxPerDev = “3” To boost RSS performance, the vSphere 6.7 release includes vmxnet3 version 4, which supports some new features, including Receive Side Scaling (RSS) for UDP, RSS for ESP, and offload for Geneve/VXLAN. Performance tests reveal significant improvement in throughput. ethernetX.udpRSS = “1” These improvements are beneficial for HPC financial service workloads that are sensitive to networking latency and bandwidth. The vSphere 6.7 release includes vmxnet3 version 4, which supports some new features. • RSS for UDP - Receive side scaling (RSS) for the user data protocol (UDP) is now available in the vmxnet3 v4 driver. Performance testing of this feature showed a 28% improvement in receive packets per second. The test used 64-byte packets and four receive queues. • RSS for ESP – RSS for encapsulating security payloads (ESP) is now available in the vmxnet3 v4 driver. Performance testing of this feature showed a 146% improvement in receive packets per second during a test that used IPSec and four receive queues. • Offload for Geneve/VXLAN – Generic network virtualization encapsulation (Geneve) and VXLAN offload is now available in the vmxnet3 v4 driver. Performance testing of this feature
  • 15. showed a 415% improvement in throughput in a test that used a packet size of 64 K with eight flows. Source: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance /whats-new-vsphere67-perf.pdf RSS in the Guest OS (vRSS) To fully make use of the RSS mechanism, an end-to-end implementation is recommended. That means you will need to enable and configure RSS in the guest OS in addition to the VMkernel driver module. Multi-queuing is enabled by default in Linux guest OS when the latest VMware tools version (version 1.0.24.0 or later) is installed or when the Linux VMXNET3 driver version 1.0.16.0-k or later is used. Prior to these versions, you were required to manually enable multi-queue or RSS support. Be sure to check the driver and version used to verify if your Linux OS has RSS support enabled by default. Driver version within linux OS can be checked by following command: # modinfo vmxnet3 You can determine the number of Tx and Rx queues allocated for a VMXNET3 driver on by running the ethtool console command in the Linux guest operating system: ethtool -S ens192 How to disable Netqueue RSS for particular driver Disabling Netqueue RSS is also driver specific. It can be done using driver module parameter as shown below. The example assumes there are four qedentv (QLogic NIC) instances. [root@host:~] esxcfg-module -g qedentv qedentv enabled = 1 options = '' [root@host:~] esxcfg-module -s "num_queues=0,0,0,0 RSS=0,0,0,0" qedentv [root@host:~] esxcfg-module -g qedentv qedentv enabled = 1 options = 'num_queues=0,0,0,0 RSS=0,0,0,0' Reboot the system for settings to take effect and will apply to all NICs managed by the qedentv driver. Source: https://kb.vmware.com/s/article/68147 How to disable RSS load balancing plugin in VMkernel Particular Netqueue load balancing plugin can be totally disabled in VMkernel. Here is an example how to disable RSS load balancing esxcli network nic queue loadbalancer set --rsslb=false -n vmnicX You can do it for example when toubleshooting some software issue with the load-based netqueue balancer module like described in VMware KB https://kb.vmware.com/s/article/58874
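In addition to ethtool -S, the per-queue interrupt vectors inside the Linux guest are a quick way to confirm that multiple vmxnet3 queues are really active. A minimal sketch, assuming the guest interface is ens192 (vmxnet3 typically registers one MSI-X vector per queue pair, so several ens192-rxtx-N lines indicate that multi-queue is in use):
grep ens192 /proc/interrupts
ls /sys/class/net/ens192/queues/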
  • 16. How to disable Netqueue in VMkernel Netqueue can be totally disabled in VMkernel esxcli system settings kernel set --setting="netNetqueueEnabled" --value="FALSE" It should be noted that disabling netqueue will result in some performance impact. The magnitude of the impact will depend on individual workloads and should be characterized before deploying the workaround in production. VMKernel multi-threading System performance is not only about the hardware capabilities but also about the software capabilities. To achieve higher throughput, parallel operations are required, which is in software achieved by multi-threading. RX Threads Each pNIC in ESXi host is equipped with one Rx (Netpoll) thread by default. ESXi VMkernel threads and queues without VMDq and NetQueue are depicted on the figure below. Figure 2 ESXi VMkernel threads and queues without VMDq and NetQueue
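Conversely, if NetQueue RSS was explicitly disabled this way and needs to be re-enabled later, the driver can be returned to its defaults, where RSS is enabled for qedentv according to the comparison table earlier in this document. A hedged sketch only (clearing the options string reverts the qedentv module to its default behavior; a reboot is again required for the change to take effect):
esxcfg-module -s "" qedentv
esxcfg-module -g qedentv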
  • 17. However, when the pNIC supports the NetQueue or RSS feature, the NetPoll threads are scaled. Dynamic NetQueue starts some number of RX threads (NetPoll threads) and shares them across all VMD queues until more CPU resources are required. If more resources are required, additional RX NetQueue (NetPoll) threads are automatically created. The reason for such behavior is to avoid wasting resources (CPU, RAM) unless it is necessary. In the figure below, you see the ESXi VMkernel with NetQueue and RSS enabled. When RSS is enabled end-to-end, multiple RX threads are leveraged per VMD/RSS queue. Such multi-threading can boost receive network traffic. Figure 3 VMkernel network threads on NICs supporting VMDq with RSS enabled. RSS is all about receive network traffic. However, the transmit part of the story should be considered as well. This is where TX threads come into play.
  • 18. TX Threads As you can see in the “Figure 3 VMkernel network threads on NICs supporting VMDq”, ESXi VMkernel is using only one Transmit (Tx) thread per VM by default. Advanced VM settings has to be used to scaling up Tx Threads and leverage more physical CPU Cores to generate higher transmit network throughput from particular VM. Figure 4 VMkernel network threads on NICs supporting VMDq /w RSS and one VM (VM2) configured for TX Thread scaling Without advanced VM settings for TX Thread scaling, you can see one transmit (TX) thread per VM vNIC (sys - NetWorld), even VM has for example 24 vCPUs as depicted in the output below. {"name": "vmname-example-01.eth1", "switch": "DvsPortset-0", "id": 50331663, "mac": "00:50:56:9a:41:6e", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 7872, "txmbps": 30.7, "txsize": 488, "txeps": 0.00, "rxpps": 7161, "rxmbps": 23.2, "rxsize": 404, "rxeps": 0.00, "vnic": { "type": "vmxnet3", "ring1sz": 1024, "ring2sz": 256, "tsopct": 0.1, "tsotputpct": 1.4, "txucastpct": 100.0, "txeps": 0.0, "lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 100.0, "rxeps": 0.0, "maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0, "txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0}, "rxqueue": { "count": 8}, "txqueue": { "count": 8}, "intr": { "count": 9 }, "sys": [ "2106431" ], "vcpu": [ "2106668", "2106670", "2106671", "2106672", "2106673", "2106674", "2106675", "2106676", "2106677", "2106678", "2106679", "2106680", "2106681", "2106682", "2106683", "2106684", "2106685", "2106686", "2106687", "2106688", "2106689", "2106690", "2106691", "2106692" ]}, Multi-threading configuration procedure
  • 19. To take full advantage of the network adapter's RSS capabilities, you must enable Receive Side Scaling (RSS) end-to-end:
• in the ESXi host (NIC driver setting), to balance the CPU load across multiple Receive (RX) threads leveraging multiple cores
• in the particular VM where RSS is required
You may ask why such a double-step configuration is required. The reason for such explicit configuration is to avoid unwanted system overhead. Usually, RSS is not required for all VMs, but only for demanding VMs. Here is the procedure to enable RSS multi-threading.
Prerequisite: RSS must be enabled on the ESXi NIC driver module. For details see the sections "How to validate RSS is enabled in VMkernel" and "How to explicitly enable RSS in the NIC driver".
1. Enable RSS explicitly in the NIC driver module (see the section referenced above).
2. Additional advanced settings must be added into the .vmx or advanced config of the particular VM:
# Enable multi-queue (NetQueue RSS)
ethernetX.pnicFeatures = "4"
# Allow multiple TX threads for a single VM vNIC
ethernetX.ctxPerDev = "3"
Acceptable values for ethernetX.ctxPerDev are 1, 2, or 3, where:
• Setting it to 1 results in 1 CPU thread per device (vNIC)
• It is set to 2 as the default value, which means 1 transmit CPU thread per VM
• Setting it to 3 results in 2 to 8 CPU threads per device (vNIC)
The default ethernetX.ctxPerDev value is 2, which means only one transmit (Tx) thread per whole VM. For a VM with RSS, we want to use the value 3, because even though we have a single vNIC object in the virtual machine from the vSphere administrator perspective, we want more TX threads (up to 8). The real number of TX threads depends on link speed. For a 100Gb link, the number of TX queues is 8.
VMkernel software threads per VMNIC can be identified by the two following commands:
net-stats -A -t vW
vsish
  • 20. /> cat /world/<WORLD-ID-1-IN-VMNIC>/name /> cat /world/<WORLD-ID-2-IN-VMNIC>/name /> cat /world/<WORLD-ID-3-IN-VMNIC>/name … /> cat /world/<WORLD-ID-n-IN-VMNIC>/name net-stats -A -t vW command displays the VMkernel network threads [root@esx21:~] net-stats -A -t vW Below is the sample net-stat -A -t vW output form ESXi host for educational purposes. To check if multiple software treads are used for vmnic0, use highlighted information below. [root@esx21:~] net-stats -A -t vW { "sysinfo": { "hostname": "esx21.home.uw.cz" }, "stats": [ { "time": 1602715000, "interval": 10, "iteration": 0, "ports":[ {"name": "vmnic0", "switch": "DvsPortset-1", "id": 33554434, "mac": "90:b1:1c:13:fc:14", "rxmode": 0, "tunemode": 2, "uplink": "true", "ens": false, "txpps": 79, "txmbps": 0.9, "txsize": 1446, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.2, "rxsize": 1881, "rxeps": 0.00, "vmnic": {"devname": "vmnic0.ntg3", "txpps": 79, "txmbps": 0.9, "txsize": 1500, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.3, "rxsize": 1959, "rxeps": 0.00 }, "sys": [ "2097643", "2097863", "2097864" ]}, {"name": "vmnic1", "switch": "DvsPortset-1", "id": 33554436, "mac": "90:b1:1c:13:fc:15", "rxmode": 0, "tunemode": 2, "uplink": "true", "ens": false, "txpps": 35, "txmbps": 1.5, "txsize": 5318, "txeps": 0.00, "rxpps": 54, "rxmbps": 0.2, "rxsize": 409, "rxeps": 0.00, "vmnic": {"devname": "vmnic1.ntg3", "txpps": 35, "txmbps": 1.6, "txsize": 5955, "txeps": 0.00, "rxpps": 54, "rxmbps": 0.2, "rxsize": 479, "rxeps": 0.00 }, "sys": [ "2097644", "2097867", "2097868" ]}, {"name": "vmk0", "switch": "DvsPortset-1", "id": 33554438, "mac": "90:b1:1c:13:fc:14", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 36, "txmbps": 1.5, "txsize": 5130, "txeps": 0.00, "rxpps": 46, "rxmbps": 0.2, "rxsize": 475, "rxeps": 0.00, "ipv4": { "txpps": 119, "txeps": 0, "txfrags": 0, "rxpps": 118, "rxdeliv": 118, "rxeps": 0, "rxreass": 0 }, "ipv6": { "txpps": 0, "txeps": 0, "txfrags": 0, "rxpps": 0, "rxdeliv": 0, "rxeps": 0, "rxreass": 0 }, "tcptx": { "pps": 108, "datapct": 73.4, "mbps": 2.4, "size": 2753, "delackpct": 30.8, "rexmit": 0.0, "sackrexmit": 0.0, "winprb": 0.0, "winupd": 0.8 }, "tcprx": { "pps": 115, "datapct": 70.6, "mbps": 1.4, "size": 1467, "dups": 0.0, "oo": 0.0, "winprb": 0.0, "winupd": 1.0, "othereps": 0.0}, "udp": {"txpps": 10, "rxpps": 3, "rxsockeps": 0.0, "rxothereps": 0.0}, "sys": [ "2097832", "2097833" ]}, {"name": "vmk1", "switch": "DvsPortset-1", "id": 33554439, "mac": "00:50:56:6e:d1:d0", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, "sys": [ "2097834", "2097835" ]}, {"name": "vmk2", "switch": "DvsPortset-1", "id": 33554440, "mac": "00:50:56:60:e9:3a", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, "sys": [ "2097836", "2097837" ]}, {"name": "vmk3", "switch": "DvsPortset-1", "id": 33554441, "mac": "00:50:56:65:4c:04", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 1, "txmbps": 0.0, "txsize": 678, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 114, "rxeps": 0.00, "sys": [ "2097838", "2097839" ]}, {"name": "vmk4", "switch": "DvsPortset-1", "id": 33554442, "mac": "00:50:56:6e:e8:f2", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, 
"sys": [ "2097840", "2097841" ]}, {"name": "vmk5", "switch": "DvsPortset-1", "id": 33554443, "mac": "00:50:56:6c:28:c7", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false,
  • 21. "txpps": 71, "txmbps": 0.9, "txsize": 1564, "txeps": 0.00, "rxpps": 64, "rxmbps": 1.2, "rxsize": 2278, "rxeps": 0.00, "sys": [ "2097842", "2097843" ]}, {"name": "openmanage_enterprise.x86_64-0.0.1.eth0", "switch": "DvsPortset-1", "id": 33554444, "mac": "00:50:56:92:d9:c8", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 3, "rxmbps": 0.0, "rxsize": 60, "rxeps": 0.00, "vnic": { "type": "vmxnet3", "ring1sz": 1024, "ring2sz": 256, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0, "lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0, "maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0, "txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0}, "rxqueue": { "count": 4}, "txqueue": { "count": 4}, "intr": { "count": 5 }, "sys": [ "2101072" ], "vcpu": [ "2101082", "2101084", "2101085", "2101086" ]}, {"name": "flb-mgr.eth0", "switch": "DvsPortset-1", "id": 33554445, "mac": "00:50:56:92:6f:3e", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, "vnic": { "type": "vmxnet3", "ring1sz": 256, "ring2sz": 256, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0, "lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0, "maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0, "txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0}, "rxqueue": { "count": 1}, "txqueue": { "count": 1}, "intr": { "count": 2 }, "sys": [ "2101254" ], "vcpu": [ "2101271", "2101273" ]}, {"name": "flb-node-001.eth0", "switch": "DvsPortset-1", "id": 33554446, "mac": "00:50:56:92:55:fa", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, "vnic": { "type": "vmxnet3", "ring1sz": 256, "ring2sz": 256, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0, "lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0, "maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0, "txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0}, "rxqueue": { "count": 1}, "txqueue": { "count": 1}, "intr": { "count": 2 }, "sys": [ "2101257" ], "vcpu": [ "2101274", "2101276" ]}, {"name": "W2K8R2-diag02.eth0", "switch": "DvsPortset-1", "id": 33554447, "mac": "00:50:56:92:f3:88", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, "sys": [ "2101283" ], "vcpu": [ "2101290", "2101292" ]}, {"name": "Photon-01-ch01.home.uw.cz.eth0", "switch": "DvsPortset-1", "id": 33554448, "mac": "00:50:56:a8:49:0e", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 3, "rxmbps": 0.0, "rxsize": 68, "rxeps": 0.00, "vnic": { "type": "vmxnet3", "ring1sz": 256, "ring2sz": 128, "tsopct": 0.0, "tsotputpct": 0.0, "txucastpct": 0.0, "txeps": 0.0, "lropct": 0.0, "lrotputpct": 0.0, "rxucastpct": 0.0, "rxeps": 0.0, "maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, 
"dropsByBurstQ": 0.0, "txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0}, "rxqueue": { "count": 1}, "txqueue": { "count": 1}, "intr": { "count": 2 }, "sys": [ "2101597" ], "vcpu": [ "2101746" ]}, {"name": "FreeBSD-01-is02.home.uw.cz.eth0", "switch": "DvsPortset-1", "id": 33554449, "mac": "00:0c:29:fd:04:87", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 2, "txmbps": 0.0, "txsize": 124, "txeps": 0.00, "rxpps": 5, "rxmbps": 0.0, "rxsize": 74, "rxeps": 0.00, "sys": [ "2101595" ], "vcpu": [ "2101748" ]}, {"name": "FreeBSD-01-is02.home.uw.cz.eth1", "switch": "DvsPortset-1", "id": 33554450, "mac": "00:50:56:92:ad:99", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 0, "txmbps": 0.0, "txsize": 0, "txeps": 0.00, "rxpps": 0, "rxmbps": 0.0, "rxsize": 0, "rxeps": 0.00, "sys": [ "2101595" ],
  • 22. "vcpu": [ "2101748" ]}, {"name": "vc01.home.uw.cz.eth0", "switch": "DvsPortset-1", "id": 33554453, "mac": "00:0c:29:8f:9e:1e", "rxmode": 0, "tunemode": 0, "uplink": "false", "ens": false, "txpps": 11, "txmbps": 0.0, "txsize": 267, "txeps": 0.00, "rxpps": 12, "rxmbps": 0.0, "rxsize": 505, "rxeps": 0.00, "vnic": { "type": "vmxnet3", "ring1sz": 4096, "ring2sz": 4096, "tsopct": 0.9, "tsotputpct": 13.3, "txucastpct": 99.1, "txeps": 0.0, "lropct": 9.9, "lrotputpct": 72.7, "rxucastpct": 71.9, "rxeps": 0.0, "maxqueuelen": 0, "requeuecnt": 0.0, "agingdrpcnt": 0.0, "deliveredByBurstQ": 0.0, "dropsByBurstQ": 0.0, "txdisc": 0.0, "qstop": 0.0, "txallocerr": 0.0, "txtsosplit": 0.0, "r1full": 0.0, "r2full": 0.0, "sgerr": 0.0}, "rxqueue": { "count": 4}, "txqueue": { "count": 4}, "intr": { "count": 5 }, "sys": [ "2104032" ], "vcpu": [ "2104245", "2104248", "2104249", "2104250" ]} ], "storage": { }, "vcpus": { "2101082": {"id": 2101082, "used": 2.29, "ready": 0.14, "cstp": 0.00, "name": "vmx-vcpu-0:openmanage_enterprise.x86_64-0.0.1", "sys": 0.00, "sysoverlap": 0.02, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2101084": {"id": 2101084, "used": 1.43, "ready": 0.06, "cstp": 0.00, "name": "vmx-vcpu-1:openmanage_enterprise.x86_64-0.0.1", "sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2101085": {"id": 2101085, "used": 2.06, "ready": 0.09, "cstp": 0.00, "name": "vmx-vcpu-2:openmanage_enterprise.x86_64-0.0.1", "sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2101086": {"id": 2101086, "used": 1.54, "ready": 0.07, "cstp": 0.00, "name": "vmx-vcpu-3:openmanage_enterprise.x86_64-0.0.1", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2101271": {"id": 2101271, "used": 0.24, "ready": 0.04, "cstp": 0.00, "name": "vmx-vcpu-0:flb-mgr", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101273": {"id": 2101273, "used": 0.11, "ready": 0.02, "cstp": 0.00, "name": "vmx-vcpu-1:flb-mgr", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101274": {"id": 2101274, "used": 0.24, "ready": 0.04, "cstp": 0.00, "name": "vmx-vcpu-0:flb-node-001", "sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101276": {"id": 2101276, "used": 0.21, "ready": 0.02, "cstp": 0.00, "name": "vmx-vcpu-1:flb-node-001", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101290": {"id": 2101290, "used": 0.76, "ready": 0.07, "cstp": 0.00, "name": "vmx-vcpu-0:W2K8R2-diag02", "sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0, 
"latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101292": {"id": 2101292, "used": 0.19, "ready": 0.09, "cstp": 0.00, "name": "vmx-vcpu-1:W2K8R2-diag02", "sys": 0.00, "sysoverlap": 0.01, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 5, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101746": {"id": 2101746, "used": 0.55, "ready": 0.04, "cstp": 0.00, "name": "vmx-vcpu-0:Photon-01-ch01.home.uw.cz", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 4, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2101748": {"id": 2101748, "used": 0.51, "ready": 0.06, "cstp": 0.00, "name": "vmx-vcpu-0:FreeBSD-01-is02.home.uw.cz", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0, "latencySensitivity": -3, "exclaff": -1, "relations": [], "vectors": [] }, "2104245": {"id": 2104245, "used": 11.60, "ready": 0.21, "cstp": 0.00, "name": "vmx-vcpu-0:vc01.home.uw.cz", "sys": 0.00, "sysoverlap": 0.04, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 8, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2104248": {"id": 2104248, "used": 13.66, "ready": 0.16, "cstp": 0.00, "name": "vmx-vcpu-1:vc01.home.uw.cz", "sys": 0.00, "sysoverlap": 0.05, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2104249": {"id": 2104249, "used": 12.54, "ready": 0.16, "cstp": 0.00, "name": "vmx-vcpu-2:vc01.home.uw.cz", "sys": 0.00, "sysoverlap": 0.05, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2104250": {"id": 2104250, "used": 13.03, "ready": 0.18, "cstp": 0.00, "name": "vmx-vcpu-3:vc01.home.uw.cz", "sys": 0.00, "sysoverlap": 0.05, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "3550389": {"id": 3550389, "used": 0.01, "ready": 0.00, "cstp": 0.00, "name": "net-stats", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0 } }, "sys": { "2097643": {"id": 2097643, "used": 0.19, "ready": 0.05, "cstp": 0.00, "name": "vmnic0-pollWorld-0", "sys": 0.00, "sysoverlap": 0.11, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0,
  • 23. "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097644": {"id": 2097644, "used": 0.09, "ready": 0.02, "cstp": 0.00, "name": "vmnic1-pollWorld-0", "sys": 0.00, "sysoverlap": 0.04, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097832": {"id": 2097832, "used": 0.08, "ready": 0.01, "cstp": 0.00, "name": "vmk0-rx-0", "sys": 0.00, "sysoverlap": 0.04, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097833": {"id": 2097833, "used": 0.05, "ready": 0.02, "cstp": 0.00, "name": "vmk0-tx", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097834": {"id": 2097834, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk1-rx-0", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097835": {"id": 2097835, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk1-tx", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097836": {"id": 2097836, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk2-rx-0", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097837": {"id": 2097837, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk2-tx", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097838": {"id": 2097838, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk3-rx-0", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097839": {"id": 2097839, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk3-tx", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097840": {"id": 2097840, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk4-rx-0", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097841": {"id": 2097841, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "vmk4-tx", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097842": {"id": 2097842, "used": 0.11, "ready": 0.02, "cstp": 0.00, "name": "vmk5-rx-0", "sys": 0.00, "sysoverlap": 0.07, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097843": {"id": 2097843, "used": 0.08, "ready": 0.03, "cstp": 0.00, "name": "vmk5-tx", "sys": 0.00, "sysoverlap": 0.00, 
"limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 3, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097863": {"id": 2097863, "used": 0.05, "ready": 0.04, "cstp": 0.00, "name": "hclk-sched-vmnic0", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 6, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097864": {"id": 2097864, "used": 0.00, "ready": 0.01, "cstp": 0.00, "name": "hclk-watchdog-vmnic0", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2097867": {"id": 2097867, "used": 0.02, "ready": 0.02, "cstp": 0.00, "name": "hclk-sched-vmnic1", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 2, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2097868": {"id": 2097868, "used": 0.00, "ready": 0.01, "cstp": 0.00, "name": "hclk-watchdog-vmnic1", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 1, "miginterl3": 0, "latencySensitivity": 0, "exclaff": -1, "relations": [], "vectors": [] }, "2101283": {"id": 2101283, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101282", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2101072": {"id": 2101072, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101071", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2101254": {"id": 2101254, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101253", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2101257": {"id": 2101257, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101256", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2101595": {"id": 2101595, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101594", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2101597": {"id": 2101597, "used": 0.00, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2101596", "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] }, "2104032": {"id": 2104032, "used": 0.02, "ready": 0.00, "cstp": 0.00, "name": "NetWorld-VM-2104031",
  • 24. "sys": 0.00, "sysoverlap": 0.00, "limited": 0.00, "vmkcall": 0.00, "actnot": 0.00, "migtot": 0, "miginterl3": 0, "latencySensitivity": -6, "exclaff": -1, "relations": [], "vectors": [] } }, "cpu": { "topology": { "core": 2, "llc": 12, "package": 12}, "used": [ 9.26, 5.53, 3.48, 6.41, 2.46, 6.57, 5.46, 4.98, 7.04, 3.11, 4.47, 5.17, 63.92], "wdt": [ 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00], "sys": [ 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00], "vcpu": [ 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00]}, "overhead": {"vcpu": [ "3550389" ] } } ] } Now we have to check what kind of threads these are. The net-stats entry for vmnic0 lists their world IDs: {"name": "vmnic0", "switch": "DvsPortset-1", "id": 33554434, "mac": "90:b1:1c:13:fc:14", "rxmode": 0, "tunemode": 2, "uplink": "true", "ens": false, "txpps": 79, "txmbps": 0.9, "txsize": 1446, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.2, "rxsize": 1881, "rxeps": 0.00, "vmnic": {"devname": "vmnic0.ntg3", "txpps": 79, "txmbps": 0.9, "txsize": 1500, "txeps": 0.00, "rxpps": 80, "rxmbps": 1.3, "rxsize": 1959, "rxeps": 0.00 }, "sys": [ "2097643", "2097863", "2097864" ]}, Use vsish to display the names of these worlds and identify whether they are network poll worlds. In the output below we see only one poll world used for vmnic0. [root@esx21:~] vsish /> cat /world/2097643/name vmnic0-pollWorld-0 /> cat /world/2097863/name hclk-sched-vmnic0 /> cat /world/2097864/name hclk-watchdog-vmnic0 Here we see only one pollWorld (Rx thread) because this physical NIC does not support VMDq/NetQueue. If the NetQueue feature works properly, we should see multiple software threads for a VMNIC with multiple hardware queues. This is the output from the ESXi host with the Intel X710, where we have 16 poll worlds (Rx threads). [root@czchoes595:~] vsish /> cat /world/2098742/name vmnic1-pollWorld-0 /> cat /world/2098743/name vmnic1-pollWorld-1 /> cat /world/2098744/name vmnic1-pollWorld-2 /> cat /world/2098745/name vmnic1-pollWorld-3 /> cat /world/2098746/name vmnic1-pollWorld-4 /> cat /world/2098747/name vmnic1-pollWorld-5
  • 25. /> cat /world/2098748/name vmnic1-pollWorld-6 /> cat /world/2098749/name vmnic1-pollWorld-7 /> cat /world/2098750/name vmnic1-pollWorld-8 /> cat /world/2098751/name vmnic1-pollWorld-9 /> cat /world/2098752/name vmnic1-pollWorld-10 /> cat /world/2098753/name vmnic1-pollWorld-11 /> cat /world/2098754/name vmnic1-pollWorld-12 /> cat /world/2098755/name vmnic1-pollWorld-13 /> cat /world/2098756/name vmnic1-pollWorld-14 /> cat /world/2098757/name vmnic1-pollWorld-15 /> cat /world/2099014/name hclk-sched-vmnic1 /> cat /world/2099015/name hclk-watchdog-vmnic1
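As a quick cross-check of the number of Rx poll worlds per uplink, the world list can also be walked programmatically instead of cat-ing each world by hand. The following is only a rough sketch, assuming poll-world names follow the vmnicX-pollWorld-N pattern shown above (it iterates all worlds, so it can take a moment on a busy host):
# Count pollWorld threads per uplink (sketch)
for wid in $(vsish -e ls /world/ | tr -d '/'); do
  vsish -e cat /world/$wid/name 2>/dev/null
done | grep pollWorld | cut -d- -f1 | sort | uniq -c
On the QLogic host this should report a single pollWorld per vmnic, while on the Intel X710 host it should report 16 per vmnic, matching the listings above.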
  • 26. Diagnostic commands In this section we document the diagnostic commands that should be run on each system to understand the implementation details of NIC offload capabilities and network traffic queueing. ESXCLI commands are documented in the ESXCLI command reference: https://code.vmware.com/docs/11743/esxi-7-0-esxcli-command-reference/namespace/esxcli_network.html For further detail, watch the VMkernel log while executing the commands below, as the NIC driver can log additional interesting information there. tail -f /var/log/vmkernel.log
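To keep the per-host collection repeatable, the commands from the following sections can be wrapped in a small script. This is only a sketch; the output file name and the exact command set are illustrative and can be extended as needed:
# Collect basic NIC diagnostics from every uplink into one file (sketch)
LOG=/tmp/nic-diag-$(hostname).txt
esxcli system version get > $LOG
esxcli network nic queue count get >> $LOG
for NIC in $(esxcli network nic list | awk 'NR>2 {print $1}'); do
  echo "=== $NIC ===" >> $LOG
  esxcli network nic get -n $NIC >> $LOG
  # Some drivers do not implement the ring namespace; an error here is harmless
  esxcli network nic ring current get -n $NIC >> $LOG
done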
  • 27. Intel X710 diagnostic command outputs ESXi Inventory - Intel details Collect hardware and ESXi inventory details. esxcli system version get esxcli hardware platform get esxcli hardware cpu global get smbiosDump WebBrowser https://192.168.4.121/cgi-bin/esxcfg-info.cgi HPE ProLiant DL560 Gen10 | BIOS: U34 | Date (ISO-8601): 2020-04-08 VMware ESXi 6.7.0 build-16075168 (6.7 U3) NIC Model: Intel(R) Ethernet Controller X710 for 10GbE SFP+ 2 NICs (vmnic1, vmnic3) in UP state
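In addition to the inventory commands above, the link state and driver binding of all uplinks can be confirmed in a single view (this command is not part of the original command set and is added here only for convenience):
esxcli network nic list
esxcli network nic list | grep -E 'vmnic1|vmnic3'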
  • 28. Driver information NIC inventory esxcli network nic get -n <VMNIC> [root@czchoes595:~] esxcli network nic get -n vmnic1 Advertised Auto Negotiation: true Advertised Link Modes: Auto, 10000BaseSR/Full Auto Negotiation: true Cable Type: FIBRE Current Message Level: 0 Driver Info: Bus Info: 0000:11:00:0 Driver: i40en Firmware Version: 10.51.5 Version: 1.9.5 Link Detected: true Link Status: Up Name: vmnic1 PHYAddress: 0 Pause Autonegotiate: false Pause RX: false Pause TX: false Supported Ports: FIBRE Supports Auto Negotiation: true Supports Pause: true Supports Wakeon: false Transceiver: Virtual Address: 00:50:56:57:f7:b4 Wakeon: None NIC device info vmkchdev -l | grep vmnic VID DID SVID SDID 8086 1572 103c 22fd To list all VIB modules and understand which drivers are “Inbox” (native VMware) and which are “Async” (from partners such as Intel or Marvell/QLogic): esxcli software vib list [root@czchoes595:~] esxcli software vib list Name Version Vendor Acceptance Level Install Date ----------------------------- ---------------------------------- --------- ---------------- ------------ lsi-mr3 7.706.08.00-1OEM.670.0.0.8169922 Avago VMwareCertified 2020-09-15 bnxtnet 214.0.230.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15 bnxtroce 214.0.187.0-1OEM.670.0.0.8169922 BCM VMwareCertified 2020-09-15 elx-esx-libelxima-8169922.so 12.0.1188.0-03 ELX VMwareCertified 2020-09-15 brcmfcoe 12.0.1278.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 elxiscsi 12.0.1188.0-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 elxnet 12.0.1216.4-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 lpfc 12.4.270.6-1OEM.670.0.0.8169922 EMU VMwareCertified 2020-09-15 amsd 670.11.5.0-16.7535516 HPE PartnerSupported 2020-09-15 bootcfg 6.7.0.02-06.00.14.7535516 HPE PartnerSupported 2020-09-15 conrep 6.7.0.03-04.00.34.7535516 HPE PartnerSupported 2020-09-15 cru 670.6.7.10.14-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15 fc-enablement 670.3.50.16-7535516 HPE PartnerSupported 2020-09-15
  • 29. hponcfg 6.7.0.5.5-0.18.7535516 HPE PartnerSupported 2020-09-15 ilo 670.10.2.0.2-1OEM.670.0.0.7535516 HPE PartnerSupported 2020-09-15 oem-build 670.U3.10.5.5-7535516 HPE PartnerSupported 2020-09-15 scsi-hpdsa 5.5.0.68-1OEM.550.0.0.1331820 HPE PartnerSupported 2020-09-15 smx-provider 670.03.16.00.3-7535516 HPE VMwareAccepted 2020-09-15 ssacli 4.17.6.0-6.7.0.7535516.hpe HPE PartnerSupported 2020-09-15 sut 6.7.0.2.5.0.0-83 HPE PartnerSupported 2020-09-15 testevent 6.7.0.02-01.00.12.7535516 HPE PartnerSupported 2020-09-15 i40en 1.9.5-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15 igbn 1.4.10-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15 ixgben 1.7.20-1OEM.670.0.0.8169922 INT VMwareCertified 2020-09-15 nmlx5-core 4.17.15.16-1OEM.670.0.0.8169922 MEL VMwareCertified 2020-09-15 nmlx5-rdma 4.17.15.16-1OEM.670.0.0.8169922 MEL VMwareCertified 2020-09-15 nmst 4.12.0.105-1OEM.650.0.0.4598673 MEL PartnerSupported 2020-09-15 smartpqi 1.0.4.3008-1OEM.670.0.0.8169922 MSCC VMwareCertified 2020-09-15 nhpsa 2.0.44-1OEM.670.0.0.8169922 Microsemi VMwareCertified 2020-09-15 qcnic 1.0.27.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qedentv 3.11.16.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qedf 1.3.41.0-1OEM.600.0.0.2768847 QLC VMwareCertified 2020-09-15 qedi 2.10.19.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qedrntv 3.11.16.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qfle3 1.0.87.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qfle3f 1.0.75.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qfle3i 1.0.25.0-1OEM.670.0.0.8169922 QLC VMwareCertified 2020-09-15 qlnativefc 2.1.81.0-1OEM.600.0.0.2768847 QLogic VMwareCertified 2020-09-16 ata-libata-92 3.00.9.2-16vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-amd 0.3.10-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-atiixp 0.4.6-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-cmd64x 0.2.5-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-hpt3x2n 0.3.4-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-pdc2027x 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-serverworks 0.4.3-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-sil680 0.4.8-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ata-pata-via 0.3.3-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 block-cciss 3.6.14-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 char-random 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ehci-ehci-hcd 1.0-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 hid-hid 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 iavmd 1.2.0.1011-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ima-qla4xxx 2.02.18-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 ipmi-ipmi-devintf 39.1-5vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 ipmi-ipmi-msghandler 39.1-5vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 ipmi-ipmi-si-drv 39.1-5vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 iser 1.0.0.0-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 lpnic 11.4.59.0-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 lsi-msgpt2 20.00.06.00-2vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15 lsi-msgpt35 09.00.00.00-5vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15 lsi-msgpt3 17.00.02.00-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15 misc-drivers 6.7.0-2.48.13006603 VMW VMwareCertified 2020-09-15 mtip32xx-native 3.9.8-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 ne1000 
0.8.4-2vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15 nenic 1.0.29.0-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15 net-cdc-ether 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-e1000 8.0.3.1-5vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-e1000e 3.2.2.1-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-enic 2.1.2.38-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-fcoe 1.0.29.9.3-7vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-forcedeth 0.61-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-libfcoe-92 1.0.24.9.4-8vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-mlx4-core 1.9.7.0-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-mlx4-en 1.9.7.0-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-nx-nic 5.0.621-5vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-tg3 3.131d.v60.4-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-usbnet 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 net-vmxnet3 1.1.3.0-3vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16 nfnic 4.0.0.44-0vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16 nmlx4-core 3.17.13.1-1vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15 nmlx4-en 3.17.13.1-1vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15 nmlx4-rdma 3.17.13.1-1vmw.670.2.48.13006603 VMW VMwareCertified 2020-09-15 ntg3 4.1.3.2-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 nvme 1.2.2.28-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15 nvmxnet3-ens 2.0.0.21-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 nvmxnet3 2.0.0.29-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 ohci-usb-ohci 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 pvscsi 0.1-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 qflge 1.1.0.11-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-ahci 3.0-26vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-ata-piix 2.12-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-sata-nv 3.5-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-sata-promise 2.12-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-sata-sil24 1.1-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-sata-sil 2.3-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sata-sata-svw 2.3-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-aacraid 1.1.5.1-9vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-adp94xx 1.0.8.12-6vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-aic79xx 3.1-6vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-fnic 1.5.0.45-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15
  • 30. scsi-ips 7.12.05-4vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-iscsi-linux-92 1.0.0.2-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-libfc-92 1.0.40.9.3-5vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-megaraid-mbox 2.20.5.1-6vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-megaraid-sas 6.603.55.00-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-megaraid2 2.00.4-9vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-mpt2sas 19.00.00.00-2vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-mptsas 4.23.01.00-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-mptspi 4.23.01.00-10vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 scsi-qla4xxx 5.01.03.2-7vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 sfvmk 1.0.0.1003-7vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16 shim-iscsi-linux-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-iscsi-linux-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-libata-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-libata-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-libfc-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-libfc-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-libfcoe-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-libfcoe-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-vmklinux-9-2-1-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-vmklinux-9-2-2-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 shim-vmklinux-9-2-3-0 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 uhci-usb-uhci 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 usb-storage-usb-storage 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 usbcore-usb 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 vmkata 0.1-1vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 vmkfcoe 1.0.0.1-1vmw.670.1.28.10302608 VMW VMwareCertified 2020-09-15 vmkplexer-vmkplexer 6.7.0-0.0.8169922 VMW VMwareCertified 2020-09-15 vmkusb 0.1-1vmw.670.3.104.16075168 VMW VMwareCertified 2020-09-16 vmw-ahci 1.2.8-1vmw.670.3.73.14320388 VMW VMwareCertified 2020-09-15 xhci-xhci 1.0-3vmw.670.0.0.8169922 VMW VMwareCertified 2020-09-15 cpu-microcode 6.7.0-3.77.15018017 VMware VMwareCertified 2020-09-15 esx-base 6.7.0-3.104.16075168 VMware VMwareCertified 2020-09-16 esx-dvfilter-generic-fastpath 6.7.0-0.0.8169922 VMware VMwareCertified 2020-09-15 esx-ui 1.33.7-15803439 VMware VMwareCertified 2020-09-16 esx-update 6.7.0-3.104.16075168 VMware VMwareCertified 2020-09-16 esx-xserver 6.7.0-3.73.14320388 VMware VMwareCertified 2020-09-15 lsu-hp-hpsa-plugin 2.0.0-16vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15 lsu-intel-vmd-plugin 1.0.0-2vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15 lsu-lsi-drivers-plugin 1.0.0-1vmw.670.2.48.13006603 VMware VMwareCertified 2020-09-15 lsu-lsi-lsi-mr3-plugin 1.0.0-13vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15 lsu-lsi-lsi-msgpt3-plugin 1.0.0-9vmw.670.2.48.13006603 VMware VMwareCertified 2020-09-15 lsu-lsi-megaraid-sas-plugin 1.0.0-9vmw.670.0.0.8169922 VMware VMwareCertified 2020-09-15 lsu-lsi-mpt2sas-plugin 2.0.0-7vmw.670.0.0.8169922 VMware VMwareCertified 2020-09-15 lsu-smartpqi-plugin 1.0.0-3vmw.670.1.28.10302608 VMware VMwareCertified 2020-09-15 native-misc-drivers 6.7.0-3.89.15160138 VMware VMwareCertified 2020-09-15 rste 2.0.2.0088-7vmw.670.0.0.8169922 VMware VMwareCertified 2020-09-15 vmware-esx-esxcli-nvme-plugin 1.2.0.36-2.48.13006603 VMware VMwareCertified 
2020-09-15 vmware-fdm 6.7.0-16708996 VMware VMwareCertified 2020-09-20 vsan 6.7.0-3.104.15985001 VMware VMwareCertified 2020-09-16 vsanhealth 6.7.0-3.104.15984994 VMware VMwareCertified 2020-09-16 tools-light 11.0.5.15389592-15999342 VMware VMwareCertified 2020-09-16 Driver module settings Identify NIC driver module name esxcli network nic get -n vmnic0 Show driver module parameters esxcli system module parameters list -m <DRIVER-MODULE-NAME> [root@czchoes595:~] esxcli system module parameters list -m i40en Name Type Value Description ------------- ------------ ----- ---------------------------------------------------------------------------------- EEE array of int Energy Efficient Ethernet feature (EEE): 0 = disable, 1 = enable, (default = 1) LLDP array of int Link Layer Discovery Protocol (LLDP) agent: 0 = disable, 1 = enable, (default = 1) RxITR int Default RX interrupt interval (0..0xFFF), in microseconds (default = 50) TxITR int Default TX interrupt interval (0..0xFFF), in microseconds, (default = 100) VMDQ array of int Number of Virtual Machine Device Queues: 0/1 = disable, 2-16 enable (default =8) max_vfs array of int Maximum number of VFs to be enabled (0..128) trust_all_vfs array of int Always set all VFs to trusted mode 0 = disable (default), other = enable
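The VMDQ module parameter listed above controls how many VMDq/NetQueue queues the i40en driver exposes per port. If it ever had to be changed, the general pattern would be the following (a sketch only; the values are illustrative, one value per port, and a host reboot is required for module parameters to take effect):
# Example with illustrative values: request 8 VMDq queues on each of the two X710 ports
esxcli system module parameters set -m i40en -p "VMDQ=8,8"
# Verify after reboot
esxcli system module parameters list -m i40en | grep VMDQ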
  • 31. TSO To verify that your pNIC supports TSO and that it is enabled on your ESXi host: esxcli network nic tso get [root@czchoes595:~] esxcli network nic tso get NIC Value ------ ----- vmnic0 on vmnic1 on vmnic2 on vmnic3 on LRO To display the current LRO configuration values: esxcli system settings advanced list -o /Net/TcpipDefLROEnabled [root@czchoes595:~] esxcli system settings advanced list -o /Net/TcpipDefLROEnabled Path: /Net/TcpipDefLROEnabled Type: integer Int Value: 1 Default Int Value: 1 Min Value: 0 Max Value: 1 String Value: Default String Value: Valid Characters: Description: LRO enabled for TCP/IP Check the length of the LRO buffer by using the following esxcli command: esxcli system settings advanced list -o /Net/VmxnetLROMaxLength [root@czchoes595:~] esxcli system settings advanced list -o /Net/VmxnetLROMaxLength Path: /Net/VmxnetLROMaxLength Type: integer Int Value: 32000 Default Int Value: 32000 Min Value: 1 Max Value: 65535 String Value: Default String Value: Valid Characters: Description: LRO default max length for TCP/IP To check the VMXNET3 settings in relation to LRO, the following commands (hardware LRO, software LRO) can be issued: esxcli system settings advanced list -o /Net/Vmxnet3HwLRO [root@czchoes595:~] esxcli system settings advanced list -o /Net/Vmxnet3HwLRO Path: /Net/Vmxnet3HwLRO
  • 32. Type: integer Int Value: 1 Default Int Value: 1 Min Value: 0 Max Value: 1 String Value: Default String Value: Valid Characters: Description: Whether to enable HW LRO on pkts going to a LPD capable vmxnet3 esxcli system settings advanced list -o /Net/Vmxnet3SwLRO [root@czchoes595:~] esxcli system settings advanced list -o /Net/Vmxnet3SwLRO Path: /Net/Vmxnet3SwLRO Type: integer Int Value: 1 Default Int Value: 1 Min Value: 0 Max Value: 1 String Value: Default String Value: Valid Characters: Description: Whether to perform SW LRO on pkts going to a LPD capable vmxnet3 CSO (Checksum Offload) To verify that your pNIC supports Checksum Offload (CSO) on your ESXi host esxcli network nic cso get [root@czchoes595:~] esxcli network nic cso get NIC RX Checksum Offload TX Checksum Offload ------ ------------------- ------------------- vmnic0 on on vmnic1 on on vmnic2 on on vmnic3 on on Net Queue Support Get netqueue support on VMkernel esxcli system settings kernel list | grep netNetqueueEnabled
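The same kernel setting can be read directly with the -o option, and NetQueue can be toggled globally as a troubleshooting step (a reboot is required for the change to take effect; shown here only as a sketch, not as a recommendation):
esxcli system settings kernel list -o netNetqueueEnabled
# Disable NetQueue for troubleshooting (reboot required)
esxcli system settings kernel set --setting=netNetqueueEnabled --value=false
# Re-enable NetQueue (reboot required)
esxcli system settings kernel set --setting=netNetqueueEnabled --value=true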
  • 33. Net Queue Count Get netqueue count on a nic esxcli network nic queue count get [root@czchoes595:~] esxcli network nic queue count get NIC Tx netqueue count Rx netqueue count ------ ----------------- ----------------- vmnic0 0 0 vmnic1 8 8 vmnic2 0 0 vmnic3 8 8 Net Filter Classes List the netqueue supported filterclass of all physical NICs currently installed and loaded on the system. esxcli network nic queue filterclass list [root@czchoes595:~] esxcli network nic queue filterclass list NIC MacOnly VlanOnly VlanMac Vxlan Geneve GenericEncap ------ ------- -------- ------- ----- ------ ------------ vmnic0 false false false false false false vmnic1 true true true true true false vmnic2 false false false false false false vmnic3 true true true true true false List the load balancer settings List the load balancer settings of all the installed and loaded physical NICs. (S:supported, U:unsupported, N:not-applicable, A:allowed, D:disallowed). esxcli network nic queue loadbalancer list [root@czchoes595:~] esxcli network nic queue loadbalancer list NIC RxQPair RxQNoFeature PreEmptibleQ RxQLatency RxDynamicLB DynamicQPool MacLearnLB RSS LRO GeneveOAM ------ ------- ------------ ------------ ---------- ----------- ------------ ---------- --- --- --------- vmnic0 UA ND UA UA NA UA NA UA UA UA vmnic1 SA ND UA UA NA SA NA UA UA UA vmnic2 UA ND UA UA NA UA NA UA UA UA vmnic3 SA ND UA UA NA SA NA UA UA UA Details of netqueue balancer plugins Details of netqueue balancer plugins on all physical NICs currently installed and loaded on the system esxcli network nic queue loadbalancer plugin list [root@czchoes595:~] esxcli network nic queue loadbalancer plugin list NIC Module Name Plugin Name Enabled Description ------ -------------- --------------------- ------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------- vmnic0 load-based-bal filter-packer true Perform packing of filters from two queues vmnic0 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter vmnic0 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues vmnic0 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness vmnic0 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine vmnic0 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter vmnic0 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters
  • 34. vmnic0 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa vmnic1 load-based-bal filter-packer true Perform packing of filters from two queues vmnic1 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter vmnic1 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues vmnic1 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness vmnic1 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine vmnic1 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter vmnic1 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters vmnic1 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa vmnic2 load-based-bal filter-packer true Perform packing of filters from two queues vmnic2 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter vmnic2 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues vmnic2 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness vmnic2 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine vmnic2 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter vmnic2 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters vmnic2 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa vmnic3 load-based-bal filter-packer true Perform packing of filters from two queues vmnic3 load-based-bal queue-allocator true Allocates Rx queue with best feature for loaded filter vmnic3 load-based-bal filter-unpacker true Perform unpacking of filters from saturated queues vmnic3 load-based-bal filter-equalizer true Distribute filters between two queues for better fairness vmnic3 load-based-bal rssflow-mapper true Dynamically map flows in indirection table of RSS engine vmnic3 load-based-bal netpoll-affinitizer true Affinitize Rx queue netpoll to highly loaded filter vmnic3 load-based-bal geneveoam-allocator true Allocate geneve-oam queue based on applied filters vmnic3 load-based-bal numaaware-affinitizer true Numa aware filter placement on Rx queue and dynamically affinitize Rx queue netpoll to device numa Net Queue balancer state Netqueue balancer state of all physical NICs currently installed and loaded on the system esxcli network nic queue loadbalancer state list [root@czchoes595:~] esxcli network nic queue loadbalancer state list NIC Enabled ------ ------- vmnic0 true vmnic1 true vmnic2 true vmnic3 true RX/TX ring buffer current parameters Get current RX/TX ring buffer parameters of a NIC esxcli network nic ring current get [root@czchoes595:~] esxcli network nic ring current get -n vmnic0 RX: 1024 RX Mini: 0 RX Jumbo: 0 TX: 1024 RX/TX ring buffer parameters max values Get preset maximums for RX/TX ring buffer parameters of a NIC. esxcli network nic ring preset get -n vmnic0 [root@czchoes595:~] esxcli network nic ring preset get -n vmnic0 RX: 4096 RX Mini: 0 RX Jumbo: 0 TX: 4096
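Since the current ring size (1024) is below the preset maximum (4096), the ring buffers could be enlarged if packet drops were observed on the uplink. This is only an illustrative sketch; whether larger rings help depends on the driver and the workload:
# Increase Rx/Tx ring size on vmnic1 to the preset maximum (illustrative values)
esxcli network nic ring current set -n vmnic1 -r 4096 -t 4096
# Confirm the new values
esxcli network nic ring current get -n vmnic1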
  • 35. SG (Scatter and Gather) Scatter and Gather (vectored I/O) lets the device transfer data to or from multiple non-contiguous memory buffers in a single operation. It was primarily used for hard disk I/O and it improves the performance of large I/O requests, if supported by the hardware. esxcli network nic sg get [root@czchoes595:~] esxcli network nic sg get NIC Value ------ ----- vmnic0 on vmnic1 on vmnic2 on vmnic3 on List software simulation settings List software simulation settings of physical NICs currently installed and loaded on the system. esxcli network nic software list [root@czchoes595:~] esxcli network nic software list NIC IPv4 CSO IPv4 TSO Scatter Gather Offset Based Offload VXLAN Encap Geneve Offload IPv6 TSO IPv6 TSO Ext IPv6 CSO IPv6 CSO Ext High DMA Scatter Gather MP VLAN Tagging VLAN Untagging ------ -------- -------- -------------- -------------------- ----------- -------------- -------- ------------ -------- ---------- -- -------- ----------------- ------------ -------------- vmnic0 off off off off off off off off off off off off off off vmnic1 off off off off off off off off off off off off off off vmnic2 off off off off off off off off off off off off off off vmnic3 off off off off off off off off off off off off off off RSS We do not see any RSS-related driver parameters; therefore, driver i40en 1.9.5 does not support RSS. On top of that, we have been assured by VMware Engineering that inbox driver i40en 1.9.5 does not support RSS.
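For drivers that do expose RSS (for example the qedentv driver installed on this host, see the VIB list above), the presence and name of an RSS-related module parameter can be checked the same way. Parameter names differ per driver, so the following is only a generic check, not a configuration recommendation:
# Look for RSS-related module parameters on other NIC drivers (names vary per driver)
esxcli system module parameters list -m qedentv | grep -i rss
esxcli system module parameters list -m ixgben | grep -i rss
# If a driver exposes such a parameter, it would be set via
# esxcli system module parameters set -m <module> -p "<parameter>=<value>" followed by a reboot.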
  • 36. VMkernel software threads per VMNIC Show the number of VMkernel software threads per VMNIC: net-stats -A -t vW vsish /> cat /world/<WORLD-ID-1-IN-VMNIC>/name /> cat /world/<WORLD-ID-2-IN-VMNIC>/name /> cat /world/<WORLD-ID-3-IN-VMNIC>/name … /> cat /world/<WORLD-ID-n-IN-VMNIC>/name VMNIC1 [root@czchoes595:~] vsish /> cat /world/2098742/name vmnic1-pollWorld-0 /> cat /world/2098743/name vmnic1-pollWorld-1 /> cat /world/2098744/name vmnic1-pollWorld-2 /> cat /world/2098745/name vmnic1-pollWorld-3 /> cat /world/2098746/name vmnic1-pollWorld-4 /> cat /world/2098747/name vmnic1-pollWorld-5 /> cat /world/2098748/name vmnic1-pollWorld-6 /> cat /world/2098749/name vmnic1-pollWorld-7 /> cat /world/2098750/name vmnic1-pollWorld-8 /> cat /world/2098751/name vmnic1-pollWorld-9 /> cat /world/2098752/name vmnic1-pollWorld-10 /> cat /world/2098753/name vmnic1-pollWorld-11 /> cat /world/2098754/name vmnic1-pollWorld-12 /> cat /world/2098755/name vmnic1-pollWorld-13 /> cat /world/2098756/name vmnic1-pollWorld-14 /> cat /world/2098757/name vmnic1-pollWorld-15 /> cat /world/2099014/name hclk-sched-vmnic1 /> cat /world/2099015/name hclk-watchdog-vmnic1
  • 37. VMNIC3 /> cat /world/2098789/name vmnic3-pollWorld-0 /> cat /world/2098790/name vmnic3-pollWorld-1 /> cat /world/2098791/name vmnic3-pollWorld-2 /> cat /world/2098792/name vmnic3-pollWorld-3 /> cat /world/2098793/name vmnic3-pollWorld-4 /> cat /world/2098794/name vmnic3-pollWorld-5 /> cat /world/2098795/name vmnic3-pollWorld-6 /> cat /world/2098796/name vmnic3-pollWorld-7 /> cat /world/2098797/name vmnic3-pollWorld-8 /> cat /world/2098798/name vmnic3-pollWorld-9 /> cat /world/2098799/name vmnic3-pollWorld-10 /> cat /world/2098800/name vmnic3-pollWorld-11 /> cat /world/2098801/name vmnic3-pollWorld-12 /> cat /world/2098802/name vmnic3-pollWorld-13 /> cat /world/2098803/name vmnic3-pollWorld-14 /> cat /world/2098804/name vmnic3-pollWorld-15 /> cat /world/2099003/name hclk-sched-vmnic3 /> cat /world/2099004/name hclk-watchdog-vmnic3
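The manual procedure above (net-stats -A -t vW followed by cat of each world name in vsish) can also be scripted. The following is a rough sketch that assumes the world IDs appear as quoted numbers in the net-stats JSON, exactly as in the outputs shown earlier:
# Resolve every world ID reported by net-stats to its name and keep the NIC-related ones (sketch)
net-stats -A -t vW > /tmp/netstats.json
for wid in $(grep -Eo '"[0-9]+"' /tmp/netstats.json | tr -d '"' | sort -un); do
  echo "$wid $(vsish -e cat /world/$wid/name 2>/dev/null)"
done | grep -E 'pollWorld|hclk'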