SR-IOV Performance on CentOS 7 VM

This blog demonstrates the network performance (network throughput only) of an SR-IOV-enabled CentOS 7 virtual machine running on vSphere 6. Regarding vSphere 6.5 support for SR-IOV, please refer to the link below:

Single Root I/O Virtualization

My testing environment is on IBM Cloud:

Virtual machine specification:

  • 4 vCPUs/16GB memory
  • OS: CentOS Linux release 7.4.1708 (Core)
  • Reserve All Guest Memory (this is a mandatory requirement for SR-IOV, but I enabled it for all testing VMs)

ESXi hosts: we use 2 ESXi hosts (host10 and host11) for our testing. SR-IOV is enabled on a 10G NIC. Both hosts have the same specification:

  • Supermicro PIO-618U-T4T+-ST031
  • Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  • 512GB Memory
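
To confirm on the ESXi side that SR-IOV and its virtual functions are active on the NIC, you can use esxcli (a hedged sketch; vmnic4 is an assumed name for the 10G NIC):

# list SR-IOV capable/enabled NICs on the host
esxcli network sriovnic list
# list the virtual functions of a given NIC
esxcli network sriovnic vf list -n vmnic4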

Host10

ESXi host specification

Host11

ESXi host specification-host11

Testing tool: iPerf3 version 3.1.3. Default settings are used.
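
For reference, the tests below follow the standard iPerf3 client/server pattern (a minimal sketch; <server-ip> is a placeholder for the receiving VM's address):

# on the receiving VM
iperf3 -s
# on the sending VM (300-second test, as used below)
iperf3 -c <server-ip> -t 300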

Note: I have only 4 VMs running on the 2 vSphere ESXi hosts in my testing environment to remove the impact of resource contention. In addition, all 4 VMs are in the same layer 2 network to remove any potential bottleneck when performing the network throughput testing with iPerf3.

SR-IOV01

Virtual Machine1 (Standard VM)

  • Hostname: Networktest0
  • IP Address: 10.139.36.178
  • ESXi Host:  host10

[root@networktest0 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest0 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

NetworkTest0

Virtual Machine2 (Standard VM)

  • Hostname: Networktest1 
  • IP Address: 10.139.36.179
  • ESXi host: host11

[root@networktest1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest1 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

NetworkTest1

Virtual Machine3 (SR-IOV enabled)

  • Hostname: srIOV 
  • IP Address: 10.139.36.180
  • ESXi host: host10

[root@sriov ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

Note: this is a virtual function of the same Ethernet controller (X540-AT2) as the vSphere ESXi host's physical NIC.

[root@sriov ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

srIOV

Virtual Machine4 (SR-IOV enabled)

  • Hostname: srIOV1
  • IP Address: 10.139.36.181
  • ESXi host: host11

[root@sriov1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
[root@sriov1 ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
srIOV1

Test1: from Virtual Machine1 to Virtual Machine2:

[root@networktest0 ~]# iperf3 -c 10.139.36.179 -t 300

[ 4] 290.00-291.00 sec 809 MBytes 6.79 Gbits/sec 29 725 KBytes
[ 4] 291.00-292.00 sec 802 MBytes 6.72 Gbits/sec 32 680 KBytes
[ 4] 292.00-293.00 sec 631 MBytes 5.30 Gbits/sec 52 711 KBytes
[ 4] 293.00-294.00 sec 773 MBytes 6.48 Gbits/sec 9 902 KBytes
[ 4] 294.00-295.00 sec 800 MBytes 6.71 Gbits/sec 27 856 KBytes
[ 4] 295.00-296.00 sec 801 MBytes 6.72 Gbits/sec 36 790 KBytes
[ 4] 296.00-297.00 sec 774 MBytes 6.49 Gbits/sec 52 694 KBytes
[ 4] 297.00-298.00 sec 815 MBytes 6.83 Gbits/sec 30 656 KBytes
[ 4] 298.00-299.00 sec 649 MBytes 5.45 Gbits/sec 35 689 KBytes
[ 4] 299.00-300.00 sec 644 MBytes 5.40 Gbits/sec 57 734 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec 10797 sender
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec receiver

iperf Done.
[root@networktest0 ~]#

Test2: from Virtual Machine2 to Virtual Machine1

[root@networktest1 ~]# iperf3 -c 10.139.36.178 -t 300
Connecting to host 10.139.36.178, port 5201
[ 4] local 10.139.36.179 port 54844 connected to 10.139.36.178 port 5201

[ 4] 290.00-291.00 sec 794 MBytes 6.66 Gbits/sec 6 908 KBytes
[ 4] 291.00-292.00 sec 811 MBytes 6.80 Gbits/sec 8 871 KBytes
[ 4] 292.00-293.00 sec 810 MBytes 6.80 Gbits/sec 10 853 KBytes
[ 4] 293.00-294.00 sec 810 MBytes 6.79 Gbits/sec 12 819 KBytes
[ 4] 294.00-295.00 sec 811 MBytes 6.80 Gbits/sec 19 783 KBytes
[ 4] 295.00-296.00 sec 810 MBytes 6.79 Gbits/sec 14 747 KBytes
[ 4] 296.00-297.00 sec 776 MBytes 6.51 Gbits/sec 9 639 KBytes
[ 4] 297.00-298.00 sec 778 MBytes 6.52 Gbits/sec 7 874 KBytes
[ 4] 298.00-299.00 sec 809 MBytes 6.78 Gbits/sec 13 851 KBytes
[ 4] 299.00-300.00 sec 810 MBytes 6.80 Gbits/sec 11 810 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec 4269 sender
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec receiver

iperf Done.

Test3: from Virtual Machine3 to Virtual Machine4

[root@sriov ~]# iperf3 -c 10.139.36.181 -t 300 -V
iperf 3.1.3
Linux sriov 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:15:18 GMT
Connecting to host 10.139.36.181, port 5201
Cookie: sriov.1511072118.047298.4aefd6730c42
TCP MSS: 1448 (default)
[ 4] local 10.139.36.180 port 56330 connected to 10.139.36.181 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.09 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.10 MBytes
[ 4] 2.00-3.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.13 MBytes

[ 4] 290.00-291.00 sec 1.06 GBytes 9.14 Gbits/sec 15 1.12 MBytes
[ 4] 291.00-292.00 sec 1.06 GBytes 9.09 Gbits/sec 13 928 KBytes
[ 4] 292.00-293.00 sec 1.05 GBytes 9.00 Gbits/sec 26 1003 KBytes
[ 4] 293.00-294.00 sec 1.07 GBytes 9.22 Gbits/sec 115 1.06 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.23 MBytes
[ 4] 295.00-296.00 sec 1.06 GBytes 9.10 Gbits/sec 79 942 KBytes
[ 4] 296.00-297.00 sec 1.05 GBytes 9.03 Gbits/sec 29 1.02 MBytes
[ 4] 297.00-298.00 sec 1.08 GBytes 9.25 Gbits/sec 6 1005 KBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1005 KBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1005 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec 12656 sender
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec receiver
CPU Utilization: local/sender 13.0% (0.2%u/12.9%s), remote/receiver 41.5% (1.1%u/40.4%s)

iperf Done.

Test4: from Virtual Machine4 to Virtual Machine3

[root@sriov1 ~]# iperf3 -c 10.139.36.180 -t 300 -V
iperf 3.1.3
Linux sriov1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:30:09 GMT
Connecting to host 10.139.36.180, port 5201
Cookie: sriov1.1511073009.840403.56876d65774
TCP MSS: 1448 (default)
[ 4] local 10.139.36.181 port 46602 connected to 10.139.36.180 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.38 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.40 MBytes

[ 4] 289.00-290.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.31 MBytes
[ 4] 290.00-291.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.31 MBytes
[ 4] 291.00-292.00 sec 1.09 GBytes 9.41 Gbits/sec 329 945 KBytes
[ 4] 292.00-293.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.09 MBytes
[ 4] 293.00-294.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 295.00-296.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.27 MBytes
[ 4] 296.00-297.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 297.00-298.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.38 MBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec 14395 sender
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec receiver
CPU Utilization: local/sender 13.9% (0.2%u/13.7%s), remote/receiver 39.6% (1.0%u/38.6%s)

iperf Done.
[root@sriov1 ~]#

We can see that the SR-IOV-enabled CentOS 7 VMs achieve ~9.4Gbit/s throughput (9.37 and 9.41Gbit/s) in both directions, which is very close to wire speed for a 10G port, compared with 5.89 and 6.79Gbit/s for the standard VMXNET3 VMs.

Create an XML File in vRealize Orchestrator for NSX Automation

The NSX API uses XML for API communication. To automate NSX in VMware vRealize Orchestrator (vRO), you always need to build the XML payload with JavaScript, as vRO workflows support JavaScript only. Here I show you an example of how to do it.

The target here is to create a security group and add a simple firewall rule applied to this newly created security group.

Note: this vRO workflow has 2 inputs:

  • securityGroupName
  • description

And 2 properties:

  • nsxManagerRestHost
  • realtime (equal to sgID in Step 1)

Step 1: create a security group

// Build the security group payload as an E4X XML object
var xmlbody = new XML('<securitygroup />');
xmlbody.objectId = " ";
xmlbody.type.typeName = " ";
xmlbody.description = description;
xmlbody.name = securityGroupName;
xmlbody.revision = 0;
xmlbody.objectTypeName = " ";
System.log(xmlbody);
// POST the payload to NSX Manager via the vRO REST host
var request = nsxManagerRestHost.createRequest("POST", "/2.0/services/securitygroup/bulk/globalroot-0", xmlbody.toString());
request.contentType = "application/xml";
System.log("Creating a SecurityGroup " + securityGroupName);
System.log("POST Request URL: " + request.fullUrl);
var response = request.execute();
if (response.statusCode == 201) {
	System.debug("Successfully created Security Group " + securityGroupName);
	}
else {
	throw("Failed to create Security Group " + securityGroupName);
	}
// NSX returns the new object ID in the Location response header
sgID = response.getAllHeaders().get("Location").split('/').pop();
realtime = sgID;
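
Outside vRO, you can sanity-check the new security group directly against the NSX API, for example with curl (a hedged sketch; the NSX Manager address, credentials and object ID are placeholders):

curl -k -u admin:password https://nsx-manager/api/2.0/services/securitygroup/securitygroup-947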

Step 2: add a section in DFW with a firewall rule

//create XML object for DFW source;
var rulesources = new XML('<sources excluded="false" />');
rulesources.source.name = " ";
rulesources.source.value = "10.47.161.23";
rulesources.source.type = "Ipv4Address";
rulesources.source.isValid = 'true';
System.log("Source: "+rulesources);

//create XML object for DFW destination;
var ruledestinations = new XML('<destinations excluded="false" />');
ruledestinations.destination.name = " ";
ruledestinations.destination.value = "10.47.161.24";
ruledestinations.destination.type = "Ipv4Address";
ruledestinations.destination.isValid = 'true';
System.log("Destination: "+ruledestinations);

//create XML object for DFW service
var ruleservices = new XML('<services />');
ruleservices.service.destinationPort = "80";
ruleservices.service.protocol = "6";
ruleservices.service.subProtocol = "6";
ruleservices.service.isValid = 'true';
System.log("Service: "+ruleservices);

//create XML object for the whole rule
var xmlbodyrule = new XML('<rule disabled="false" logged="true" />');
xmlbodyrule.name = "vro created rule";
xmlbodyrule.action = "allow";
xmlbodyrule.notes = " ";
xmlbodyrule.appliedToList.appliedTo.name = securityGroupName;
xmlbodyrule.appliedToList.appliedTo.value = realtime;
xmlbodyrule.appliedToList.appliedTo.type = 'SecurityGroup';
xmlbodyrule.appliedToList.appliedTo.isValid = 'true';
xmlbodyrule.sectionId = " ";
xmlbodyrule.sources = rulesources;
xmlbodyrule.destinations = ruledestinations;
xmlbodyrule.services = ruleservices;

//create XML object for DFW section
var xmlbody = new XML(<section name={securityGroupName} />);
xmlbody.rule = xmlbodyrule;
System.log("XML file for new rules: "+xmlbody);

var request = nsxManagerRestHost.createRequest("POST", "/4.0/firewall/globalroot-0/config/layer3sections", xmlbody.toString());
request.contentType = "application/xml";
var response = request.execute();
if (response.statusCode == 201) {
	System.debug("Successfully created Security Group Section " + securityGroupName);
	}
else {
	throw("Failed to create Security Group Section " + securityGroupName);
	}
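
The same section payload can also be posted with curl for testing (a hedged sketch; section.xml would hold the XML shown below, and the NSX Manager address and credentials are placeholders):

curl -k -u admin:password -H "Content-Type: application/xml" -X POST -d @section.xml https://nsx-manager/api/4.0/firewall/globalroot-0/config/layer3sections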

Below is the generated XML payload for creating a security group:

<securitygroup>
  <objectId></objectId>
  <type>
    <typeName></typeName>
  </type>
  <description>nsx1001test</description>
  <name>nsx1001test</name>
  <revision>0</revision>
  <objectTypeName></objectTypeName>
</securitygroup>

And the XML payload for creating an NSX DFW section and adding the new firewall rule:

<section name="nsx1001test">
  <rule disabled="false" logged="true">
    <name>vro created rule</name>
    <action>allow</action>
    <notes></notes>
    <appliedToList>
      <appliedTo>
        <name>nsx1001test</name>
        <value>securitygroup-947</value>
        <type>SecurityGroup</type>
        <isValid>true</isValid>
      </appliedTo>
    </appliedToList>
    <sectionId></sectionId>
    <sources excluded="false">
      <source>
        <name></name>
        <value>10.47.161.23</value>
        <type>Ipv4Address</type>
        <isValid>true</isValid>
      </source>
    </sources>
    <destinations excluded="false">
      <destination>
        <name></name>
        <value>10.47.161.24</value>
        <type>Ipv4Address</type>
        <isValid>true</isValid>
      </destination>
    </destinations>
    <services>
      <service>
        <destinationPort>80</destinationPort>
        <protocol>6</protocol>
        <subProtocol>6</subProtocol>
        <isValid>true</isValid>
      </service>
    </services>
  </rule>
</section>

NSX Load Balancer Quick Summary

Recently, I was asked a lot of questions about the capabilities of the NSX load balancer by teams and customers, so I put together a quick summary of the NSX load balancer to ease my life.

NSX can perform L4 or L7 load balancing:

  • L4 load balancing (packet-based load balancing): supports TCP and UDP load balancing, and is based on Linux Virtual Server (LVS).
  • L7 load balancing (socket-based load balancing): supports TCP and TCP-based applications (e.g. HTTPS), and is based on HAProxy.

Regarding SSL load balancing, it requires L7 load balancing.

3 options for SSL load balancing:

  • SSL Passthrough:
    • NSX load balancer won’t terminate the client session and only pass through the SSL traffic;
    • Session persistence: SSL session id or source IP
  • SSL Offload:
    • client SSL session will be terminated on NSX load balancer and a clear-text (e.g. HTTP) session will be initiated from NSX load balancer to backend server;
    • Session persistence: cookie, SSL session id or source IP
  • SSL end to end:
    • client SSL session will be terminated on NSX load balancer and a new SSL session will be initiated from NSX load balancer to backend server;
    • Session persistence: cookie, SSL session id or source IP

Tips:

  1. L4 and L7 virtual servers can co-exist on the same NSX load balancer;
  2. An NSX load balancer can use 1 or more security groups as pool members, which means virtual machines are added into the load-balancing pool automatically when they are added into the right security group. This feature is especially useful when your cloud VM is re-provisioned and its IP changes;
  3. Transparent mode load balancing is not recommended due to its complexity and potential performance issues;
  4. In proxy mode, you can use the HTTP X-Forwarded-For header to preserve the source IP information in the request.

Limitations and Constraints:

  1. No support for integration with HSMs;
  2. As the NSX load balancer uses secondary IPs on its vNIC, the number of virtual IPs can't scale up well;
  3. Lack of fine-grained security control for traffic to virtual servers;
  4. NSX can't provide service monitoring as good as F5 BIG-IP or Citrix NetScaler.


New Ansible F5 HTTPS Health Monitor Module

I just got time this weekend to test the newly released dev version of the Ansible F5 HTTPS health monitor module. The testing results look good: most common use cases are covered properly.

Below is my first playbook for my testing:

# This version is to create a new https health monitor
---
- name: f5 config
  hosts:  lb.davidwzhang.com
  connection: local
  vars:
    ports:
      - 443
  tasks:
    - name: create https healthmonitor
      bigip_monitor_https:
        state:  "present"
        #state: "absent"
        name: "ansible-httpshealthmonitor"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        send: "GET /cgi-bin/env.sh HTTP/1.1\r\nHost:192.168.72.28\r\nConnection: Close\r\n"
        receive: "web"
        interval: "3"
        timeout: "10"
      delegate_to:  localhost
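
The playbook is run in the usual way (a sketch; the playbook filename and inventory file are assumptions):

ansible-playbook -i inventory f5_https_monitor.yml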

After running the playbook, I log in to my F5 BIG-IP VE and see that the HTTPS health monitor has been created successfully.
f5 https healthmonitor

I then created another HTTPS health monitor, which includes basic authentication (admin/password) and a customized alias address and alias service port (8443).
Playbook:

# This version is to create a new https health monitor
---
- name: f5 config
  hosts:  lb.davidwzhang.com
  connection: local
  vars:
    ports:
      - 443
  tasks:
    - name: create https healthmonitor
      bigip_monitor_https:
        state:  "present"
        #state: "absent"
        name: "ansible-httpshealthmonitor02"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        ip: "192.168.72.128"
        port: "8443"
        send: "GET /cgi-bin/env.sh\r\n"
        receive: "200"
        interval: "3"
        timeout: "10"
        target_username: "admin"
        target_password: "password"
      delegate_to:  localhost

In F5, you can see the result below:
f5 https healthmonitor02

In addition, you may have noticed that I commented out a line in the above 2 playbooks:

#state: "absent"

Setting state to "absent" instead of "present" removes the health monitor.

vRA7.3 and NSX Integration: Network Security Data Collection Failure

We are building vRA 7.3. We added vCenter and NSX Manager as endpoints in vRA and associated the NSX Manager with vCenter. All compute resource data collection works well, but not NSX (network and security):

So in the vRA reservation, we can only see the vSphere cluster and vDS port-groups/logical switches, but not transport zones or security groups/tags.

When checking the logs, we see the following:

Workflow 'vSphereVCNSInventory' failed with the following exception:

One or more errors occurred.

Inner Exception: An error occurred while sending the request.

at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)

at DynamicOps.VCNSModel.Interface.NSXClient.GetDatacenters()

at DynamicOps.VCNSModel.Activities.CollectDatacenters.Execute(CodeActivityContext context)

at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)

at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

Inner Exception:

VCNS Workflow failure

I tried to delete the NSX endpoint and recreate it from vRA, but no luck. I raised the issue in the VMware community but didn't get any valuable feedback.

After a few hours of investigation, I finally found a fix:

Run the “Create a NSX endpoint” workflow in vRO, as shown below:

2017-07-26_184701

Then I re-started network & security data collection in vRA. Everything works, and I can see all defined NSX transport zones, security groups and DLRs in the vRA network reservations.

Hope this fix can help others who have the same issue.

Perform Packet Capture on VMware ESXi Host for NSX Troubleshooting

VMware offers a great and powerful tool, pktcap-uw, to perform packet captures on an ESXi host.

Pktcap-uw offers a lot of options for packet capture.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2051814

Here I show the usages most common in my daily life for your reference. I normally perform a packet capture based on a vSwitch port ID or a DV filter (NSX DFW).

To do that, I first need to find the vSwitch port ID and DV filter ID on the ESXi host so that I can reference them in the packet capture. I normally use the “summarize-dvfilter” CLI to find the required information.

[root@esx4005:/tmp] summarize-dvfilter | grep -C 10 1314
slowPathID: none
filter source: Dynamic Filter Creation
vNic slot 1
name: nic-18417802-eth0-dvfilter-generic-vmware-swsec.1
agentName: dvfilter-generic-vmware-swsec
state: IOChain Attached
vmState: Detached
failurePolicy: failClosed
slowPathID: none
filter source: Alternate Opaque Channel
world 18444553 vmm0:auslslnxsd1314-113585a5-f6ed-4eb3-abd2-12083901e942 vcUuid:'11 35 85 a5 f6 ed 4e b3-ab d2 12 08 39 01 e9 42'
port 33554558 (vSwitch PortID) auslslnxsd1314-113585a5-f6ed-4eb3-abd2-12083901e942.eth0
vNic slot 2
name: nic-18444553-eth0-vmware-sfw.2 (DV Filter ID)
agentName: vmware-sfw
state: IOChain Attached
vmState: Detached
failurePolicy: failClosed
slowPathID: none
filter source: Dynamic Filter Creation
vNic slot 1
name: nic-18444553-eth0-dvfilter-generic-vmware-swsec.1


After I have the vSwitch port ID and DV filter ID, I can start my packet capture.

  • Packet capture to a VM based on vSwitch PortID

pktcap-uw --switchport 33554558 --dir 0 -o /tmp/from1314.pcap

  • Packet capture from a VM based on vSwitch PortID

pktcap-uw --switchport 33554558 --dir 1 -o /tmp/to1314.pcap

  • Packet capture from a VM based on DV filter

pktcap-uw --capture PreDVFilter --dvfilter nic-18444553-eth0-vmware-sfw.2 -o /tmp/1314v3.pcap

Below is a brief explanation of the parameters used above.

-o (output): save the capture as a packet capture file;

--dir (direction): 0 for traffic to the VM and 1 for traffic from the VM;

--capture PreDVFilter: perform the packet capture before DFW rules are applied;

--capture PostDVFilter: perform the packet capture after DFW rules are applied;
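
For example, to capture the same vNIC's traffic after the DFW rules have been applied (a sketch reusing the DV filter name found above; the output filename is an assumption):

pktcap-uw --capture PostDVFilter --dvfilter nic-18444553-eth0-vmware-sfw.2 -o /tmp/1314post.pcap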

In addition, you can add filters to your capture:

pktcap-uw --switchport 33554558 --tcpport 9000 --dir 1 -o /tmp/from1314.pcap

I list all available filter options here for your reference:

--srcmac
The Ethernet source MAC address.
--dstmac
The Ethernet destination MAC address.
--mac
The Ethernet MAC address (src or dst).
--ethtype
The Ethernet type, in HEX format.
--vlan
The Ethernet VLAN ID.
--srcip
The source IP address.
--dstip
The destination IP address.
--ip
The IP address (src or dst).
--proto
The IP protocol, in HEX format (e.g. 0x06 for TCP).
--srcport
The TCP source port.
--dstport
The TCP destination port.
--tcpport
The TCP port (src or dst).
--vxlan
The VXLAN ID of the flow.

Update:

Start 2 captures at the same time:

pktcap-uw --switchport 50331665 -o /tmp/50331665.pcap & pktcap-uw --uplink vmnic2 -o /tmp/vmnic2.pcap &

Stop all packet capture:

kill $(lsof | grep pktcap-uw | awk '{print $1}' | sort -u)

Of course, you can perform some basic packet captures in NSX Manager via the Central CLI. If you are interested, please refer to my other blog:

https://davidwzhang.com/2017/01/07/limitation-of-nsx-central-cli-packet-capture/

NSX IPsec Throughput in IBM SoftLayer

To understand the real throughput capacity of NSX IPsec in SoftLayer, I built a quick IPsec performance testing environment.

Below is the network topology of my testing environment:

NSX_IPSec_Performance_Topology

NSX version: 6.2.4
NSX Edge: X-Large (6 vCPUs and 8GB memory), which is the largest size NSX offers. All Edges in this testing environment reside in the same vSphere cluster, which includes 3 ESXi hosts. Each ESXi host has 64GB DDR4 memory and 2 processors (2.4GHz Intel Xeon Haswell E5-2620 v3, hex-core).
IPerf client: Red Hat 7.1 (2 vCPUs and 4GB memory)
IPerf server: Red Hat 7.1 (2 vCPUs and 4GB memory)
IPerf version: iPerf3

2 IPsec tunnels are built as shown in the above diagram. The IPsec settings are:

  • Encryption: AES-GCM
  • Diffie-Hellman Group: DH5
  • PFS (Perfect Forward Secrecy): Enabled
  • AES-NI: Enabled
I include 3 test cases in my testing:

Test1_Bandwidth_Utilisation

  • Test Case 2: 2 IPerf clients (172.16.31.0/24) to 2 IPerf servers (172.16.38.0/24) via 1 IPsec tunnel. Result: around 1.6-2.3Gbit/s in total

Test2_Bandwidth_Utilisation

Test3_Bandwidth_Utilisation
Please note:

  1. The firewall function on the NSX Edges is disabled in all test cases.
  2. TCP traffic is used in all 3 test cases. 10 parallel streams are used on each IPerf client to push the performance test to the max (see the sketch below).
  3. I didn't see any CPU or memory contention in any test case: the CPU utilisation of the NSX Edge was less than 40% and memory utilisation was nearly zero.
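
A minimal sketch of the client invocation used in each test case, assuming an iPerf3 server in the 172.16.38.0/24 subnet (the server IP and test duration are placeholders):

iperf3 -c 172.16.38.10 -P 10 -t 300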

CPU_Mem