Install Docker Offline on CentOS 7

Recently, I had to build an environment with a real web application running in it to test an LBaaS site-affinity solution. After a few minutes, I decided to install a Jenkins container on my testing CentOS 7 virtual machines.

Unfortunately, my CentOS virtual machines have no Internet access, so I spent a bit of time working out how to install Docker and run a container offline on CentOS 7. This blog may help others who face the same challenge.

The Docker version I am going to install is:
docker-ce-18.03.1.ce-1.el7.centos

On another CentOS 7 machine (minimal install) which has Internet access, I ran the CLI below to identify all the packages required for an offline Docker installation.
repoquery -R docker-ce-18.03.1.ce-1.el7.centos
From the output, I found that I need the following packages to complete the offline Docker installation:

1:libsepol-2.5-8.1.el7
2:libselinux-2.5-12.el7
3:audit-libs-2.8.1-3.el7_5.1
4:libsemanage-2.5-11.el7
5:libselinux-utils-2.5-12.el7
6:policycoreutils-2.5-22.el7
7:selinux-policy-3.13.1-192.el7
8:libcgroup-0.41-15.el7
9:selinux-policy-targeted-3.13.1-19
10:libsemanage-python-2.5-11.el7
11:audit-libs-python-2.8.1-3.el7_5.1
12:setools-libs-3.3.8-2.el7
13:python-IPy-0.75-6.el7
14:pigz-2.3.3-1.el7.centos
15:checkpolicy-2.5-6.el7
16:policycoreutils-python-2.5-22.el7
17:container-selinux-2:2.68-1.el7
18:docker-ce-18.03.1.ce-1.el7.centos
19:audit-2.8.1-3.el7_5.1
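In practice, the dependency list can be captured straight into a file and used to drive the downloads. The sketch below is hedged: the repoquery output is simulated with a here-doc (with made-up capability names) so the parsing step runs offline; the real commands are shown in comments.

```shell
#!/bin/bash
# Sketch: capture the dependency list to a file, then feed it to the
# downloader. On a connected machine the first step would be:
#   repoquery -R docker-ce-18.03.1.ce-1.el7.centos > /tmp/deps.txt
# Here a here-doc with sample entries stands in for that output.
cat > /tmp/deps.txt <<'EOF'
container-selinux >= 2.9
libcgroup
pigz
EOF

# Keep only the names (drop version constraints) and de-duplicate.
awk '{print $1}' /tmp/deps.txt | sort -u > /tmp/pkgs.txt
cat /tmp/pkgs.txt

# On the connected machine, the download step would then be:
#   xargs -r yumdownloader --resolve < /tmp/pkgs.txt
```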

Then I downloaded the Docker RPM package and all its dependent packages with yumdownloader:
yumdownloader --resolve docker-ce-18.03.1.ce-1.el7.centos

I archived the above packages (tar cf docker-ce.offline.tar *.rpm), uploaded the archive to my offline CentOS 7 virtual machines, and then used the rpm CLI to install Docker:

[root@lbaas02 ~]# rpm -ivh --replacefiles --replacepkgs *.rpm

warning: audit-2.8.1-3.el7_5.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
warning: docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:libsepol-2.5-8.1.el7             ################################# [  5%]
   2:libselinux-2.5-12.el7            ################################# [ 11%]
   3:audit-libs-2.8.1-3.el7_5.1       ################################# [ 16%]
   4:libsemanage-2.5-11.el7           ################################# [ 21%]
   5:libselinux-utils-2.5-12.el7      ################################# [ 26%]
   6:policycoreutils-2.5-22.el7       ################################# [ 32%]
   7:selinux-policy-3.13.1-192.el7    ################################# [ 37%]
   8:libcgroup-0.41-15.el7            ################################# [ 42%]
   9:selinux-policy-targeted-3.13.1-19################################# [ 47%]
  10:libsemanage-python-2.5-11.el7    ################################# [ 53%]
  11:audit-libs-python-2.8.1-3.el7_5.1################################# [ 58%]
  12:setools-libs-3.3.8-2.el7         ################################# [ 63%]
  13:python-IPy-0.75-6.el7            ################################# [ 68%]
  14:pigz-2.3.3-1.el7.centos          ################################# [ 74%]
  15:checkpolicy-2.5-6.el7            ################################# [ 79%]
  16:policycoreutils-python-2.5-22.el7################################# [ 84%]
  17:container-selinux-2:2.68-1.el7   ################################# [ 89%]
  18:docker-ce-18.03.1.ce-1.el7.centos################################# [ 95%]
  19:audit-2.8.1-3.el7_5.1            ################################# [100%]

After the installation completed, I enabled and started the Docker service:

[root@lbaas02 ~]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@lbaas02 ~]# systemctl start docker
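The bundling half of the workflow above can be sketched as a short script. This is a hedged illustration: the directory is a placeholder, and empty files stand in for the RPMs that yumdownloader would have fetched, so the tar step is runnable anywhere.

```shell
#!/bin/bash
# Sketch of the bundling step. Empty placeholder files stand in for the
# RPMs downloaded by yumdownloader on the Internet-connected machine.
mkdir -p /tmp/docker-ce-offline
cd /tmp/docker-ce-offline
touch docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm \
      container-selinux-2.68-1.el7.noarch.rpm

# Archive every RPM into a single tarball for transfer.
tar cf docker-ce.offline.tar *.rpm

# Sanity-check the archive contents before uploading it.
tar tf docker-ce.offline.tar
```

On the offline machine, the remaining steps are the rpm and systemctl commands shown above.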

The next task was to import the Jenkins Docker image offline. First, on the Internet-connected machine, I pulled the Jenkins Docker image:

docker pull jenkins/jenkins

Then I exported the Docker image to a file and uploaded it to my testing CentOS machine.

docker save -o jenkins.docker jenkins/jenkins
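The saved image is large (~700 MB here), so it is worth verifying that it survived the upload intact. A minimal sketch using sha256sum follows; a small dummy file stands in for jenkins.docker, and a local copy simulates the upload, since Docker is not assumed to be available where this runs.

```shell
#!/bin/bash
# Sketch: compare checksums on both sides of the transfer. A dummy file
# stands in for jenkins.docker, and cp simulates the scp/rsync upload.
echo "fake image data" > /tmp/jenkins.docker
sha256sum /tmp/jenkins.docker | awk '{print $1}' > /tmp/jenkins.docker.sha256

cp /tmp/jenkins.docker /tmp/uploaded.docker   # stands in for the upload

src=$(cat /tmp/jenkins.docker.sha256)
dst=$(sha256sum /tmp/uploaded.docker | awk '{print $1}')
if [ "$src" = "$dst" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH"
fi
```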

On my testing CentOS machine, I loaded the image into Docker.

[root@lbaas01 ~]# docker load -i jenkins.docker

f715ed19c28b: Loading layer [==================================================>]  105.5MB/105.5MB
8bb25f9cdc41: Loading layer [==================================================>]  23.99MB/23.99MB
08a01612ffca: Loading layer [==================================================>]  7.994MB/7.994MB
1191b3f5862a: Loading layer [==================================================>]  146.4MB/146.4MB
097524d80f54: Loading layer [==================================================>]  2.332MB/2.332MB
685f72a7cd4f: Loading layer [==================================================>]  3.584kB/3.584kB
9c147c576d67: Loading layer [==================================================>]  1.536kB/1.536kB
e9805f9bdc9e: Loading layer [==================================================>]  356.3MB/356.3MB
8b47d19735d5: Loading layer [==================================================>]  362.5kB/362.5kB
e2a15a753d48: Loading layer [==================================================>]  338.9kB/338.9kB
287c6d658570: Loading layer [==================================================>]  3.584kB/3.584kB
5e9d64b80844: Loading layer [==================================================>]  9.728kB/9.728kB
be6e5f898997: Loading layer [==================================================>]  868.9kB/868.9kB
609adfa44126: Loading layer [==================================================>]  4.608kB/4.608kB
a26f92334a9c: Loading layer [==================================================>]  75.92MB/75.92MB
de90b90d0715: Loading layer [==================================================>]  4.608kB/4.608kB
13d8fca176c6: Loading layer [==================================================>]  9.216kB/9.216kB
be0781510eef: Loading layer [==================================================>]  4.608kB/4.608kB
d7e644ce9f14: Loading layer [==================================================>]  3.072kB/3.072kB
47dd83bc99e4: Loading layer [==================================================>]  7.168kB/7.168kB
96e3e5ce2959: Loading layer [==================================================>]  12.29kB/12.29kB
Loaded image: jenkins/jenkins:latest

[root@lbaas01 ~]# docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

jenkins/jenkins     latest              51158f0cf7bc        6 days ago          701MB

Now I am able to start my Jenkins container on this offline CentOS 7 machine.

docker run -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins

After waiting 2-3 minutes for the Jenkins container to fully start, I can log in to my Jenkins. :)
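Rather than sleeping a fixed 2-3 minutes, the container's HTTP port can be polled until Jenkins answers. The helper below is a hypothetical sketch (not part of the original setup); it is demonstrated against a port nothing listens on, so it reports a timeout quickly.

```shell
#!/bin/bash
# Hypothetical helper: poll an HTTP endpoint until it answers or the
# retries run out, instead of sleeping a fixed 2-3 minutes.
wait_for_http() {
  url=$1
  tries=$2
  for _ in $(seq 1 "$tries"); do
    if curl -s -o /dev/null "$url"; then
      echo "up"
      return 0
    fi
    sleep 1
  done
  echo "timeout"
  return 1
}

# For a real Jenkins container this would be http://localhost:8080/;
# here we probe a closed port so the demo finishes fast.
wait_for_http http://127.0.0.1:59999/ 2
echo "probe finished"
```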

NSX-T Routing Path

In this blog, I will show you the routing path for different NSX-T Edge cluster deployment options.

  • The 1st is the simplest scenario: we have an Edge cluster and there is no Tier-1 SR, so only a Tier-0 DR and a Tier-0 SR run in this NSX Edge cluster. In the routing path diagram, I used the orange line to show the northbound path and the dark green line to show the southbound path.

Pattern1

  • In the 2nd scenario, the Tier-1 vRouter includes a Tier-1 DR and a Tier-1 SR. Both the Tier-1 SR and the Tier-0 SR run in the same NSX Edge cluster. This design provides NAT and firewall functions at the Tier-1 level via the Tier-1 SR. In the routing path diagram, I used the orange line to show the northbound path and the dark green line to show the southbound path.

Pattern2

 

  • In the 3rd scenario, we have 2 Edge clusters:
    • NSX-T T1 Edge Cluster: dedicated for Tier-1 SR/SRs, which is dedicated for running centralized service (e.g. NAT);
    • NSX-T T0 Edge Cluster: dedicated for Tier-0 SR/SRs, which provides uplink connectivity to the physical infrastructure;

This option gives better scalability and creates isolated service domains for Tier-0 and Tier-1. As before, the orange line shows the northbound path and the dark green line shows the southbound path in the diagram below:

 

Pattern3

SR-IOV Performance on a CentOS 7 VM

This blog demonstrates the network performance (network throughput only) of an SR-IOV-enabled CentOS 7 virtual machine running on vSphere 6. Regarding vSphere 6.5 support for SR-IOV, please refer to the link below:

Single Root I/O Virtualization

My testing environment is on IBM Cloud:

Virtual machine specification:

  • 4 vCPU/16 GB memory;
  • OS: CentOS Linux release 7.4.1708 (Core)
  • Reserve All Guest Memory (this is a mandatory requirement for SR-IOV, but I enabled it for all testing VMs)

ESXi hosts: we use 2 ESXi hosts (host10 and host11) for our testing. SR-IOV is enabled on a 10G NIC.

  • Supermicro PIO-618U-T4T+-ST031,
  • Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  • 512GB Memory

Host10

ESXi host specification

Host11

ESXi host specification-host11

Testing tool: iperf3 version 3.1.3. Default settings are used.

Note: I have only 4 VMs running on the 2 vSphere ESXi hosts in my testing environment, to remove the impact of resource contention. In addition, all 4 VMs are in the same layer 2 network to remove any potential bottleneck when performing the network throughput testing with iperf3.

SR-IOV01

Virtual Machine1 (Standard VM)

  • Hostname: Networktest0
  • IP Address: 10.139.36.178
  • ESXi Host:  host10

[root@networktest0 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest0 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

NetworkTest0

Virtual Machine2 (Standard VM)

  • Hostname: Networktest1 
  • IP Address: 10.139.36.179
  • ESXi host: host11

[root@networktest1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest1 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes

NetworkTest1

Virtual Machine3 (SR-IOV enabled)

  • Hostname: srIOV 
  • IP Address: 10.139.36.180
  • ESXi host: host10

[root@sriov ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)   <- the same Ethernet controller (X540-AT2) as the vSphere ESXi host

[root@sriov ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

srIOV

Virtual Machine4 (SR-IOV enabled)

  • Hostname: srIOV1
  • IP Address: 10.139.36.181
  • ESXi host: host11

[root@sriov1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
[root@sriov1 ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
srIOV1

Test1: from Virtual Machine1 to Virtual Machine2:

[root@networktest0 ~]# iperf3 -c 10.139.36.179 -t 300

[ 4] 290.00-291.00 sec 809 MBytes 6.79 Gbits/sec 29 725 KBytes
[ 4] 291.00-292.00 sec 802 MBytes 6.72 Gbits/sec 32 680 KBytes
[ 4] 292.00-293.00 sec 631 MBytes 5.30 Gbits/sec 52 711 KBytes
[ 4] 293.00-294.00 sec 773 MBytes 6.48 Gbits/sec 9 902 KBytes
[ 4] 294.00-295.00 sec 800 MBytes 6.71 Gbits/sec 27 856 KBytes
[ 4] 295.00-296.00 sec 801 MBytes 6.72 Gbits/sec 36 790 KBytes
[ 4] 296.00-297.00 sec 774 MBytes 6.49 Gbits/sec 52 694 KBytes
[ 4] 297.00-298.00 sec 815 MBytes 6.83 Gbits/sec 30 656 KBytes
[ 4] 298.00-299.00 sec 649 MBytes 5.45 Gbits/sec 35 689 KBytes
[ 4] 299.00-300.00 sec 644 MBytes 5.40 Gbits/sec 57 734 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec 10797 sender
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec receiver

iperf Done.
[root@networktest0 ~]#

Test2: from Virtual Machine2 to Virtual Machine1

[root@networktest1 ~]# iperf3 -c 10.139.36.178 -t 300
Connecting to host 10.139.36.178, port 5201
[ 4] local 10.139.36.179 port 54844 connected to 10.139.36.178 port 5201

[ 4] 290.00-291.00 sec 794 MBytes 6.66 Gbits/sec 6 908 KBytes
[ 4] 291.00-292.00 sec 811 MBytes 6.80 Gbits/sec 8 871 KBytes
[ 4] 292.00-293.00 sec 810 MBytes 6.80 Gbits/sec 10 853 KBytes
[ 4] 293.00-294.00 sec 810 MBytes 6.79 Gbits/sec 12 819 KBytes
[ 4] 294.00-295.00 sec 811 MBytes 6.80 Gbits/sec 19 783 KBytes
[ 4] 295.00-296.00 sec 810 MBytes 6.79 Gbits/sec 14 747 KBytes
[ 4] 296.00-297.00 sec 776 MBytes 6.51 Gbits/sec 9 639 KBytes
[ 4] 297.00-298.00 sec 778 MBytes 6.52 Gbits/sec 7 874 KBytes
[ 4] 298.00-299.00 sec 809 MBytes 6.78 Gbits/sec 13 851 KBytes
[ 4] 299.00-300.00 sec 810 MBytes 6.80 Gbits/sec 11 810 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec 4269 sender
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec receiver

iperf Done.

Test3:  from Virtual Machine3 to Virtual Machine4

[root@sriov ~]# iperf3 -c 10.139.36.181 -t 300 -V
iperf 3.1.3
Linux sriov 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:15:18 GMT
Connecting to host 10.139.36.181, port 5201
Cookie: sriov.1511072118.047298.4aefd6730c42
TCP MSS: 1448 (default)
[ 4] local 10.139.36.180 port 56330 connected to 10.139.36.181 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.09 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.10 MBytes
[ 4] 2.00-3.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.13 MBytes

[ 4] 290.00-291.00 sec 1.06 GBytes 9.14 Gbits/sec 15 1.12 MBytes
[ 4] 291.00-292.00 sec 1.06 GBytes 9.09 Gbits/sec 13 928 KBytes
[ 4] 292.00-293.00 sec 1.05 GBytes 9.00 Gbits/sec 26 1003 KBytes
[ 4] 293.00-294.00 sec 1.07 GBytes 9.22 Gbits/sec 115 1.06 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.23 MBytes
[ 4] 295.00-296.00 sec 1.06 GBytes 9.10 Gbits/sec 79 942 KBytes
[ 4] 296.00-297.00 sec 1.05 GBytes 9.03 Gbits/sec 29 1.02 MBytes
[ 4] 297.00-298.00 sec 1.08 GBytes 9.25 Gbits/sec 6 1005 KBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1005 KBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1005 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec 12656 sender
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec receiver
CPU Utilization: local/sender 13.0% (0.2%u/12.9%s), remote/receiver 41.5% (1.1%u/40.4%s)

iperf Done.

Test4:  from Virtual Machine4 to Virtual Machine3

[root@sriov1 ~]# iperf3 -c 10.139.36.180 -t 300 -V
iperf 3.1.3
Linux sriov1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:30:09 GMT
Connecting to host 10.139.36.180, port 5201
Cookie: sriov1.1511073009.840403.56876d65774
TCP MSS: 1448 (default)
[ 4] local 10.139.36.181 port 46602 connected to 10.139.36.180 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.38 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.40 MBytes

[ 4] 289.00-290.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.31 MBytes
[ 4] 290.00-291.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.31 MBytes
[ 4] 291.00-292.00 sec 1.09 GBytes 9.41 Gbits/sec 329 945 KBytes
[ 4] 292.00-293.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.09 MBytes
[ 4] 293.00-294.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 295.00-296.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.27 MBytes
[ 4] 296.00-297.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 297.00-298.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.38 MBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec 14395 sender
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec receiver
CPU Utilization: local/sender 13.9% (0.2%u/13.7%s), remote/receiver 39.6% (1.0%u/38.6%s)

iperf Done.
[root@sriov1 ~]#

We can see that the SR-IOV-enabled CentOS 7 VMs achieve ~9.4 Gbit/s throughput for both inbound and outbound traffic, which is very close to wire-speed forwarding on a 10G port.
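A quick back-of-the-envelope comparison of the two 300-second summary averages from the iperf3 runs above (5.89 Gbit/s for the VMXNET3 pair in Test1 vs. 9.37 Gbit/s for the SR-IOV pair in Test3):

```shell
# Ratio of the SR-IOV summary bandwidth to the VMXNET3 summary bandwidth,
# taken from the iperf3 results above.
awk 'BEGIN { printf "SR-IOV speedup: %.2fx\n", 9.37 / 5.89 }'
# prints "SR-IOV speedup: 1.59x"
```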

vRA7.3 and NSX Integration: Network Security Data Collection Failure

We are building vRA 7.3. We added vCenter and NSX Manager as endpoints in vRA and associated the NSX Manager with vCenter. All of the computing resource data collection works well, but not NSX (network and security):

So in the vRA reservation, we can only see the vSphere cluster and vDS port-groups/logical switches, but not transport zones or security groups/tags.

When checking the logs, we see the following:

Workflow ‘vSphereVCNSInventory’ failed with the following exception:

One or more errors occurred.

Inner Exception: An error occurred while sending the request.

at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)

at DynamicOps.VCNSModel.Interface.NSXClient.GetDatacenters()

at DynamicOps.VCNSModel.Activities.CollectDatacenters.Execute(CodeActivityContext context)

at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)

at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

Inner Exception:

VCNS Workflow failure

I tried to delete the NSX endpoint and recreate it from vRA, but no luck. I raised the issue in the VMware community but couldn't get any really valuable feedback.

After a few hours of investigation, I finally found a fix: run the “Create a NSX endpoint” workflow in vRO as shown below.

2017-07-26_184701

Then I restarted the network & security data collection in vRA. Everything works, and I can see all the defined NSX transport zones, security groups, and DLRs in the vRA network reservations.

Hope this fix can help others who have the same issue.