NSX-T Routing Path

In this blog, I will show you the routing path for different NSX-T Edge cluster deployment options.

  • The 1st is the simplest scenario: we have an Edge Cluster and there is no Tier-1 SR, so we will only have a Tier-0 DR and a Tier-0 SR running in this NSX Edge Cluster. In the routing path diagram, I used the orange line to show the northbound path and the dark green line to show the southbound path.

Pattern1

  • In the 2nd scenario, the Tier-1 vRouter includes a Tier-1 DR and a Tier-1 SR. Both the Tier-1 SR and the Tier-0 SR run in the same NSX Edge Cluster. This design provides NAT and firewall functions at the Tier-1 level via the Tier-1 SR. In the routing path diagram, I used the orange line to show the northbound path and the dark green line to show the southbound path.

Pattern2

 

  • In the 3rd scenario, we have 2 Edge clusters:
    • NSX-T T1 Edge Cluster: dedicated to Tier-1 SRs, which run centralized services (e.g. NAT);
    • NSX-T T0 Edge Cluster: dedicated to Tier-0 SRs, which provide uplink connectivity to the physical infrastructure;

This option gives better scalability and creates isolated service domains for Tier-0 and Tier-1. As before, I used the orange line to show the northbound path and the dark green line to show the southbound path in the diagram below:

 

Pattern3

Install PowerCLI and PowerNSX Offline on RHEL7

With the release of PowerCLI 10.0.0, VMware added support for macOS and Linux! Now you can install PowerCLI and PowerNSX on Linux systems including RHEL, CentOS and Ubuntu, as well as on macOS. To complete the installation of VMware PowerCLI 10 and PowerNSX, you first need to install PowerShell Core 6.0.

In most enterprise environments, we won't be lucky enough to have Internet access on all of our Red Hat RHEL systems. In this blog, I will show you how to install PowerShell, PowerCLI and PowerNSX offline on Red Hat Enterprise Linux Server.

Software versions:

Red Hat Enterprise Linux Server release 7.5 (Maipo)

PowerShell v6.0.2

VMware PowerCLI 10.1.1

VMware PowerNSX 3.0.1110

Step 0: Prerequisite

You need another Windows workstation/server, or a Linux machine, which has Internet access and PowerShell installed, so that we can download all the required packages.

In addition, make sure that your RHEL system meets the following prerequisites:

  • openssl-devel package (version 1.0.2k or above) installed

[root@localhost Powershell]# rpm -qa | grep openssl
openssl-1.0.2k-12.el7.x86_64
xmlsec1-openssl-1.2.20-7.el7_4.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
openssl-devel-1.0.2k-12.el7.x86_64

  • "Development Tools" package group installed

You can find out which packages are included in the "Development Tools" group with the CLI: yum group info "Development Tools"
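If your RHEL host can reach an internal yum repository (or a mounted installation ISO configured as a local repo), the whole group can be installed in one command:

yum groupinstall -y "Development Tools"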

Step 1: Install PowerShell v6.0.2

Go to website https://packages.microsoft.com/rhel/7/prod/ to download the required packages including dotnet and powershell.
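For example, on the Internet-connected machine (the PowerShell rpm file name below is the one installed later in this step; the dotnet rpm file names differ, so check the repository listing for the exact versions):

wget https://packages.microsoft.com/rhel/7/prod/powershell-6.0.2-1.rhel.7.x86_64.rpm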

dotnet

pwsh

  • Install the following dotnet packages via "rpm -ivh"; after installation, "rpm -qa" shows:

[root@localhost yum.repos.d]# rpm -qa | grep dotn
dotnet-runtime-2.0.5-2.0.5-1.x86_64
dotnet-runtime-deps-2.1-2.1.0-1.x86_64
dotnet-hostfxr-2.0.5-2.0.5-1.x86_64
dotnet-sdk-2.1.4-2.1.4-1.x86_64
dotnet-host-2.1.0-1.x86_64

  • Install PowerShell 6.0.2

rpm -ivh powershell-6.0.2-1.rhel.7.x86_64.rpm

After you have successfully installed PowerShell, you need to create a "Modules" directory for the PowerCLI and PowerNSX modules. This "Modules" directory is /home/username/.local/share/powershell/Modules (under your home directory) for the current user, or /usr/local/share/powershell/Modules for all users.
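For example, as the root user (the paths used throughout this post):

mkdir -p /root/.local/share/powershell/Modules

or, to make the modules available to all users:

mkdir -p /usr/local/share/powershell/Modules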

Step 2: Install PowerCLI Core

Since PowerCLI version 6.5, you can no longer download the PowerCLI package from VMware directly; you have to connect to the PowerShell Gallery via the Internet to install PowerCLI. As our RHEL has no Internet access, we first need to use "Save-Module" on the Internet-connected machine to download the latest PowerCLI package, then upload it to our RHEL system for installation.

Save-Module -Name VMware.PowerCLI -Path /root/powershell/powercli10

After uploading all sub-directories to the RHEL server, copy all the directories/files into the "Modules" directory which you created in Step 1.
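A sketch of the copy, assuming the Save-Module output was uploaded to /root/powershell/powercli10 on the RHEL server (adjust the source path to wherever you uploaded the files):

cp -r /root/powershell/powercli10/* /root/.local/share/powershell/Modules/

The resulting "Modules" directory looks like this: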

[root@localhost powershell]# cd Modules/
[root@localhost Modules]# ls -al
total 4
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 .
drwxr-xr-x. 5 root root 54 Jun 18 19:51 ..
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.DeployAutomation
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.ImageBuilder
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.PowerCLI
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.Vim
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Cis.Core
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Cloud
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Common
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Core
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.VimAutomation.HA
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.VimAutomation.HorizonView
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.License
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Nsxt
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.PCloud
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Sdk
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Srm
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Storage
drwxr-xr-x. 3 root root 21 Jun 19 08:52 VMware.VimAutomation.StorageUtility
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Vds
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Vmc
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.vROps
drwxr-xr-x. 3 root root 27 Jun 19 08:52 VMware.VumAutomation

Now your PowerCLI is nearly ready for use.

Issue "pwsh" from bash to start PowerShell:

[root@localhost Modules]# pwsh
PowerShell v6.0.2
Copyright (c) Microsoft Corporation. All rights reserved.

https://aka.ms/pscore6-docs
Type ‘help’ to get help.

PS /root/.local/share/powershell/Modules>

As the VMware PowerCLI 10 release notes state, not all modules are supported with PowerShell Core 6.0 on RHEL. So before you import the PowerCLI modules, you have to change the "VMware.PowerCLI.psd1" file to load only the supported modules. The location of the "VMware.PowerCLI.psd1" file is shown below:

[root@localhost 10.1.1.8827524]# pwd
/root/.local/share/powershell/Modules/VMware.PowerCLI/10.1.1.8827524
[root@localhost 10.1.1.8827524]# ls -al
total 64
drwxr-xr-x. 2 root root 115 Jun 19 09:45 .
drwxr-xr-x. 3 root root 28 Jun 19 08:51 ..
-rw-r--r--. 1 root root 15196 Jun 18 21:57 PSGetModuleInfo.xml
-rw-r--r--. 1 root root 16413 Jun 14 10:36 VMware.PowerCLI.cat
-rw-r--r--. 1 root root 11603 Jun 14 10:36 VMware.PowerCLI.ps1
-rw-r--r--. 1 root root 14692 Jun 19 09:45 VMware.PowerCLI.psd1

Edit the above file as below (comment out each line which includes an unsupported module by adding # at the beginning):

# Modules that must be imported into the global environment prior to importing this module
RequiredModules = @(
@{"ModuleName"="VMware.VimAutomation.Sdk";"ModuleVersion"="10.1.0.8342078"}
@{"ModuleName"="VMware.VimAutomation.Common";"ModuleVersion"="10.1.0.8342134"}
@{"ModuleName"="VMware.Vim";"ModuleVersion"="6.7.0.8343295"}
@{"ModuleName"="VMware.VimAutomation.Core";"ModuleVersion"="10.1.0.8344055"}
#@{"ModuleName"="VMware.VimAutomation.Srm";"ModuleVersion"="10.0.0.7893900"}
#@{"ModuleName"="VMware.VimAutomation.License";"ModuleVersion"="10.0.0.7893904"}
@{"ModuleName"="VMware.VimAutomation.Vds";"ModuleVersion"="10.1.0.8344219"}
@{"ModuleName"="VMware.VimAutomation.Vmc";"ModuleVersion"="10.0.0.7893902"}
@{"ModuleName"="VMware.VimAutomation.Nsxt";"ModuleVersion"="10.1.0.8346947"}
#@{"ModuleName"="VMware.VimAutomation.vROps";"ModuleVersion"="10.0.0.7893921"}
@{"ModuleName"="VMware.VimAutomation.Cis.Core";"ModuleVersion"="10.1.0.8377811"}
#@{"ModuleName"="VMware.VimAutomation.HA";"ModuleVersion"="6.5.4.7567193"}
#@{"ModuleName"="VMware.VimAutomation.HorizonView";"ModuleVersion"="7.5.0.8827468"}
#@{"ModuleName"="VMware.VimAutomation.PCloud";"ModuleVersion"="10.0.0.7893924"}
#@{"ModuleName"="VMware.VimAutomation.Cloud";"ModuleVersion"="10.0.0.7893901"}
#@{"ModuleName"="VMware.DeployAutomation";"ModuleVersion"="6.7.0.8250345"}
#@{"ModuleName"="VMware.ImageBuilder";"ModuleVersion"="6.7.0.8250345"}
@{"ModuleName"="VMware.VimAutomation.Storage";"ModuleVersion"="10.1.0.8313015"}
@{"ModuleName"="VMware.VimAutomation.StorageUtility";"ModuleVersion"="1.2.0.0"}
#@{"ModuleName"="VMware.VumAutomation";"ModuleVersion"="6.5.1.7862888"}
)
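If you prefer to script the edit, a sed sketch like the following comments out one module's line; repeat the expression for each unsupported module above, and back the file up first (run from the 10.1.1.8827524 directory):

cp VMware.PowerCLI.psd1 VMware.PowerCLI.psd1.bak
sed -i '/VMware.ImageBuilder/s/^/#/' VMware.PowerCLI.psd1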

If we do not remove the unsupported modules from the RequiredModules list, we will see an error like below:

Import-Module : The VMware.ImageBuilder module is not currently supported on the Core edition of PowerShell.
At line:1 char:1
+ import-module VMware.PowerCLI
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (The VMware.Imag… of PowerShell.:String) [Import-Module], RuntimeException
+ FullyQualifiedErrorId : The VMware.ImageBuilder module is not currently supported on the Core edition of PowerShell.,Microsoft.PowerShell.Commands.ImportModuleCommand 

Now you are ready to import PowerCLI modules.

PS /root/.local/share/powershell/Modules> import-module VMware.PowerCLI
Welcome to VMware PowerCLI!

Log in to a vCenter Server or ESX host: Connect-VIServer
To find out what commands are available, type: Get-VICommand
To show searchable help for all PowerCLI commands: Get-PowerCLIHelp
Once you’ve connected, display all virtual machines: Get-VM
If you need more help, visit the PowerCLI community: Get-PowerCLICommunity

Copyright (C) VMware, Inc. All rights reserved.

PS /root/.local/share/powershell/Modules>

However, when you use the cmdlet Connect-VIServer to connect to a vCenter server, you will see an error similar to this:

Error
Connect-VIServer : 06/22/18 11:22:26 AM Connect-VIServer The libcurl library in use (7.29.0) and its SSL backend (“NSS/3.21 Basic ECC”) do not support custom handling of certificates. A libcurl built with OpenSSL is required.

The cause of this error is that the RHEL libcurl library is too old and does not support OpenSSL. Please refer to the following link, which describes how to fix the above issue by getting curl 7.52.1 installed:

https://www.opentechshed.com/powercli-core-on-centos-7/
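The fix boils down to building curl 7.52.1 against OpenSSL; a minimal sketch of those steps (the download URL and configure flags are assumptions, see the linked article for the complete procedure, including replacing the system libcurl):

wget https://curl.haxx.se/download/curl-7.52.1.tar.gz
tar -xzf curl-7.52.1.tar.gz
cd curl-7.52.1
./configure --with-ssl
make && make install

After the upgrade, the new curl reports OpenSSL as its SSL backend: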

[root@localhost ~]# curl --version
curl 7.52.1 (x86_64-pc-linux-gnu) libcurl/7.52.1 OpenSSL/1.0.2k zlib/1.2.7
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL libz UnixSockets HTTPS-proxy

When we try the "Connect-VIServer" cmdlet again, we see another error. This will happen when you connect to vCenter via IP, or when your RHEL host thinks the received certificate is not valid:

Connect-VIServer : 6/21/18 11:40:16 AM Connect-VIServer Error: Invalid server certificate. Use Set-PowerCLIConfiguration to set the value for the InvalidCertificateAction option to Ignore to ignore the certificate errors for this connection.
Additional Information: Could not establish trust relationship for the SSL/TLS secure channel with authority ‘10.1.1.2’.
At line:1 char:1
+ Connect-VIServer -Server 10.1.1.2
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [Connect-VIServer], ViSecurityNegotiationException
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_CertificateError,VMware.VimAutomation.ViCore.Cmdlets.Commands.ConnectVIServer

We have two options here:

  1. Get a valid certificate for the vCenter;
  2. Change the PowerCLI configuration to disable SSL certificate verification;

Although option 2 is not good from a security point of view, I still show it here so that we can go ahead with the PowerNSX installation.

PS /root/.local/share/powershell/Modules> Set-PowerCLIConfiguration -InvalidCertificateAction ignore -confirm:$false

Scope    ProxyPolicy    DefaultVIServerMode InvalidCertificateAction DisplayDeprecationWarnings WebOperationTimeoutSeconds
-----    -----------    ------------------- ------------------------ -------------------------- --------------------------
Session  UseSystemProxy Multiple            Ignore                   True                       300
User                                        Ignore
AllUsers

Step 3: Install PowerNSX

  • Create a sub-directory called “PowerNSX” under “Modules” directory

[root@localhost powershell]# cd Modules/
[root@localhost Modules]# ls -al
total 4
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 .
drwxr-xr-x. 5 root root 54 Jun 18 19:51 ..
drwxr-xr-x. 2 root root 48 Jun 19 14:01 PowerNSX

  • Download the PowerNSX package from GitHub (https://github.com/vmware/powernsx) and upload the downloaded zip file to the RHEL server. Then unzip it and copy the following 2 files into the PowerNSX directory (see the example commands after this list):

PowerNSX.psd1
PowerNSX.psm1
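A sketch of those steps, assuming the GitHub zip unpacks into a powernsx-master directory (the layout inside the zip may differ, so locate the two files first):

unzip powernsx-master.zip
find powernsx-master -name 'PowerNSX.ps[dm]1' -exec cp {} /root/.local/share/powershell/Modules/PowerNSX/ \;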

[root@localhost Modules]# ls -al PowerNSX/
total 1572
drwxr-xr-x. 2 root root 48 Jun 19 14:01 .
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 ..
-rwxr-xr-x. 1 root root 15738 Jun 19 14:01 PowerNSX.psd1
-rwxr-xr-x. 1 root root 1588500 Jun 19 14:00 PowerNSX.psm1

Now you are ready to start using PowerNSX on RHEL. In my example, I query the current transport zone and create a logical switch called PowerNSX within the found NSX transport zone.

PS /root/.local/share/powershell/Modules/PowerNSX> Import-Module PowerNSX
PS /root/.local/share/powershell/Modules/PowerNSX> Get-Module

ModuleType Version Name ExportedCommands
———- ——- —- —————-
Manifest 3.1.0.0 Microsoft.PowerShell.Management {Add-Content, Clear-Content, Clear-Item, Clear-ItemProperty…}
Manifest 3.1.0.0 Microsoft.PowerShell.Utility {Add-Member, Add-Type, Clear-Variable, Compare-Object…}
Script 3.0.1110 PowerNSX {Add-NsxDynamicCriteria, Add-NsxDynamicMemberSet, Add-NsxEdgeInterfaceAddress, Add-NsxFirewallExclusionListMember…}
Script 1.2 PSReadLine {Get-PSReadlineKeyHandler, Get-PSReadlineOption, Remove-PSReadlineKeyHandler, Set-PSReadlineKeyHandler…}
Manifest 10.1.1…. VMware.PowerCLI
Script 6.7.0.8… VMware.Vim
Script 10.1.0…. VMware.VimAutomation.Cis.Core {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script 10.1.0…. VMware.VimAutomation.Common
Script 10.1.0…. VMware.VimAutomation.Core {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script 10.1.0…. VMware.VimAutomation.Nsxt {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtService}
Script 10.1.0…. VMware.VimAutomation.Sdk {Get-InstallPath, Get-PSVersion}
Script 10.1.0…. VMware.VimAutomation.Storage {Add-KeyManagementServer, Copy-VDisk, Export-SpbmStoragePolicy, Get-KeyManagementServer…}
Script 1.2.0.0 VMware.VimAutomation.StorageUtility Update-VmfsDatastore
Script 10.1.0…. VMware.VimAutomation.Vds {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script 10.0.0…. VMware.VimAutomation.Vmc {Connect-Vmc, Disconnect-Vmc, Get-VmcService}

PS /root/.local/share/powershell/Modules/PowerNSX> Connect-NsxServer -vCenterServer 10.1.1.2

Windows PowerShell credential request
vCenter Server SSO Credentials
Password for user user1@davidwzhang.com: ***********
 
Using existing PowerCLI connection to 10.1.1.2
 
 
Version             : 6.4.0
BuildNumber         : 7564187
Credential          : System.Management.Automation.PSCredential
Server              : 10.1.1.4
Port                : 443
Protocol            : https
UriPrefix           :
ValidateCertificate : False
VIConnection        : 10.1.1.2
DebugLogging        : False
DebugLogfile        : \PowerNSXLog-user1@davidwzhang.com:@-2018_06_15_15_25_45.log
 

PS /root/.local/share/powershell/Modules/PowerNSX> Get-NsxTransportZone

objectId           : vdnscope-1
objectTypeName     : VdnScope
vsmUuid            : 42267595-0C79-1E95-35FE-E0A186F24C3B
nodeId             : 0598778a-9c46-46e7-a9c7-850beb6ac7f3
revision           : 14
type               : type
name               : transport-1
description        : transport-1
clientHandle       :
extendedAttributes :
isUniversal        : false
universalRevision  : 0
id                 : vdnscope-1
clusters           : clusters
virtualWireCount   : 59
controlPlaneMode   : UNICAST_MODE
cdoModeEnabled     : false
cdoModeState       : cdoModeState
 
PS /root/.local/share/powershell/Modules/PowerNSX> Get-NsxTransportZone  transport-1 | New-NsxLogicalSwitch -name PowerNSX
objectId              : virtualwire-65
objectTypeName        : VirtualWire
vsmUuid               : 42267595-0C79-1E95-35FE-E0A186F24C3B
nodeId                : 0598778a-9c46-46e7-a9c7-850beb6ac7f3
revision              : 2
type                  : type
name                  : PowerNSX
description           :
clientHandle          :
extendedAttributes    :
isUniversal           : false
universalRevision     : 0
tenantId              :
vdnScopeId            : vdnscope-1
vdsContextWithBacking : vdsContextWithBacking
vdnId                 : 6059
guestVlanAllowed      : false
controlPlaneMode      : UNICAST_MODE
ctrlLsUuid            : d6f2c975-8927-429c-86f7-3ae0b9ecd9fa
macLearningEnabled    : false

 

When we check the NSX Manager, we can see that the PowerNSX logical switch has been created with VXLAN ID 6059.

vxlan

 

SR-IOV Performance on Centos7 VM

This blog demonstrates the network performance (network throughput only) of an SR-IOV enabled CentOS 7 virtual machine running on vSphere 6. Regarding vSphere 6.5 support for SR-IOV, please refer to the link below:

Single Root I/O Virtualization

My testing environment is on IBM Cloud:

Virtual machine specification:

  • 4 vCPU/16G Memory;
  • OS: Centos Linux release 7.4.1708 (core)
  • Reserve All Guest Memory (this is a mandatory requirement for SR-IOV, but I enabled it on all testing VMs)

ESXi hosts: we use 2 ESXi hosts (host10 and host11) for our testing. SR-IOV is enabled on a 10G NIC.

  • Supermicro PIO-618U-T4T+-ST031,
  • Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  • 512GB Memory

Host10

ESXi host specification

Host11

ESXi host specification-host11

Testing tool: IPerf3 version 3.1.3. Default settings are used.

Note: I have only 4 VMs running on the 2 vSphere ESXi hosts in my testing environment, to remove the impact of resource contention. In addition, all 4 VMs are in the same layer 2 network to remove any potential bottleneck when performing the network throughput testing using IPerf3.
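On the receiving VM of each test below, IPerf3 simply runs in server mode, listening on the default port 5201 that appears in the client output:

iperf3 -s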

SR-IOV01

Virtual Machine1 (Standard VM)

  • Hostname: Networktest0
  • IP Address: 10.139.36.178
  • ESXi Host: host10

[root@networktest0 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest0 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

NetworkTest0

Virtual Machine2 (Standard VM)

  • Hostname: Networktest1 
  • IP Address: 10.139.36.179
  • ESXi host: host11

[root@networktest1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest1 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes

NetworkTest1

Virtual Machine3 (SR-IOV enabled)

  • Hostname: srIOV 
  • IP Address: 10.139.36.180
  • ESXi host: host10

[root@sriov ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

Note that this is a virtual function of the same Ethernet controller (X540-AT2) as the one on the vSphere ESXi host.

[root@sriov ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

srIOV

Virtual Machine4 (SR-IOV enabled)

  • Hostname: srIOV1
  • IP Address: 10.139.36.181
  • ESXi host: host11

[root@sriov1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
[root@sriov1 ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

srIOV1

Test1: from Virtual Machine1 to Virtual Machine2:

[root@networktest0 ~]# iperf3 -c 10.139.36.179 -t 300

[ 4] 290.00-291.00 sec 809 MBytes 6.79 Gbits/sec 29 725 KBytes
[ 4] 291.00-292.00 sec 802 MBytes 6.72 Gbits/sec 32 680 KBytes
[ 4] 292.00-293.00 sec 631 MBytes 5.30 Gbits/sec 52 711 KBytes
[ 4] 293.00-294.00 sec 773 MBytes 6.48 Gbits/sec 9 902 KBytes
[ 4] 294.00-295.00 sec 800 MBytes 6.71 Gbits/sec 27 856 KBytes
[ 4] 295.00-296.00 sec 801 MBytes 6.72 Gbits/sec 36 790 KBytes
[ 4] 296.00-297.00 sec 774 MBytes 6.49 Gbits/sec 52 694 KBytes
[ 4] 297.00-298.00 sec 815 MBytes 6.83 Gbits/sec 30 656 KBytes
[ 4] 298.00-299.00 sec 649 MBytes 5.45 Gbits/sec 35 689 KBytes
[ 4] 299.00-300.00 sec 644 MBytes 5.40 Gbits/sec 57 734 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec 10797 sender
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec receiver

iperf Done.
[root@networktest0 ~]#

Test2: from Virtual Machine2 to Virtual Machine1

[root@networktest1 ~]# iperf3 -c 10.139.36.178 -t 300
Connecting to host 10.139.36.178, port 5201
[ 4] local 10.139.36.179 port 54844 connected to 10.139.36.178 port 5201

[ 4] 290.00-291.00 sec 794 MBytes 6.66 Gbits/sec 6 908 KBytes
[ 4] 291.00-292.00 sec 811 MBytes 6.80 Gbits/sec 8 871 KBytes
[ 4] 292.00-293.00 sec 810 MBytes 6.80 Gbits/sec 10 853 KBytes
[ 4] 293.00-294.00 sec 810 MBytes 6.79 Gbits/sec 12 819 KBytes
[ 4] 294.00-295.00 sec 811 MBytes 6.80 Gbits/sec 19 783 KBytes
[ 4] 295.00-296.00 sec 810 MBytes 6.79 Gbits/sec 14 747 KBytes
[ 4] 296.00-297.00 sec 776 MBytes 6.51 Gbits/sec 9 639 KBytes
[ 4] 297.00-298.00 sec 778 MBytes 6.52 Gbits/sec 7 874 KBytes
[ 4] 298.00-299.00 sec 809 MBytes 6.78 Gbits/sec 13 851 KBytes
[ 4] 299.00-300.00 sec 810 MBytes 6.80 Gbits/sec 11 810 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec 4269 sender
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec receiver

iperf Done.

Test3:  from Virtual Machine3 to Virtual Machine4

[root@sriov ~]# iperf3 -c 10.139.36.181 -t 300 -V
iperf 3.1.3
Linux sriov 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:15:18 GMT
Connecting to host 10.139.36.181, port 5201
Cookie: sriov.1511072118.047298.4aefd6730c42
TCP MSS: 1448 (default)
[ 4] local 10.139.36.180 port 56330 connected to 10.139.36.181 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.09 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.10 MBytes
[ 4] 2.00-3.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.13 MBytes

[ 4] 290.00-291.00 sec 1.06 GBytes 9.14 Gbits/sec 15 1.12 MBytes
[ 4] 291.00-292.00 sec 1.06 GBytes 9.09 Gbits/sec 13 928 KBytes
[ 4] 292.00-293.00 sec 1.05 GBytes 9.00 Gbits/sec 26 1003 KBytes
[ 4] 293.00-294.00 sec 1.07 GBytes 9.22 Gbits/sec 115 1.06 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.23 MBytes
[ 4] 295.00-296.00 sec 1.06 GBytes 9.10 Gbits/sec 79 942 KBytes
[ 4] 296.00-297.00 sec 1.05 GBytes 9.03 Gbits/sec 29 1.02 MBytes
[ 4] 297.00-298.00 sec 1.08 GBytes 9.25 Gbits/sec 6 1005 KBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1005 KBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1005 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec 12656 sender
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec receiver
CPU Utilization: local/sender 13.0% (0.2%u/12.9%s), remote/receiver 41.5% (1.1%u/40.4%s)

iperf Done.

Test4:  from Virtual Machine4 to Virtual Machine3

[root@sriov1 ~]# iperf3 -c 10.139.36.180 -t 300 -V
iperf 3.1.3
Linux sriov1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:30:09 GMT
Connecting to host 10.139.36.180, port 5201
Cookie: sriov1.1511073009.840403.56876d65774
TCP MSS: 1448 (default)
[ 4] local 10.139.36.181 port 46602 connected to 10.139.36.180 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.38 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.40 MBytes

[ 4] 289.00-290.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.31 MBytes
[ 4] 290.00-291.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.31 MBytes
[ 4] 291.00-292.00 sec 1.09 GBytes 9.41 Gbits/sec 329 945 KBytes
[ 4] 292.00-293.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.09 MBytes
[ 4] 293.00-294.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 295.00-296.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.27 MBytes
[ 4] 296.00-297.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 297.00-298.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.38 MBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec 14395 sender
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec receiver
CPU Utilization: local/sender 13.9% (0.2%u/13.7%s), remote/receiver 39.6% (1.0%u/38.6%s)

iperf Done.
[root@sriov1 ~]#

We can see that the SR-IOV enabled CentOS 7 VM can achieve ~9.4 Gbit/s throughput for both inbound and outbound traffic, which is very close to wire-speed forwarding for a 10G port.

NSX IPSec Throughput in IBM Softlayer

To understand the real throughput capacity of NSX IPSec in Softlayer, I built a quick IPSec performance testing environment.

Below is the network topology of my testing environment:

NSX_IPSec_Performance_Topology

NSX version: 6.2.4
NSX Edge: X-Large (6 vCPUs and 8GB Memory), which is the largest size NSX offers. All of the Edges in this testing environment reside in the same vSphere cluster, which includes 3 ESXi hosts. Each ESXi host has 64GB DDR4 Memory and 2 processors (2.4GHz Intel Xeon-Haswell (E5-2620-V3-HexCore)).
IPerf Client: Redhat 7.1 (2 vCPUs and 4GB Memory)
IPerf Server: Redhat 7.1 (2 vCPUs and 4GB Memory)
IPerf version: IPerf3

2 IPsec tunnels are built as shown in the above diagram. The IPSec settings are:

  • Encryption: AES-GCM
  • Diffie-Hellman Group: DH5
  • PFS (Perfect Forward Secrecy): Enabled
  • AESNI: Enabled
I include 3 test cases in my testing:

Test1_Bandwidth_Utilisation

  • Test Case 2: 2 IPerf Clients (172.16.31.0/24) to 2 IPerf Servers (172.16.38.0/24) via 1 IPsec Tunnel. Result: around 1.6-2.3Gbit/s in total

Test2_Bandwidth_Utilisation

Test3_Bandwidth_Utilisation
Please note:
  1. Firewall function on NSX Edge is disabled in all test cases.
  2. TCP traffic is used in all 3 test cases. 10 parallel streams are used on each IPerf Client to push the performance test to the max (see the example command after this list).
  3. I didn't see any CPU or memory contention in any test case: the CPU utilisation of the NSX Edge was less than 40% and memory utilisation was nearly zero.

CPU_Mem

Converting between image formats using qemu-img

Converting images from one format to another is generally straightforward.

qemu-img is the tool which I use a lot. qemu-img can convert between different formats including raw, qcow2, qed, vdi, vmdk and vhd.
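The basic invocation is qemu-img convert -f <input format> -O <output format> <input file> <output file>; for example, to convert a VMDK to qcow2 (file names here are placeholders):

qemu-img convert -f vmdk -O qcow2 input.vmdk output.qcow2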

On CentOS, qemu-img is only available for the 64-bit version.

As an example, I will show you how to convert a Netscaler VPX raw image to qcow2 for my unetlab virtual appliance. If you want to know more about unetlab, please go to http://www.unetlab.com

Step 1: Install qemu-img

yum install qemu-img

Step 2: Download the KVM version of Netscaler VPX and extract the raw image from the tgz file

tar -xzvf NSVPX-KVM-10.5-55.8_nc.tgz 

[root@localhost tmp]# ls -al NSVPX-KVM-10.5-55.8_nc.raw
-rw-r--r--. 1 root root 21474836480 Jan 25  2015 NSVPX-KVM-10.5-55.8_nc.raw

Step 3: convert the raw image to qcow2 format image

qemu-img convert -f raw -O qcow2 NSVPX-KVM-10.5-55.8_nc.raw virtioa.qcow2

Then you can see the qcow2 file

[root@localhost tmp]# ls -al

-rw-r--r--.  1 root   root     293076992 Aug  8 01:31 virtioa.qcow2
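You can double-check the converted image with qemu-img info:

qemu-img info virtioa.qcow2

Note how much smaller the qcow2 file is than the 20GB raw image: qcow2 only stores allocated blocks.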

FYI:

VM format

Create Your Own Unetlab Juniper vMX Image

According to Unetlab, it supports the following vMX versions:

14.1R1.10-domestic and 14.1R4.8-domestic

I can get 14.1R1.10 easily from the Internet. However, I can't find a packaged 14.1R4.8 image for Unetlab. I went to the Unetlab community and asked for help; unfortunately, nobody responded. As I need the newer version for my eVPN lab (eVPN HA), I had to do my best to build one myself.

I downloaded the 14.1R4.8 domestic img file from the Internet: jinstall-vmx-14.1R4.8-domestic.img

Interestingly, I ran the CLI qemu-img against it and found that the Juniper image file is already in qcow2 format!

[root@localhost tmp]# qemu-img info jinstall-vmx-14.1R4.8-domestic.img | grep file.format
file format: qcow2

As when adding a normal image, I created a new folder /opt/unetlab/addons/qemu/vmx-14.1R4.8, then uploaded the file "jinstall-vmx-14.1R4.8-domestic.img" to this newly created folder and renamed it to "hda.qcow2".

At this point, in the Unetlab GUI, I am able to select this new Junos version for my vMX. Although I can power on this new vMX version, I still can't really use it for my eVPN lab. The reason is:

Since 14.1R4, Juniper vMX tries to connect to a remote PFE, i.e. a separate virtual machine acting as the PFE. To change this default behaviour and use the local PFE, I need to add a new line, vm_local_rpio="1", to /boot/loader.conf and save the file. The change needs a reboot to take effect. After the reboot, I am able to use this new Juniper vMX in my eVPN lab.
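A sketch of that change, assuming you have dropped into the vMX's FreeBSD shell as root (e.g. via start shell user root):

echo 'vm_local_rpio="1"' >> /boot/loader.conf
reboot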

root@vMX> show version
Hostname: vMX
Model: vmx
Junos: 14.1R4.8
JUNOS Base OS Software Suite [14.1R4.8]
JUNOS Base OS boot [14.1R4.8]
JUNOS Crypto Software Suite [14.1R4.8]
JUNOS Online Documentation [14.1R4.8]
JUNOS Kernel Software Suite [14.1R4.8]
JUNOS Routing Software Suite [14.1R4.8]
JUNOS Runtime Software Suite [14.1R4.8]
JUNOS Services AACL PIC package [14.1R4.8]
JUNOS Services Application Level Gateway [14.1R4.8]
JUNOS Services Application Level Gateway (xlp64) [14.1R4.8]
JUNOS Services Application Level Gateway (xlr64) [14.1R4.8]
JUNOS AppId Services PIC Package [14.1R4.8]
JUNOS Services AppId PIC package (xlr64) [14.1R4.8]
JUNOS Border Gateway Function PIC package [14.1R4.8]
JUNOS Services Captive Portal and Content Delivery PIC package [14.1R4.8]
JUNOS Services HTTP Content Management PIC package [14.1R4.8]
JUNOS Services HTTP Content Management PIC package (xlr64) [14.1R4.8]
JUNOS IDP Services PIC Package [14.1R4.8]
JUNOS Services JFLOW PIC package [14.1R4.8]
JUNOS Services JFLOW PIC package (xlp64) [14.1R4.8]
JUNOS Services LL-PDF PIC package [14.1R4.8]
JUNOS MobileNext PIC package [14.1R4.8]
JUNOS MobileNext PIC package (xlr64) [14.1R4.8]
JUNOS Services Mobile Subscriber Service Container package [14.1R4.8]
JUNOS Services Mobile Subscriber Service PIC package (xlr64) [14.1R4.8]
JUNOS Services NAT PIC package [14.1R4.8]
JUNOS Services NAT PIC package (xlp64) [14.1R4.8]
JUNOS Services NAT PIC package (xlr64) [14.1R4.8]
JUNOS Services PTSP PIC package [14.1R4.8]
JUNOS Services RPM PIC package [14.1R4.8]
JUNOS Services RPM PIC package (xlp64) [14.1R4.8]
JUNOS Services Stateful Firewall PIC package [14.1R4.8]
JUNOS Services Stateful Firewall PIC package (xlp64) [14.1R4.8]
JUNOS Services Stateful Firewall PIC package (xlr64) [14.1R4.8]
JUNOS BSG PIC package [14.1R4.8]
JUNOS Services Crypto Base PIC package [14.1R4.8]
JUNOS Services Crypto Base PIC package [14.1R4.8]
JUNOS Services Crypto Base PIC package(xlr64) [14.1R4.8]
JUNOS Services IPSec PIC package [14.1R4.8]
JUNOS Services IPSec PIC package [14.1R4.8]
JUNOS Services IPSec PIC(xlr64) package [14.1R4.8]
JUNOS Services SSL PIC package [14.1R4.8]
JUNOS Packet Forwarding Engine Trio Simulation Package [14.1R4.8]