Install PowerCLI and PowerNSX Offline on RHEL7

With the release of PowerCLI 10.0.0, VMware added support for macOS and Linux. You can now install PowerCLI and PowerNSX on Linux systems including RHEL, CentOS, and Ubuntu, as well as on macOS. To install VMware PowerCLI 10 and PowerNSX, you first need to install PowerShell Core 6.0.

In most enterprise environments, we won’t be lucky enough to have Internet access on all of our Red Hat systems. In this blog, I will show you how to install PowerShell, PowerCLI and PowerNSX offline on Red Hat Enterprise Linux Server.

Software versions:

  • Red Hat Enterprise Linux Server release 7.5 (Maipo)
  • PowerShell v6.0.2
  • VMware PowerCLI 10.1.1
  • VMware PowerNSX 3.0.1110

Step 0: Prerequisites

You need another Windows or Linux workstation/server that has Internet access and PowerShell installed, so that you can download all of the required packages.

In addition, make sure that your RHEL system meets the following prerequisites:

  • openssl-devel package (version 1.0.2k or above) installed

[root@localhost Powershell]# rpm -qa | grep openssl
openssl-1.0.2k-12.el7.x86_64
xmlsec1-openssl-1.2.20-7.el7_4.x86_64
openssl-libs-1.0.2k-12.el7.x86_64
openssl-devel-1.0.2k-12.el7.x86_64

  • “Development Tools” package group installed

You can find out which packages are included in the “Development Tools” group with: yum group info "Development Tools"

Step 1: Install PowerShell v6.0.2

Go to website https://packages.microsoft.com/rhel/7/prod/ to download the required packages including dotnet and powershell.


  • Install the following dotnet packages via “rpm -ivh”

[root@localhost yum.repos.d]# rpm -qa | grep dotn
dotnet-runtime-2.0.5-2.0.5-1.x86_64
dotnet-runtime-deps-2.1-2.1.0-1.x86_64
dotnet-hostfxr-2.0.5-2.0.5-1.x86_64
dotnet-sdk-2.1.4-2.1.4-1.x86_64
dotnet-host-2.1.0-1.x86_64

  • Install PowerShell 6.0.2

rpm -ivh powershell-6.0.2-1.rhel.7.x86_64.rpm

After you have successfully installed PowerShell, you need to create a “Modules” directory for the PowerCLI and PowerNSX modules. This “Modules” directory lives under /home/username/.local/share/powershell/Modules for the current user, or /usr/local/share/powershell/Modules for all users.
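The per-user directory can be created from bash in one step before starting PowerShell; a minimal sketch (substitute the all-users path if you prefer):

```shell
# PowerShell Core on Linux searches this per-user path for modules.
MODULES_DIR="${HOME}/.local/share/powershell/Modules"
mkdir -p "${MODULES_DIR}"
ls -ld "${MODULES_DIR}"
```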

Step 2: Install PowerCLI Core

Since PowerCLI version 6.5, you can no longer download the PowerCLI package from VMware directly; you have to install it from the PowerShell Gallery over the Internet. As our RHEL server has no Internet access, we first use “Save-Module” on the Internet-connected machine to download the latest PowerCLI package, then upload it to the RHEL system for installation.

Save-Module -Name VMware.PowerCLI -Path /root/powershell/powercli10

After uploading all sub-directories to the RHEL server, copy all of the directories and files into the “Modules” directory which you created in Step 1.
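If you prefer to move the saved module tree as a single file, it can be bundled with tar on the Internet-connected machine and unpacked into “Modules” on the server. A sketch with illustrative paths (the directory created here is an empty stand-in for the tree that Save-Module populates):

```shell
# Stand-in for the directory Save-Module populated (path is illustrative).
mkdir -p /tmp/powercli10/VMware.PowerCLI
tar -czf /tmp/powercli10.tar.gz -C /tmp powercli10
tar -tzf /tmp/powercli10.tar.gz | head -n 2
# Copy the archive to the RHEL server, then on the server:
#   tar -xzf powercli10.tar.gz -C ~/.local/share/powershell/Modules --strip-components=1
```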

[root@localhost powershell]# cd Modules/
[root@localhost Modules]# ls -al
total 4
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 .
drwxr-xr-x. 5 root root 54 Jun 18 19:51 ..
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.DeployAutomation
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.ImageBuilder
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.PowerCLI
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.Vim
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Cis.Core
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Cloud
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Common
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Core
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.VimAutomation.HA
drwxr-xr-x. 3 root root 27 Jun 19 08:51 VMware.VimAutomation.HorizonView
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.License
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Nsxt
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.PCloud
drwxr-xr-x. 3 root root 28 Jun 19 08:51 VMware.VimAutomation.Sdk
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Srm
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Storage
drwxr-xr-x. 3 root root 21 Jun 19 08:52 VMware.VimAutomation.StorageUtility
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Vds
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.Vmc
drwxr-xr-x. 3 root root 28 Jun 19 08:52 VMware.VimAutomation.vROps
drwxr-xr-x. 3 root root 27 Jun 19 08:52 VMware.VumAutomation

Now your PowerCLI is nearly ready for use.

Issue “pwsh” from bash to start PowerShell:

[root@localhost Modules]# pwsh
PowerShell v6.0.2
Copyright (c) Microsoft Corporation. All rights reserved.

https://aka.ms/pscore6-docs
Type ‘help’ to get help.

PS /root/.local/share/powershell/Modules>

As the VMware PowerCLI 10 release notes state, not all modules are supported with PowerShell Core 6.0 on RHEL. So before you import the PowerCLI modules, you have to edit the “VMware.PowerCLI.psd1” file so that it only loads the supported modules. The location of the “VMware.PowerCLI.psd1” file is shown below:

[root@localhost 10.1.1.8827524]# pwd
/root/.local/share/powershell/Modules/VMware.PowerCLI/10.1.1.8827524
[root@localhost 10.1.1.8827524]# ls -al
total 64
drwxr-xr-x. 2 root root 115 Jun 19 09:45 .
drwxr-xr-x. 3 root root 28 Jun 19 08:51 ..
-rw-r--r--. 1 root root 15196 Jun 18 21:57 PSGetModuleInfo.xml
-rw-r--r--. 1 root root 16413 Jun 14 10:36 VMware.PowerCLI.cat
-rw-r--r--. 1 root root 11603 Jun 14 10:36 VMware.PowerCLI.ps1
-rw-r--r--. 1 root root 14692 Jun 19 09:45 VMware.PowerCLI.psd1

Edit the file as below, commenting out each line that includes an unsupported module by adding # at the beginning:

# Modules that must be imported into the global environment prior to importing this module
RequiredModules = @(
@{"ModuleName"="VMware.VimAutomation.Sdk";"ModuleVersion"="10.1.0.8342078"}
@{"ModuleName"="VMware.VimAutomation.Common";"ModuleVersion"="10.1.0.8342134"}
@{"ModuleName"="VMware.Vim";"ModuleVersion"="6.7.0.8343295"}
@{"ModuleName"="VMware.VimAutomation.Core";"ModuleVersion"="10.1.0.8344055"}
#@{"ModuleName"="VMware.VimAutomation.Srm";"ModuleVersion"="10.0.0.7893900"}
#@{"ModuleName"="VMware.VimAutomation.License";"ModuleVersion"="10.0.0.7893904"}
@{"ModuleName"="VMware.VimAutomation.Vds";"ModuleVersion"="10.1.0.8344219"}
@{"ModuleName"="VMware.VimAutomation.Vmc";"ModuleVersion"="10.0.0.7893902"}
@{"ModuleName"="VMware.VimAutomation.Nsxt";"ModuleVersion"="10.1.0.8346947"}
#@{"ModuleName"="VMware.VimAutomation.vROps";"ModuleVersion"="10.0.0.7893921"}
@{"ModuleName"="VMware.VimAutomation.Cis.Core";"ModuleVersion"="10.1.0.8377811"}
#@{"ModuleName"="VMware.VimAutomation.HA";"ModuleVersion"="6.5.4.7567193"}
#@{"ModuleName"="VMware.VimAutomation.HorizonView";"ModuleVersion"="7.5.0.8827468"}
#@{"ModuleName"="VMware.VimAutomation.PCloud";"ModuleVersion"="10.0.0.7893924"}
#@{"ModuleName"="VMware.VimAutomation.Cloud";"ModuleVersion"="10.0.0.7893901"}
#@{"ModuleName"="VMware.DeployAutomation";"ModuleVersion"="6.7.0.8250345"}
#@{"ModuleName"="VMware.ImageBuilder";"ModuleVersion"="6.7.0.8250345"}
@{"ModuleName"="VMware.VimAutomation.Storage";"ModuleVersion"="10.1.0.8313015"}
@{"ModuleName"="VMware.VimAutomation.StorageUtility";"ModuleVersion"="1.2.0.0"}
#@{"ModuleName"="VMware.VumAutomation";"ModuleVersion"="6.5.1.7862888"}
)
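The unsupported entries can also be commented out with sed rather than by hand; a minimal sketch (the file path and the two-module list here are illustrative, not the full unsupported set):

```shell
# Build a tiny sample of the RequiredModules list, then comment out the
# modules that PowerShell Core on RHEL does not support.
PSD1=/tmp/VMware.PowerCLI.psd1
cat > "$PSD1" <<'EOF'
@{"ModuleName"="VMware.VimAutomation.Core";"ModuleVersion"="10.1.0.8344055"}
@{"ModuleName"="VMware.ImageBuilder";"ModuleVersion"="6.7.0.8250345"}
EOF
for m in VMware.ImageBuilder VMware.VumAutomation; do
  sed -i "s/^@{\"ModuleName\"=\"$m\"/#&/" "$PSD1"
done
cat "$PSD1"
```

Running the same loop against the real VMware.PowerCLI.psd1 with the full unsupported-module list achieves the edit shown above in one pass.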

If the unsupported modules have not been commented out of the required-modules list, you will see an error like this when importing:

Import-Module : The VMware.ImageBuilder module is not currently supported on the Core edition of PowerShell.
At line:1 char:1
+ import-module VMware.PowerCLI
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (The VMware.Imag… of PowerShell.:String) [Import-Module], RuntimeException
+ FullyQualifiedErrorId : The VMware.ImageBuilder module is not currently supported on the Core edition of PowerShell.,Microsoft.PowerShell.Commands.ImportModuleCommand 

Now you are ready to import PowerCLI modules.

PS /root/.local/share/powershell/Modules> import-module VMware.PowerCLI
Welcome to VMware PowerCLI!

Log in to a vCenter Server or ESX host: Connect-VIServer
To find out what commands are available, type: Get-VICommand
To show searchable help for all PowerCLI commands: Get-PowerCLIHelp
Once you’ve connected, display all virtual machines: Get-VM
If you need more help, visit the PowerCLI community: Get-PowerCLICommunity

Copyright (C) VMware, Inc. All rights reserved.

PS /root/.local/share/powershell/Modules>

However, when you use the Connect-VIServer cmdlet to connect to a vCenter server, you will see an error similar to this:

Connect-VIServer : 06/22/18 11:22:26 AM Connect-VIServer The libcurl library in use (7.29.0) and its SSL backend (“NSS/3.21 Basic ECC”) do not support custom handling of certificates. A libcurl built with OpenSSL is required.

The cause of this error is that the stock RHEL libcurl is built against NSS rather than OpenSSL. Please refer to the following link, which explains how to fix the issue by installing curl 7.52.1 built with OpenSSL:

https://www.opentechshed.com/powercli-core-on-centos-7/

[root@localhost ~]# curl --version
curl 7.52.1 (x86_64-pc-linux-gnu) libcurl/7.52.1 OpenSSL/1.0.2k zlib/1.2.7
Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL libz UnixSockets HTTPS-proxy
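After the rebuild, it is worth verifying which SSL backend libcurl reports; a small check based on the banner format above:

```shell
# PowerCLI certificate handling needs a libcurl built with OpenSSL, not NSS.
backend=$(curl --version | head -n 1)
echo "$backend"
case "$backend" in
  *OpenSSL*) echo "OpenSSL backend: OK" ;;
  *NSS*)     echo "NSS backend: rebuild curl against OpenSSL" ;;
  *)         echo "backend unclear; check the banner above" ;;
esac
```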

When we try the “Connect-VIServer” cmdlet again, we see another error. This happens when you connect to vCenter by IP address, or when your RHEL host considers the received certificate invalid:

Connect-VIServer : 6/21/18 11:40:16 AM Connect-VIServer Error: Invalid server certificate. Use Set-PowerCLIConfiguration to set the value for the InvalidCertificateAction option to Ignore to ignore the certificate errors for this connection.
Additional Information: Could not establish trust relationship for the SSL/TLS secure channel with authority ‘10.1.1.2’.
At line:1 char:1
+ Connect-VIServer -Server 10.1.1.2
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : SecurityError: (:) [Connect-VIServer], ViSecurityNegotiationException
+ FullyQualifiedErrorId : Client20_ConnectivityServiceImpl_Reconnect_CertificateError,VMware.VimAutomation.ViCore.Cmdlets.Commands.ConnectVIServer

We have two options here:

  1. Get a valid certificate for vCenter;
  2. Change the PowerCLI configuration to disable SSL certificate verification.

Although option 2 is not good from a security point of view, I show it here so that we can move on to the PowerNSX installation.

PS /root/.local/share/powershell/Modules> Set-PowerCLIConfiguration -InvalidCertificateAction ignore -confirm:$false

Scope    ProxyPolicy    DefaultVIServerMode InvalidCertificateAction DisplayDeprecationWarnings WebOperationTimeoutSeconds
-----    -----------    ------------------- ------------------------ -------------------------- --------------------------
Session  UseSystemProxy Multiple            Ignore                   True                       300
User                                        Ignore
AllUsers

Step 3: Install PowerNSX

  • Create a sub-directory called “PowerNSX” under “Modules” directory

[root@localhost powershell]# cd Modules/
[root@localhost Modules]# ls -al
total 4
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 .
drwxr-xr-x. 5 root root 54 Jun 18 19:51 ..
drwxr-xr-x. 2 root root 48 Jun 19 14:01 PowerNSX

  • Download the PowerNSX package from GitHub (https://github.com/vmware/powernsx) and upload the zip file to the RHEL server. Then unzip it and copy the following two files into the PowerNSX directory:

PowerNSX.psd1
PowerNSX.psm1

[root@localhost Modules]# ls -al PowerNSX/
total 1572
drwxr-xr-x. 2 root root 48 Jun 19 14:01 .
drwxr-xr-x. 24 root root 4096 Jun 19 13:59 ..
-rwxr-xr-x. 1 root root 15738 Jun 19 14:01 PowerNSX.psd1
-rwxr-xr-x. 1 root root 1588500 Jun 19 14:00 PowerNSX.psm1

Now you are ready to start using PowerNSX on RHEL. In my example, I query the current transport zone and create a logical switch called PowerNSX within the transport zone that is found.

PS /root/.local/share/powershell/Modules/PowerNSX> Import-Module PowerNSX
PS /root/.local/share/powershell/Modules/PowerNSX> Get-Module

ModuleType Version Name ExportedCommands
---------- ------- ---- ----------------
Manifest 3.1.0.0 Microsoft.PowerShell.Management {Add-Content, Clear-Content, Clear-Item, Clear-ItemProperty…}
Manifest 3.1.0.0 Microsoft.PowerShell.Utility {Add-Member, Add-Type, Clear-Variable, Compare-Object…}
Script 3.0.1110 PowerNSX {Add-NsxDynamicCriteria, Add-NsxDynamicMemberSet, Add-NsxEdgeInterfaceAddress, Add-NsxFirewallExclusionListMember…}
Script 1.2 PSReadLine {Get-PSReadlineKeyHandler, Get-PSReadlineOption, Remove-PSReadlineKeyHandler, Set-PSReadlineKeyHandler…}
Manifest 10.1.1…. VMware.PowerCLI
Script 6.7.0.8… VMware.Vim
Script 10.1.0…. VMware.VimAutomation.Cis.Core {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script 10.1.0…. VMware.VimAutomation.Common
Script 10.1.0…. VMware.VimAutomation.Core {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script 10.1.0…. VMware.VimAutomation.Nsxt {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtService}
Script 10.1.0…. VMware.VimAutomation.Sdk {Get-InstallPath, Get-PSVersion}
Script 10.1.0…. VMware.VimAutomation.Storage {Add-KeyManagementServer, Copy-VDisk, Export-SpbmStoragePolicy, Get-KeyManagementServer…}
Script 1.2.0.0 VMware.VimAutomation.StorageUtility Update-VmfsDatastore
Script 10.1.0…. VMware.VimAutomation.Vds {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script 10.0.0…. VMware.VimAutomation.Vmc {Connect-Vmc, Disconnect-Vmc, Get-VmcService}

PS /root/.local/share/powershell/Modules/PowerNSX> Connect-NsxServer -vCenterServer 10.1.1.2

Windows PowerShell credential request
vCenter Server SSO Credentials
Password for user user1@davidwzhang.com: ***********
 
Using existing PowerCLI connection to 10.1.1.2
 
 
Version             : 6.4.0
BuildNumber         : 7564187
Credential          : System.Management.Automation.PSCredential
Server              : 10.1.1.4
Port                : 443
Protocol            : https
UriPrefix           :
ValidateCertificate : False
VIConnection        : 10.1.1.2
DebugLogging        : False
DebugLogfile        : \PowerNSXLog-user1@davidwzhang.com:@-2018_06_15_15_25_45.log
 

PS /root/.local/share/powershell/Modules/PowerNSX> Get-NsxTransportZone

objectId           : vdnscope-1
objectTypeName     : VdnScope
vsmUuid            : 42267595-0C79-1E95-35FE-E0A186F24C3B
nodeId             : 0598778a-9c46-46e7-a9c7-850beb6ac7f3
revision           : 14
type               : type
name               : transport-1
description        : transport-1
clientHandle       :
extendedAttributes :
isUniversal        : false
universalRevision  : 0
id                 : vdnscope-1
clusters           : clusters
virtualWireCount   : 59
controlPlaneMode   : UNICAST_MODE
cdoModeEnabled     : false
cdoModeState       : cdoModeState
 
PS /root/.local/share/powershell/Modules/PowerNSX> Get-NsxTransportZone  transport-1 | New-NsxLogicalSwitch -name PowerNSX
objectId              : virtualwire-65
objectTypeName        : VirtualWire
vsmUuid               : 42267595-0C79-1E95-35FE-E0A186F24C3B
nodeId                : 0598778a-9c46-46e7-a9c7-850beb6ac7f3
revision              : 2
type                  : type
name                  : PowerNSX
description           :
clientHandle          :
extendedAttributes    :
isUniversal           : false
universalRevision     : 0
tenantId              :
vdnScopeId            : vdnscope-1
vdsContextWithBacking : vdsContextWithBacking
vdnId                 : 6059
guestVlanAllowed      : false
controlPlaneMode      : UNICAST_MODE
ctrlLsUuid            : d6f2c975-8927-429c-86f7-3ae0b9ecd9fa
macLearningEnabled    : false

 

When we check NSX Manager, we can see that the PowerNSX logical switch has been created with VXLAN ID 6059.

 

Wireshark Filter for SSL Traffic

Useful Wireshark display filters for analysing SSL/TLS traffic.

Client Hello:

ssl.handshake.type == 1

Server Hello:

ssl.handshake.type == 2

NewSessionTicket:

ssl.handshake.type == 4

Certificate:

ssl.handshake.type == 11

CertificateRequest:

ssl.handshake.type == 13

ServerHelloDone:

ssl.handshake.type == 14

Note: seeing “ServerHelloDone” indicates a full TLS handshake, rather than an abbreviated (session-resumption) handshake.

Cipher Suites:

ssl.handshake.ciphersuite
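These filters can be combined with the standard display-filter operators; for example, to isolate full handshakes with one particular server (the IP address is illustrative):

ssl.handshake.type == 14 && ip.addr == 10.1.1.2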

Please note:

More and more deployments require stronger security mechanisms such as Perfect Forward Secrecy (PFS). To provide PFS, the cipher suite needs to use Ephemeral Diffie-Hellman (DHE) or Elliptic-curve Diffie-Hellman Ephemeral (ECDHE) for the key exchange. In those cases, the server’s private key cannot be used to decrypt captured traffic.

 

SR-IOV Performance on Centos7 VM

This blog demonstrates the network performance (network throughput only) of an SR-IOV enabled CentOS 7 virtual machine running on vSphere 6. Regarding vSphere 6.5 support for SR-IOV, please refer to the link below:

Single Root I/O Virtualization

My testing environment is on IBM Cloud:

Virtual machine specification:

  • 4 vCPU / 16 GB memory
  • OS: CentOS Linux release 7.4.1708 (Core)
  • Reserve All Guest Memory (mandatory for SR-IOV; I enabled it on all testing VMs)

ESXi hosts: we use two ESXi hosts (host10 and host11) for testing. SR-IOV is enabled on a 10G NIC.

  • Supermicro PIO-618U-T4T+-ST031
  • Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  • 512GB Memory

Host10

ESXi host specification

Host11

ESXi host specification-host11

Testing tool: iperf3 version 3.1.3, with default settings.

Note: I have only these four VMs running on the two vSphere ESXi hosts in my testing environment, to remove the impact of resource contention. In addition, all four VMs are in the same layer 2 network, to remove any potential bottleneck when performing the network throughput tests with iperf3.


Virtual Machine1 (Standard VM)

  • Hostname: Networktest0
  • IP Address: 10.139.36.178
  • ESXi Host:  host10

[root@networktest0 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest0 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no


Virtual Machine2 (Standard VM)

  • Hostname: Networktest1 
  • IP Address: 10.139.36.179
  • ESXi host: host11

[root@networktest1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest1 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes


Virtual Machine3 (SR-IOV enabled)

  • Hostname: srIOV 
  • IP Address: 10.139.36.180
  • ESXi host: host10

[root@sriov ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

Note: this is a virtual function of the ESXi host’s physical Ethernet controller (X540-AT2).

[root@sriov ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no


Virtual Machine4 (SR-IOV enabled)

  • Hostname: srIOV1
  • IP Address: 10.139.36.181
  • ESXi host: host11

[root@sriov1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
[root@sriov1 ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

Test1: from Virtual Machine1 to Virtual Machine2:

[root@networktest0 ~]# iperf3 -c 10.139.36.179 -t 300

[ 4] 290.00-291.00 sec 809 MBytes 6.79 Gbits/sec 29 725 KBytes
[ 4] 291.00-292.00 sec 802 MBytes 6.72 Gbits/sec 32 680 KBytes
[ 4] 292.00-293.00 sec 631 MBytes 5.30 Gbits/sec 52 711 KBytes
[ 4] 293.00-294.00 sec 773 MBytes 6.48 Gbits/sec 9 902 KBytes
[ 4] 294.00-295.00 sec 800 MBytes 6.71 Gbits/sec 27 856 KBytes
[ 4] 295.00-296.00 sec 801 MBytes 6.72 Gbits/sec 36 790 KBytes
[ 4] 296.00-297.00 sec 774 MBytes 6.49 Gbits/sec 52 694 KBytes
[ 4] 297.00-298.00 sec 815 MBytes 6.83 Gbits/sec 30 656 KBytes
[ 4] 298.00-299.00 sec 649 MBytes 5.45 Gbits/sec 35 689 KBytes
[ 4] 299.00-300.00 sec 644 MBytes 5.40 Gbits/sec 57 734 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec 10797 sender
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec receiver

iperf Done.
[root@networktest0 ~]#

Test2: from Virtual Machine2 to Virtual Machine1

[root@networktest1 ~]# iperf3 -c 10.139.36.178 -t 300
Connecting to host 10.139.36.178, port 5201
[ 4] local 10.139.36.179 port 54844 connected to 10.139.36.178 port 5201

[ 4] 290.00-291.00 sec 794 MBytes 6.66 Gbits/sec 6 908 KBytes
[ 4] 291.00-292.00 sec 811 MBytes 6.80 Gbits/sec 8 871 KBytes
[ 4] 292.00-293.00 sec 810 MBytes 6.80 Gbits/sec 10 853 KBytes
[ 4] 293.00-294.00 sec 810 MBytes 6.79 Gbits/sec 12 819 KBytes
[ 4] 294.00-295.00 sec 811 MBytes 6.80 Gbits/sec 19 783 KBytes
[ 4] 295.00-296.00 sec 810 MBytes 6.79 Gbits/sec 14 747 KBytes
[ 4] 296.00-297.00 sec 776 MBytes 6.51 Gbits/sec 9 639 KBytes
[ 4] 297.00-298.00 sec 778 MBytes 6.52 Gbits/sec 7 874 KBytes
[ 4] 298.00-299.00 sec 809 MBytes 6.78 Gbits/sec 13 851 KBytes
[ 4] 299.00-300.00 sec 810 MBytes 6.80 Gbits/sec 11 810 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec 4269 sender
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec receiver

iperf Done.

Test3:  from Virtual Machine3 to Virtual Machine4

[root@sriov ~]# iperf3 -c 10.139.36.181 -t 300 -V
iperf 3.1.3
Linux sriov 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:15:18 GMT
Connecting to host 10.139.36.181, port 5201
Cookie: sriov.1511072118.047298.4aefd6730c42
TCP MSS: 1448 (default)
[ 4] local 10.139.36.180 port 56330 connected to 10.139.36.181 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.09 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.10 MBytes
[ 4] 2.00-3.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.13 MBytes

[ 4] 290.00-291.00 sec 1.06 GBytes 9.14 Gbits/sec 15 1.12 MBytes
[ 4] 291.00-292.00 sec 1.06 GBytes 9.09 Gbits/sec 13 928 KBytes
[ 4] 292.00-293.00 sec 1.05 GBytes 9.00 Gbits/sec 26 1003 KBytes
[ 4] 293.00-294.00 sec 1.07 GBytes 9.22 Gbits/sec 115 1.06 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.23 MBytes
[ 4] 295.00-296.00 sec 1.06 GBytes 9.10 Gbits/sec 79 942 KBytes
[ 4] 296.00-297.00 sec 1.05 GBytes 9.03 Gbits/sec 29 1.02 MBytes
[ 4] 297.00-298.00 sec 1.08 GBytes 9.25 Gbits/sec 6 1005 KBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1005 KBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1005 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec 12656 sender
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec receiver
CPU Utilization: local/sender 13.0% (0.2%u/12.9%s), remote/receiver 41.5% (1.1%u/40.4%s)

iperf Done.

Test4:  from Virtual Machine4 to Virtual Machine3

[root@sriov1 ~]# iperf3 -c 10.139.36.180 -t 300 -V
iperf 3.1.3
Linux sriov1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:30:09 GMT
Connecting to host 10.139.36.180, port 5201
Cookie: sriov1.1511073009.840403.56876d65774
TCP MSS: 1448 (default)
[ 4] local 10.139.36.181 port 46602 connected to 10.139.36.180 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.38 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.40 MBytes

[ 4] 289.00-290.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.31 MBytes
[ 4] 290.00-291.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.31 MBytes
[ 4] 291.00-292.00 sec 1.09 GBytes 9.41 Gbits/sec 329 945 KBytes
[ 4] 292.00-293.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.09 MBytes
[ 4] 293.00-294.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 295.00-296.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.27 MBytes
[ 4] 296.00-297.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 297.00-298.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.38 MBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec 14395 sender
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec receiver
CPU Utilization: local/sender 13.9% (0.2%u/13.7%s), remote/receiver 39.6% (1.0%u/38.6%s)

iperf Done.
[root@sriov1 ~]#

We can see that an SR-IOV enabled CentOS 7 VM can achieve ~9.4 Gbit/s of throughput for both inbound and outbound traffic, which is very close to wire-speed forwarding for a 10G port.
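As a sanity check on the wire-speed claim: with the 1448-byte MSS shown in the iperf3 output, the theoretical maximum TCP goodput on 10 GbE can be estimated from the fixed per-frame overhead (preamble + interframe gap 20 B, Ethernet header + FCS 18 B, IP header 20 B, TCP header with timestamps 32 B):

```shell
# Theoretical TCP goodput for MSS 1448 on a 10 Gbit/s link.
awk 'BEGIN {
  mss = 1448                     # TCP payload per frame (from "TCP MSS: 1448")
  overhead = 20 + 18 + 20 + 32   # preamble/IFG + Ethernet + IP + TCP options
  printf "max goodput: %.2f Gbit/s\n", 10 * mss / (mss + overhead)
}'
```

This works out to about 9.41 Gbit/s, so the 9.37-9.41 Gbit/s results in Tests 3 and 4 are effectively line rate.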

Create XML file in vRealize Orchestrator for NSX Automation

The NSX API uses XML for communication. To automate NSX in VMware vRealize Orchestrator (vRO), you need to build the XML payloads in JavaScript, as vRO workflows support JavaScript only. Here I show you an example of how to do it.

The target here is to create a security group and add a simple firewall rule in this newly created security group.

Note: this vRO workflow has two inputs (securityGroupName and description) and two attributes (nsxManagerRestHost and realtime; realtime is set to the sgID obtained in Step 1).

Step 1: create a security group

var xmlbody = new XML('<securitygroup />');
xmlbody.objectId = " ";
xmlbody.type.typeName = " ";
xmlbody.description = description;
xmlbody.name = securityGroupName;
xmlbody.revision = 0;
xmlbody.objectTypeName = " ";
System.log(xmlbody);
var request = nsxManagerRestHost.createRequest("POST", "/2.0/services/securitygroup/bulk/globalroot-0", xmlbody.toString());
request.contentType = "application/xml";
System.log("Creating a SecurityGroup " + securityGroupName);
System.log("POST Request URL: " + request.fullUrl);
var response = request.execute();
if (response.statusCode == 201) {
	System.debug("Successfully created Security Group " + securityGroupName);
	}
else {
	throw("Failed to create SecurityGroup " + securityGroupName);
	}
sgID = response.getAllHeaders().get("Location").split('/').pop();
realtime=sgID

Step 2: add a DFW section and add a firewall rule

//create XML object for DFW source;
var rulesources = new XML('<sources excluded="false" />');
rulesources.source.name = " ";
rulesources.source.value = "10.47.161.23";
rulesources.source.type = "Ipv4Address";
rulesources.source.isValid = 'true';
System.log("Source: "+rulesources);

//create XML object for DFW destination;
var ruledestionations = new XML('<destinations excluded="false" />');
ruledestionations.destination.name = " ";
ruledestionations.destination.value = "10.47.161.24";
ruledestionations.destination.type = "Ipv4Address";
ruledestionations.destination.isValid = 'true';
System.log("Destination: "+ruledestionations);

//create XML object for DFW service
var ruleservices = new XML('<services />');
ruleservices.service.destinationPort = "80";
ruleservices.service.protocol = "6";
ruleservices.service.subProtocol = "6";
ruleservices.service.isValid = 'true';
System.log("Service: "+ruleservices);

//create XML object for the whole rule
var xmlbodyrule = new XML('<rule disabled="false" logged="true" />');
xmlbodyrule.name = "vro created rule";
xmlbodyrule.action = "allow";
xmlbodyrule.notes = " ";
xmlbodyrule.appliedToList.appliedTo.name = securityGroupName;
xmlbodyrule.appliedToList.appliedTo.value = realtime;
xmlbodyrule.appliedToList.appliedTo.type = 'SecurityGroup';
xmlbodyrule.appliedToList.appliedTo.isValid = 'true';
xmlbodyrule.sectionId = " ";
xmlbodyrule.sources = rulesources;
xmlbodyrule.destinations = ruledestionations;
xmlbodyrule.services = ruleservices;

//create XML object for the DFW section (an E4X literal; vRO's Rhino engine supports this)
var xmlbody = new XML(<section name={securityGroupName} />);
xmlbody.rule = xmlbodyrule;
System.log("XML file for new rules: " + xmlbody);

var request = nsxManagerRestHost.createRequest("POST", "/4.0/firewall/globalroot-0/config/layer3sections", xmlbody.toString());
request.contentType = "application/xml";
var response = request.execute();
if (response.statusCode == 201) {
	System.debug("Successfully created DFW section " + securityGroupName);
	}
else {
	throw("Failed to create DFW section " + securityGroupName);
	}

Below is the XML payload generated for creating the security group:

<securitygroup>
  <objectId></objectId>
  <type>
    <typeName></typeName>
  </type>
  <description>nsx1001test</description>
  <name>nsx1001test</name>
  <revision>0</revision>
  <objectTypeName></objectTypeName>
</securitygroup>
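If you want to prototype this payload outside vRO, the same security-group XML can be assembled with Python's standard library. This is only a sketch for experimenting with the payload shape; the element names are taken from the output above:

```python
import xml.etree.ElementTree as ET

def build_security_group_xml(name: str) -> str:
    """Assemble the <securitygroup> payload shown above."""
    sg = ET.Element("securitygroup")
    ET.SubElement(sg, "objectId")                     # left empty: NSX assigns it
    ET.SubElement(ET.SubElement(sg, "type"), "typeName")
    ET.SubElement(sg, "description").text = name
    ET.SubElement(sg, "name").text = name
    ET.SubElement(sg, "revision").text = "0"
    ET.SubElement(sg, "objectTypeName")
    return ET.tostring(sg, encoding="unicode")

print(build_security_group_xml("nsx1001test"))
```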

And here is the XML payload for creating the NSX DFW section with a simple firewall rule:

<section name="nsx1001test">
  <rule disabled="false" logged="true">
    <name>vro created rule</name>
    <action>allow</action>
    <notes></notes>
    <appliedToList>
      <appliedTo>
        <name>nsx1001test</name>
        <value>securitygroup-947</value>
        <type>SecurityGroup</type>
        <isValid>true</isValid>
      </appliedTo>
    </appliedToList>
    <sectionId></sectionId>
    <sources excluded="false">
      <source>
        <name></name>
        <value>10.47.161.23</value>
        <type>Ipv4Address</type>
        <isValid>true</isValid>
      </source>
    </sources>
    <destinations excluded="false">
      <destination>
        <name></name>
        <value>10.47.161.24</value>
        <type>Ipv4Address</type>
        <isValid>true</isValid>
      </destination>
    </destinations>
    <services>
      <service>
        <destinationPort>80</destinationPort>
        <protocol>6</protocol>
        <subProtocol>6</subProtocol>
        <isValid>true</isValid>
      </service>
    </services>
  </rule>
</section>
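The section payload can be prototyped the same way. Below is a trimmed Python sketch that builds the core of the rule shown above (some optional fields such as notes, appliedToList and subProtocol are omitted for brevity; the full schema is as printed in the listing):

```python
import xml.etree.ElementTree as ET

def build_section_xml(section_name: str, rule_name: str,
                      src_ip: str, dst_ip: str, port: str) -> str:
    """Assemble a DFW section containing one allow rule,
    mirroring the structure of the payload printed above."""
    section = ET.Element("section", name=section_name)
    rule = ET.SubElement(section, "rule", disabled="false", logged="true")
    ET.SubElement(rule, "name").text = rule_name
    ET.SubElement(rule, "action").text = "allow"
    sources = ET.SubElement(rule, "sources", excluded="false")
    src = ET.SubElement(sources, "source")
    ET.SubElement(src, "value").text = src_ip
    ET.SubElement(src, "type").text = "Ipv4Address"
    dests = ET.SubElement(rule, "destinations", excluded="false")
    dst = ET.SubElement(dests, "destination")
    ET.SubElement(dst, "value").text = dst_ip
    ET.SubElement(dst, "type").text = "Ipv4Address"
    services = ET.SubElement(rule, "services")
    svc = ET.SubElement(services, "service")
    ET.SubElement(svc, "destinationPort").text = port
    ET.SubElement(svc, "protocol").text = "6"  # protocol 6 = TCP
    return ET.tostring(section, encoding="unicode")

print(build_section_xml("nsx1001test", "vro created rule",
                        "10.47.161.23", "10.47.161.24", "80"))
```

The resulting string is what gets POSTed to /4.0/firewall/globalroot-0/config/layer3sections in the vRO code above.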

NSX Load Balancer Quick Summary

Recently, my team and customers asked me a lot of questions about the capabilities of the NSX load balancer, so I put together this quick summary to ease my life.

NSX can perform L4 or L7 load balancing:

  • L4 Load Balancing (packet-based load balancing): supports TCP and UDP load balancing, and is based on Linux Virtual Server (LVS).
  • L7 Load Balancing (socket-based load balancing): supports TCP and TCP-based applications (e.g. HTTPS), and is based on HAProxy.

Note that SSL load balancing requires L7 load balancing.

There are 3 options for SSL load balancing:

  • SSL Passthrough:
    • The NSX load balancer doesn’t terminate the client session; it only passes the SSL traffic through;
    • Session persistence: SSL session ID or source IP
  • SSL Offload:
    • The client SSL session is terminated on the NSX load balancer, and a clear-text (e.g. HTTP) session is initiated from the NSX load balancer to the backend server;
    • Session persistence: cookie, SSL session ID or source IP
  • SSL End-to-End:
    • The client SSL session is terminated on the NSX load balancer, and a new SSL session is initiated from the NSX load balancer to the backend server;
    • Session persistence: cookie, SSL session ID or source IP

Tips:

  1. L4 and L7 virtual servers can co-exist on the same NSX load balancer;
  2. The NSX load balancer can use one or more security groups as pool members, which means virtual machines are added to the load-balancing pool automatically once they join the right security group. This is especially useful when a cloud VM is re-provisioned and its IP changes;
  3. Transparent mode load balancing is not recommended due to its complexity and potential performance issues;
  4. In proxy mode, you can use the HTTP X-Forwarded-For header to preserve the client source IP in the request;
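On tip 4: X-Forwarded-For is a standard header whose value is a comma-separated list, with the original client IP first and any intermediate proxies appended after it. A small illustration (the header value is an assumed example):

```python
def client_ip_from_xff(xff_header: str) -> str:
    """Return the original client IP: the first entry in
    the comma-separated X-Forwarded-For header value."""
    return xff_header.split(",")[0].strip()

# Assumed example: client IP followed by two intermediate proxies.
print(client_ip_from_xff("203.0.113.7, 10.1.1.5, 10.1.1.9"))  # 203.0.113.7
```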

Limitations and Constraints:

  1. No support for integration with HSM (Hardware Security Module);
  2. Because the NSX load balancer uses secondary IPs on its vNIC, the number of virtual IPs doesn’t scale well;
  3. Lack of fine-grained security control for traffic to virtual servers;
  4. NSX can’t provide service monitoring as rich as F5 BIG-IP or Citrix NetScaler;

 

New Ansible F5 HTTPs Health Monitor Module

I finally got time this weekend to test the newly released dev version of the Ansible F5 HTTPS health monitor module. The test results look good: most common use cases are covered properly.

Below is my first playbook for my testing:

# This version is to create a new https health monitor
---
- name: f5 config
  hosts:  lb.davidwzhang.com
  connection: local
  vars:
    ports:
      - 443
  tasks:
    - name: create https health monitor
      bigip_monitor_https:
        state:  "present"
        #state: "absent"
        name: "ansible-httpshealthmonitor"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        send: "GET /cgi-bin/env.sh HTTP/1.1\r\nHost: 192.168.72.28\r\nConnection: Close\r\n"
        receive: "web"
        interval: "3"
        timeout: "10"
      delegate_to:  localhost

After running the playbook, I logged in to my F5 BIG-IP VE and saw that the HTTPS health monitor had been created successfully.
f5 https healthmonitor

I then tried to create another HTTPS health monitor, this time including basic authentication (admin/password) and a customized alias address and alias service port (8443).
Playbook:

# This version is to create a new HTTPS health monitor
---
- name: f5 config
  hosts:  lb.davidwzhang.com
  connection: local
  vars:
    ports:
      - 443
  tasks:
    - name: create https health monitor
      bigip_monitor_https:
        state:  "present"
        #state: "absent"
        name: "ansible-httpshealthmonitor02"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        ip: "192.168.72.128"
        port: "8443"
        send: "GET /cgi-bin/env.sh\r\n"
        receive: "200"
        interval: "3"
        timeout: "10"
        target_username: "admin"
        target_password: "password"
      delegate_to:  localhost

In F5, you can see the below:
f5 https healthmonitor02

In addition, you may have noticed that I commented out a line in the above two playbooks:

#state: "absent"

You can use it to remove the health monitor.

vRA7.3 and NSX Integration: Network Security Data Collection Failure

We were building vRA 7.3. We added vCenter and NSX Manager as endpoints in vRA and associated the NSX Manager with vCenter. All compute resource data collection worked well, but NSX (network and security) data collection did not:

As a result, in the vRA reservation we could only see the vSphere cluster and vDS port groups/logical switches, but no transport zones or security groups/tags.

When checking the logs, we saw the following:

Workflow ‘vSphereVCNSInventory’ failed with the following exception:

One or more errors occurred.

Inner Exception: An error occurred while sending the request.

at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)

at DynamicOps.VCNSModel.Interface.NSXClient.GetDatacenters()

at DynamicOps.VCNSModel.Activities.CollectDatacenters.Execute(CodeActivityContext context)

at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)

at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

Inner Exception:

VCNS Workflow failure

I tried deleting the NSX endpoint and recreating it from vRA, with no luck. I also raised the issue in the VMware community but didn’t get any really useful feedback.

After a few hours of investigation, I finally found a fix:

Run the “create a NSX endpoint” workflow directly in vRO, as shown below.


Then I re-ran network & security data collection in vRA. Everything worked, and I could see all the defined NSX transport zones, security groups and DLRs in the vRA network reservations.

Hope this fix can help others who have the same issue.