Setting Up Federated Identity Management for VMC on AWS – Authentication with Okta IdP

The Federated Identity feature of VMware Cloud on AWS can be integrated with any 3rd-party IdP that supports SAML version 2.0. In this integration model, the customer-dedicated vIDM tenant acts as the SAML Service Provider. If the 3rd-party IdP is set up to perform multi-factor authentication (MFA), the customer will be prompted for MFA when accessing VMware Cloud services. In this blog, the integration with one of the most popular IdPs, Okta, will be demonstrated.

Disclaimer:

(1) This is my personal blog, which doesn’t represent my employer. (2) The Okta IdP settings in this blog are used to demo the integration with vIDM and may not be best practice for your environment or meet your business and security requirements.

Note: please complete the first part of the integration described in my first blog of this series (https://wordpress.com/block-editor/post/davidwzhang.com/3080) before moving forward.

To add the same users and user groups to Okta as are configured in the vIDM tenant, we need to integrate Okta with the corporate Active Directory (AD). The integration is done via Okta’s lightweight AD agent.

Click "Directory Integration" in the Okta UI.

Click “Add Active Directory”.

The Active Directory integration setup wizard will start. Click "Set Up Active Directory".

Download the agent as prompted in the window below.

This agent can be installed on Windows Server 2008 R2 or later. The installation of the Okta agent is quite straightforward. Once the agent installation is complete, you need to set up the AD integration. In the basic settings window, select the Organizational Units (OUs) that you’d like to sync users or groups from, and make sure that "Okta username format" is set to use the User Principal Name (UPN).

In the "Build User Profile" window, select any custom schema attributes that need to be included in the Okta user profile and click Next.

Click Done to finish the integration setup.

The Okta directory setting window will pop up.

Enable Just-In-Time provisioning and set Schedule Import to import users every hour. Review and save the settings.

Now go to the Import tab and click “Import Now” to import the users from corporate AD.

As this is the first time users are imported from the corporate AD, select "Full Import" and click Import.

When the scan is finished, Okta will report the result. Click OK.

Select the user(s) to be imported and confirm the user assignment. Note: the user jsmith@lab.local imported here will be used for the final integration testing.

Now it is time to set up the SAML IdP in Okta.

Go to the Applications tab in the Okta Classic UI and click "Add Application".

Click “Create New App”;

Select Web as the Platform, select "SAML 2.0" as the Sign on method, and click Create.

Type in the App name ("csp-vidm" is used as an example here) and click Next.

There are two mandatory configuration items in the "Create SAML Integration" window that pops up. This information can be copied from the Identity Provider settings within the vIDM tenant.

Go to the vIDM tenant administrator console, click "Add Identity Provider" within the "Identity & Access Management" tab, and select "Create Third Party IDP".

Type in the "Identity Provider Name"; here the example name is "Okta01".

Go to the bottom of this IdP creation window and click “Service Provider (SP) Metadata”.

A new window will pop up as the below:

The entity ID and the HTTP-POST location are the required pieces of information for the Okta IdP SAML settings. Copy the entity ID URL into the "Audience URI (SP Entity ID)" field and the HTTP-POST location into the "Single sign on URL" field in the Okta "Create SAML Integration" window.
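
If you prefer the command line, the same two values can also be pulled out of the SP metadata XML with grep (a quick sketch, assuming you saved the metadata from the pop-up window locally as sp.xml):

grep -o 'entityID="[^"]*"' sp.xml      # value for "Audience URI (SP Entity ID)"
grep -o 'Location="[^"]*"' sp.xml      # use the HTTP-POST AssertionConsumerService location for "Single sign on URL"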

Leave all other configuration items at their defaults and click Next.

In the Feedback window, indicate that the newly created app is an internal app and click Finish.

A "Sign On settings" window will pop up as below. Click the "Identity Provider metadata" link.

The Identity Provider metadata is displayed as an XML file. Select all the content of this XML file and copy it.

Paste the Okta IdP metadata into the SAML Metadata field and click "Process IdP Metadata" in the vIDM 3rd-party identity provider creation window.

The “SAML AuthN Request Binding” and “Name ID format mapping from SAML Response” will be updated as below:

Select the "lab.local" directory for the users who can authenticate with this new 3rd-party IdP and leave the Network as the default "ALL RANGES". Then create a new authentication method called "Okta Auth" with the SAML Context "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtected". Please note that the name of this newly created authentication method has to be different from any existing authentication method.

Then leave all other configuration item boxes unchecked and click Add.

The 3rd party IdP has been successfully added now.

The last step of the vIDM setup for this Okta integration is updating the default access policy to use the newly defined authentication method "Okta Auth". Please follow the steps in my previous blog (https://wordpress.com/block-editor/post/davidwzhang.com/308) to perform the required update. The updated default access policy should be similar to the below.

Before testing the setup, go to the Okta UI to assign user(s) to the newly defined SAML 2.0 web application "csp-vidm". Click the Assignments tab.

Click Assign and select “Assign to People”.

In the "Assign csp-vidm to People" window, assign the user John Smith (jsmith@lab.local), which means that John Smith is allowed to use this SAML 2.0 application.

After the assignment is completed, the user John Smith appears under the assignments of the SAML 2.0 application "csp-vidm".

Instead of assigning individual users, AD group/groups can be assigned to the SAML application as well.

Finally, everything is ready to test the integration.

Open a new Incognito window in a Chrome browser, type in the vIDM tenant URL and press Enter.

In the login window, type the username jsmith@lab.local and click Next.

The authentication session is redirected to Okta.

Type in Username & Password and click “Sign In”.

Then John Smith (jsmith@lab.local) successfully logs in to the vIDM tenant.

This is the end of this demo. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Active Directory

VMware Cloud on AWS Federated Identity management supports different kinds of authentication methods. This blog will demo the basic method: authentication with the customer corporate Active Directory (AD).

When VMC on AWS customers use AD for authentication, outbound-only connection mode is highly recommended. This mode does not require any inbound firewall port to be opened: only outbound connectivity from vIDM Connector to VMware SaaS vIDM tenant on port 443 is required. All user and group sync from your enterprise directory and user authentication are handled by the vIDM connector.

To enable outbound-only mode, update the settings of the Built-in Identity Provider. In the user section of the Built-in Identity Provider settings, select the newly created directory "lab.local" and add the newly created connector "vidmcon01.lab.local".

After the connector is added successfully, select "Password (cloud deployment)" under "Connector Authentication Methods" and click Save.

Now it is time to update the access policy to use corporate Active Directory to authenticate VMC users.

Go to Identity & Access Management.

Click "Edit DEFAULT POLICY" and the "Edit Policy" window pops up. Click Next.

Click “ADD POLICY RULE”.

Then the “Add Policy Rule” window will pop up. At this stage, just leave the first two configuration items as default: “ALL RANGES” and “ALL Device Types”. In the “and user belong to group(s)” config item, search and add all 3 synced groups (sddc-admins, sddc-operators and sddc-readonly) to allow the users in these 3 groups to log in.

Add "Password (cloud deployment)" as the authentication method.

Use "Password (Local Directory)" as the fallback authentication method and click Save.

There are now 3 rules defined in the default access policy. Drag the newly defined rule to the top of the rules table, which ensures that the new rule is evaluated first when a user tries to log in.

Now the rules table shows as below. Click Next.

Click Save to keep the changes of the default access policy.

You are now good to test your authentication setup. Open a new Incognito window in your Chrome browser and connect to the vIDM URL. Type in the username (jsmith@lab.local) and click Next.

Type in the Active Directory password for user jsmith@lab.local and click “Sign in”.

Then you can see that jsmith@lab.local has successfully logged in to vIDM!

Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Install and Setup vIDM Connector

As an enterprise using VMware Cloud Services, you can set up federation with your corporate domain. Federating your corporate domain allows you to use your organization’s single sign-on and identity source to sign in to VMware Cloud Services. You can also set up multi-factor authentication as part of federation access policy settings.

Federated identity management allows you to control authentication to your organization and its services by assigning organization and service roles to your enterprise groups.

Set up a federated identity with the VMware Identity Manager service and the VMware Identity Manager connector, which VMware provides at no additional charge.

  1. Download the VMware Identity Manager (vIDM) connector and configure it for user attributes and group sync from your corporate identity store. Note that only the VMware Identity Manager Connector for Windows is supported.
  2. Configure your corporate identity provider instance using the VMware Identity Manager service.
  3. Register your corporate domain.

I am going to create a series of blogs to cover all 3 of these steps.

In this 1st blog of the series, I will show you how to install the vIDM connector (version 19.03) on a Windows 2012 R2 server and how to achieve HA for the vIDM connector.

Prerequisite

  • a vIDM SaaS tenant. If you don’t have one, please contact your VMware customer success representative.
  • a Windows Server (Windows 2008 R2, Windows 2012, Windows 2012 R2 or Windows 2016).
  • Open the firewall rules for communication from Windows Server to domain controllers and vIDM tenant on port 443.
  • vIDM connector for Windows installation package. The latest version of vIDM connector is shown below.

Installation

Log in to the Windows 2012 R2 server and start the installation:

Click Yes in the “User Account Control” window.

Note: the installation package will install the latest major JRE version on the connector Windows server if JRE has not already been installed.

The installation process loads the Installation Wizard.

Click Next in the Installation Wizard window.

Accept the License Agreement as below:

Accept the default installation destination folder and click Next.

Click Next and leave the “Are you migrating your Connector” box unchecked.

Accept the pop-up hostname and default port for this connector.

For the purposes of VMware Cloud federated identity management, please don’t run the Connector service as a domain user account. Leave the "Would you like to run the Connector service as a domain user account?" option box unchecked and click Next.

Click Yes in the pop-up window to confirm the choice from the previous step.

Click Install to begin the installation.

After a few minutes, the installation completes successfully.

Click Finish. A new window will pop up, which shows the Connector appliance management URL as below.

Click Yes. The browser opens and redirects to https://vidmconn01.lab.local:8443. Accept the security certificate alert and continue to the website.

In the VMware Identity Manager Appliance Setup wizard, click Continue.

Set the passwords for the appliance admin account and click Continue.

Now go to the vIDM tenant. In the Identity & Access Management tab, click Add Connector.

Type in the Connector ID Name and click "Generate Activation Code".

Copy the generated activation code and go back to the Connector setup wizard.

Copy the activation code into the Activate Connector Window and click Continue.

Wait for a few minutes then the connector will be activated.

Note: sometimes a 404 error will pop up like the one below. In my experience, it is a false alert on Windows 2012 R2, so don’t worry about it.

In VMware Identity Manager tenant, the newly installed connector will show up as below:

Setup

Now it is time to set up our connector for user sync.

Step 1: Add Directory

Click Add Directory and select “Add Active Directory over LDAP/IWA”.

Type in the "Directory Name", select "Active Directory over LDAP", and use this directory for user sync and authentication. For the "Directory Search Attribute", I prefer to use userPrincipalName rather than sAMAccountName, as the userPrincipalName option works for all Federated Identity management use cases, e.g. integration with Active Directory Federation Services or a 3rd-party IdP.

Then provide the required Bind User Details and click "Save & Next".
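
If the directory save fails, it can help to first confirm that the bind account can query AD over LDAP from any machine with the OpenLDAP client tools. A minimal sketch (the domain controller name and bind DN below are hypothetical examples):

ldapsearch -H ldap://dc01.lab.local \
  -D "CN=svc-vidm,OU=Service Accounts,DC=lab,DC=local" -W \
  -b "DC=lab,DC=local" "(userPrincipalName=jsmith@lab.local)" cn userPrincipalName memberOf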

After a few minutes, the domain will pop up. Click Next.

In the Map User Attributes window, accept the default mappings and click Next.

Type in the group DNs and click “Find Groups”.

Click the “0 of 23” under the column “Groups to sync”.

Select 3 user groups which need to be synced and click Save.

Click Next.

Accept the default setting in the “Select the Users you would like to sync” window and click Next.

In the Review window, click “Sync Directory”

Now it is time to verify the synced users and groups in the vIDM tenant. Go to the "Users & Groups" tab. You can see that 10 users and 3 groups have been synced from the lab.local directory.

You can find the sync log within the configured directory.

Now the basic setup of the vIDM connector has been completed.

Connector HA

A single VMware Identity Manager connector is a single point of failure in an enterprise environment. To achieve high availability, simply install one or more extra connectors; the installation of an extra connector is exactly the same as installing the 1st connector. Here, the second connector is installed on another Windows 2012 R2 server, vidmcon02.lab.local. After the installation is completed, the activation procedure for the connector is the same as well.

Now 2 connectors will show up in the vIDM tenant.

Go to the Built-in identity provider and add the second connector.

Type in the Bind User Password and click “Add Connector”

Then the second connector is added successfully.

Now there are 2 connectors associated with the Built-in Identity Provider.

Please note that connector HA is only for user authentication in version 19.03. Directory or user sync can only be enabled on one connector at a time. In the event of a connector instance failure, authentication is handled automatically by another connector instance. However, for directory sync, you must modify the directory settings in the VMware Identity Manager service to use another connector instance, as shown below.

Thank you very much for reading!

Failed to Start Libvirtd

Environment:

OS: CentOS Linux release 7.5.1804 (Core)

Error Message:

# journalctl -u libvirtd
— Logs begin at Wed 2019-01-30 17:46:41 AEDT, end at Wed 2019-01-30 18:02:09 AEDT. —
Jan 30 17:47:09 ovs-sandbox2 systemd[1]: Starting Virtualization daemon…
Jan 30 17:47:14 ovs-sandbox2 libvirtd[1483]: 2019-01-30 06:47:14.936+0000: 1483: info : libvirt version: 4.5.0, package: 10.el7_6.3 (CentOS BuildSystem http://bugs.centos.org, 2018-11-28-20:51:39, x86-01.bsys.centos.org)
Jan 30 17:47:14 ovs-sandbox2 libvirtd[1483]: 2019-01-30 06:47:14.936+0000: 1483: info : hostname: ovs-sandbox2
Jan 30 17:47:14 ovs-sandbox2 libvirtd[1483]: 2019-01-30 06:47:14.936+0000: 1483: error : virModuleLoadFile:53 : internal error: Failed to load module ‘/usr/lib64/libvirt/storage-backend/libvirt_storage_backend_rbd.so’: /usr/lib64/libvir
Jan 30 17:47:14 ovs-sandbox2 systemd[1]: libvirtd.service: main process exited, code=exited, status=3/NOTIMPLEMENTED
Jan 30 17:47:14 ovs-sandbox2 systemd[1]: Failed to start Virtualization daemon.
Jan 30 17:47:14 ovs-sandbox2 systemd[1]: Unit libvirtd.service entered failed state.
Jan 30 17:47:14 ovs-sandbox2 systemd[1]: libvirtd.service failed.
Jan 30 17:47:15 ovs-sandbox2 systemd[1]: libvirtd.service holdoff time over, scheduling restart.

When:

The issue happened when I accidentally updated libvirtd from 3.9.0-14.el7_5.8.x86_64 to 4.5.0-10.el7_6.3.x86_64.

Fix:

[root@ovs-sandbox2 /]# yum update librados2

[root@ovs-sandbox2 virtualmachines]# yum history info 14
Loaded plugins: fastestmirror
Transaction ID : 14
Begin time : Wed Jan 30 18:10:53 2019
Begin rpmdb : 815:0a1f6c4d93558a35ec9c3ceb9114712149f71015
End time : 18:10:54 2019 (1 seconds)
End rpmdb : 817:358974b7c1ae161fe8d05d2d23573b31eaac6582
User : root
Return-Code : Success
Command Line : update librados2
Transaction performed with:
Installed rpm-4.11.3-32.el7.x86_64 @anaconda
Installed yum-3.4.3-158.el7.centos.noarch @anaconda
Installed yum-plugin-fastestmirror-1.1.31-45.el7.noarch @anaconda
Packages Altered:
Dep-Install boost-iostreams-1.53.0-27.el7.x86_64 @base
Dep-Install boost-random-1.53.0-27.el7.x86_64 @base
Updated librados2-1:0.94.5-2.el7.x86_64 @base
Update 1:10.2.5-4.el7.x86_64 @base
Updated librbd1-1:0.94.5-2.el7.x86_64 @base
Update 1:10.2.5-4.el7.x86_64 @base
history info

[root@ovs-sandbox2 virtualmachines]#
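
To confirm the fix, check the updated package versions and make sure libvirtd now starts cleanly (standard rpm/systemd commands):

rpm -q librados2 librbd1        # should now show the 10.2.5-4.el7 builds
systemctl restart libvirtd
systemctl status libvirtd       # should report active (running)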

Install Docker Offline on Centos7

Recently, I had to build an environment with a real web application running in it to test an LBaaS site-affinity solution. After a few minutes, I decided to install a Jenkins container on my testing CentOS 7 virtual machines.

Unfortunately, my CentOS virtual machines have no Internet access, so I spent a bit of time working out how to install Docker and run a container offline on CentOS 7. Hopefully this blog can help others who face the same challenge.

The docker version which I am going to install is: 
docker-ce-18.03.1.ce-1.el7.centos

On another CentOS 7 machine (minimal install) which has Internet access, I ran the command below to identify all the packages required for the Docker offline installation.
repoquery -R docker-ce-18.03.1.ce-1.el7.centos
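
Note: repoquery (and yumdownloader, used later in this post) comes from the yum-utils package, so it may need to be installed first on the Internet-connected machine:

yum install -y yum-utils
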
From the output, I found out that I need the following packages to complete Docker offline installation:

1:libsepol-2.5-8.1.el7
2:libselinux-2.5-12.el7
3:audit-libs-2.8.1-3.el7_5.1
4:libsemanage-2.5-11.el7
5:libselinux-utils-2.5-12.el7
6:policycoreutils-2.5-22.el7
7:selinux-policy-3.13.1-192.el7
8:libcgroup-0.41-15.el7
9:selinux-policy-targeted-3.13.1-19
10:libsemanage-python-2.5-11.el7
11:audit-libs-python-2.8.1-3.el7_5.1
12:setools-libs-3.3.8-2.el7
13:python-IPy-0.75-6.el7
14:pigz-2.3.3-1.el7.centos
15:checkpolicy-2.5-6.el7
16:policycoreutils-python-2.5-22.el7
17:container-selinux-2:2.68-1.el7
18:docker-ce-18.03.1.ce-1.el7.centos
19:audit-2.8.1-3.el7_5.1

Then I downloaded the Docker rpm package and all dependent packages with yumdownloader:
yumdownloader --resolve docker-ce-18.03.1.ce-1.el7.centos

I archived the above packages (tar cf docker-ce.offline.tar *.rpm) and uploaded the archive to my offline CentOS 7 virtual machines. Then I used the rpm CLI to install Docker:

[root@lbaas02 ~]# rpm -ivh --replacefiles --replacepkgs *.rpm

warning: audit-2.8.1-3.el7_5.1.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
warning: docker-ce-18.03.1.ce-1.el7.centos.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:libsepol-2.5-8.1.el7             ################################# [  5%]
   2:libselinux-2.5-12.el7            ################################# [ 11%]
   3:audit-libs-2.8.1-3.el7_5.1       ################################# [ 16%]
   4:libsemanage-2.5-11.el7           ################################# [ 21%]
   5:libselinux-utils-2.5-12.el7      ################################# [ 26%]
   6:policycoreutils-2.5-22.el7       ################################# [ 32%]
   7:selinux-policy-3.13.1-192.el7    ################################# [ 37%]
   8:libcgroup-0.41-15.el7            ################################# [ 42%]
   9:selinux-policy-targeted-3.13.1-19################################# [ 47%]
  10:libsemanage-python-2.5-11.el7    ################################# [ 53%]
  11:audit-libs-python-2.8.1-3.el7_5.1################################# [ 58%]
  12:setools-libs-3.3.8-2.el7         ################################# [ 63%]
  13:python-IPy-0.75-6.el7            ################################# [ 68%]
  14:pigz-2.3.3-1.el7.centos          ################################# [ 74%]
  15:checkpolicy-2.5-6.el7            ################################# [ 79%]
  16:policycoreutils-python-2.5-22.el7################################# [ 84%]
  17:container-selinux-2:2.68-1.el7   ################################# [ 89%]
  18:docker-ce-18.03.1.ce-1.el7.centos################################# [ 95%]
  19:audit-2.8.1-3.el7_5.1            ################################# [100%]

After the installation completed, I started and enabled the docker service:

[root@lbaas02 ~]# systemctl enable docker

Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@lbaas02 ~]# systemctl start docker
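
At this point, a quick check confirms that the Docker daemon is up (standard systemd/Docker commands):

systemctl status docker
docker version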

Now the next task is to import the Jenkins Docker image offline. First, I pulled the Jenkins Docker image on the Internet-connected machine:

docker pull jenkins/jenkins

Then I exported the Docker image as a file and uploaded it to my testing CentOS machine.

docker save -o jenkins.docker jenkins/jenkins
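
Any file-transfer method works for getting the saved image onto the offline machine, e.g. scp (a sketch, assuming SSH access to the offline host used later in this post):

scp jenkins.docker root@lbaas01:/root/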

On my testing CentOS machine, I loaded the image into Docker.

[root@lbaas01 ~]# docker load -i jenkins.docker

f715ed19c28b: Loading layer [==================================================>]  105.5MB/105.5MB
8bb25f9cdc41: Loading layer [==================================================>]  23.99MB/23.99MB
08a01612ffca: Loading layer [==================================================>]  7.994MB/7.994MB
1191b3f5862a: Loading layer [==================================================>]  146.4MB/146.4MB
097524d80f54: Loading layer [==================================================>]  2.332MB/2.332MB
685f72a7cd4f: Loading layer [==================================================>]  3.584kB/3.584kB
9c147c576d67: Loading layer [==================================================>]  1.536kB/1.536kB
e9805f9bdc9e: Loading layer [==================================================>]  356.3MB/356.3MB
8b47d19735d5: Loading layer [==================================================>]  362.5kB/362.5kB
e2a15a753d48: Loading layer [==================================================>]  338.9kB/338.9kB
287c6d658570: Loading layer [==================================================>]  3.584kB/3.584kB
5e9d64b80844: Loading layer [==================================================>]  9.728kB/9.728kB
be6e5f898997: Loading layer [==================================================>]  868.9kB/868.9kB
609adfa44126: Loading layer [==================================================>]  4.608kB/4.608kB
a26f92334a9c: Loading layer [==================================================>]  75.92MB/75.92MB
de90b90d0715: Loading layer [==================================================>]  4.608kB/4.608kB
13d8fca176c6: Loading layer [==================================================>]  9.216kB/9.216kB
be0781510eef: Loading layer [==================================================>]  4.608kB/4.608kB
d7e644ce9f14: Loading layer [==================================================>]  3.072kB/3.072kB
47dd83bc99e4: Loading layer [==================================================>]  7.168kB/7.168kB
96e3e5ce2959: Loading layer [==================================================>]  12.29kB/12.29kB
Loaded image: jenkins/jenkins:latest

[root@lbaas01 ~]# docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

jenkins/jenkins     latest              51158f0cf7bc        6 days ago          701MB

Now I am able to start my Jenkins docker on this offline Centos 7.

docker run -d -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins

Wait for 2-3 minutes. After the Jenkins container is fully running, I can log in to my Jenkins. :)
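
To get through the initial Jenkins setup wizard, the generated admin password can be read from the running container (standard behaviour of the jenkins/jenkins image; replace the container ID with the one shown by docker ps):

docker ps                                     # note the ID of the jenkins/jenkins container
docker logs <container-id>                    # the initial admin password is also printed in the startup log
docker exec <container-id> cat /var/jenkins_home/secrets/initialAdminPassword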

NSX-T Routing Path

In this blog, I will show you the routing path for different NSX-T Edge cluster deployment options.

  • The 1st is the simplest scenario: we have an Edge Cluster and there is no Tier-1 SR, so only the Tier-0 DR and the Tier-0 SR run in this NSX Edge Cluster. In the routing path diagram, the orange line shows the northbound path and the dark green line shows the southbound path.

Pattern1

  • In the 2nd scenario, the Tier-1 vRouter includes a Tier-1 DR and a Tier-1 SR. Both the Tier-1 SR and the Tier-0 SR run in the same NSX Edge Cluster. This design provides NAT and firewall functions at the Tier-1 level via the Tier-1 SR. In the routing path diagram, the orange line again shows the northbound path and the dark green line shows the southbound path.

Pattern2

 

  • In the 3rd scenario, we have 2 Edge clusters:
    • NSX-T T1 Edge Cluster: dedicated to Tier-1 SR/SRs, which run centralized services (e.g. NAT);
    • NSX-T T0 Edge Cluster: dedicated to Tier-0 SR/SRs, which provide uplink connectivity to the physical infrastructure.

This option gives better scalability and creates isolated service domains for Tier-0 and Tier-1. Similarly, I used the orange line to show the northbound path and the dark green line to show the southbound path in the diagram below:

 

Pattern3

SR-IOV Performance on Centos7 VM

This blog demonstrates the network performance (network throughput only) of an SR-IOV-enabled CentOS 7 virtual machine running on vSphere 6. Regarding vSphere 6.5 support for SR-IOV, please refer to the link below:

Single Root I/O Virtualization

My testing environment is on IBM Cloud:

Virtual machine specification:

  • 4 vCPU/16G Memory;
  • OS: Centos Linux release 7.4.1708 (core)
  • Reserve All Guest Memory (this is a mandatory requirement for SR-IOV, but I enabled it for all testing VMs)

ESXi hosts: we use 2 ESXi hosts (host10 and host11) for our testing. SR-IOV is enabled on a 10G NIC.

  • Supermicro PIO-618U-T4T+-ST031,
  • Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
  • 512GB Memory

Host10

ESXi host specification

Host11

ESXi host specification-host11

Testing Tool: IPerf3 version 3.1.3. Default setting is used.

Note: only 4 VMs are running on the 2 vSphere ESXi hosts in my testing environment, to remove the impact of resource contention. In addition, all 4 VMs are in the same layer 2 network to remove any potential bottleneck when performing the network throughput testing with IPerf3.
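
Each test below runs IPerf3 in server mode on the destination VM and in client mode on the source VM. For example, for Test1:

# on the destination VM (networktest1)
iperf3 -s
# on the source VM (networktest0), a 300-second TCP test
iperf3 -c 10.139.36.179 -t 300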

SR-IOV01

Virtual Machine1 (Standard VM)

  • Hostname: Networktest0
  • IP Address: 10.139.36.178
  • ESXi Host:  host10

[root@networktest0 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest0 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

NetworkTest0

Virtual Machine2 (Standard VM)

  • Hostname: Networktest1 
  • IP Address: 10.139.36.179
  • ESXi host: host11

[root@networktest1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)

[root@networktest1 ~]# ethtool -i ens160
driver: vmxnet3
version: 1.4.7.0-k-NAPI
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: yes

NetworkTest1

Virtual Machine3 (SR-IOV enabled)

  • Hostname: srIOV 
  • IP Address: 10.139.36.180
  • ESXi host: host10

[root@sriov ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)

(Note: this Virtual Function corresponds to the physical Ethernet controller, X540-AT2, of the vSphere ESXi host.)

[root@sriov ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no

srIOV

Virtual Machine4 (SR-IOV enabled)

  • Hostname: srIOV1
  • IP Address: 10.139.36.181
  • ESXi host: host11

[root@sriov1 ~]# lspci
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)

03:00.0 Ethernet controller: Intel Corporation X540 Ethernet Controller Virtual Function (rev 01)
[root@sriov1 ~]# ethtool -i ens160
driver: ixgbevf
version: 3.2.2-k-rh7.4
firmware-version:
expansion-rom-version:
bus-info: 0000:03:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: no
supports-register-dump: yes
supports-priv-flags: no
srIOV1

Test1: from Virtual Machine1 to Virtual Machine2:

[root@networktest0 ~]# iperf3 -c 10.139.36.179 -t 300

[ 4] 290.00-291.00 sec 809 MBytes 6.79 Gbits/sec 29 725 KBytes
[ 4] 291.00-292.00 sec 802 MBytes 6.72 Gbits/sec 32 680 KBytes
[ 4] 292.00-293.00 sec 631 MBytes 5.30 Gbits/sec 52 711 KBytes
[ 4] 293.00-294.00 sec 773 MBytes 6.48 Gbits/sec 9 902 KBytes
[ 4] 294.00-295.00 sec 800 MBytes 6.71 Gbits/sec 27 856 KBytes
[ 4] 295.00-296.00 sec 801 MBytes 6.72 Gbits/sec 36 790 KBytes
[ 4] 296.00-297.00 sec 774 MBytes 6.49 Gbits/sec 52 694 KBytes
[ 4] 297.00-298.00 sec 815 MBytes 6.83 Gbits/sec 30 656 KBytes
[ 4] 298.00-299.00 sec 649 MBytes 5.45 Gbits/sec 35 689 KBytes
[ 4] 299.00-300.00 sec 644 MBytes 5.40 Gbits/sec 57 734 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec 10797 sender
[ 4] 0.00-300.00 sec 206 GBytes 5.89 Gbits/sec receiver

iperf Done.
[root@networktest0 ~]#

Test2: from Virtual Machine2 to Virtual Machine1

[root@networktest1 ~]# iperf3 -c 10.139.36.178 -t 300
Connecting to host 10.139.36.178, port 5201
[ 4] local 10.139.36.179 port 54844 connected to 10.139.36.178 port 5201

[ 4] 290.00-291.00 sec 794 MBytes 6.66 Gbits/sec 6 908 KBytes
[ 4] 291.00-292.00 sec 811 MBytes 6.80 Gbits/sec 8 871 KBytes
[ 4] 292.00-293.00 sec 810 MBytes 6.80 Gbits/sec 10 853 KBytes
[ 4] 293.00-294.00 sec 810 MBytes 6.79 Gbits/sec 12 819 KBytes
[ 4] 294.00-295.00 sec 811 MBytes 6.80 Gbits/sec 19 783 KBytes
[ 4] 295.00-296.00 sec 810 MBytes 6.79 Gbits/sec 14 747 KBytes
[ 4] 296.00-297.00 sec 776 MBytes 6.51 Gbits/sec 9 639 KBytes
[ 4] 297.00-298.00 sec 778 MBytes 6.52 Gbits/sec 7 874 KBytes
[ 4] 298.00-299.00 sec 809 MBytes 6.78 Gbits/sec 13 851 KBytes
[ 4] 299.00-300.00 sec 810 MBytes 6.80 Gbits/sec 11 810 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec 4269 sender
[ 4] 0.00-300.00 sec 237 GBytes 6.79 Gbits/sec receiver

iperf Done.

Test3:  from Virtual Machine3 to Virtual Machine4

[root@sriov ~]# iperf3 -c 10.139.36.181 -t 300 -V
iperf 3.1.3
Linux sriov 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:15:18 GMT
Connecting to host 10.139.36.181, port 5201
Cookie: sriov.1511072118.047298.4aefd6730c42
TCP MSS: 1448 (default)
[ 4] local 10.139.36.180 port 56330 connected to 10.139.36.181 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.09 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.10 MBytes
[ 4] 2.00-3.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.13 MBytes

[ 4] 290.00-291.00 sec 1.06 GBytes 9.14 Gbits/sec 15 1.12 MBytes
[ 4] 291.00-292.00 sec 1.06 GBytes 9.09 Gbits/sec 13 928 KBytes
[ 4] 292.00-293.00 sec 1.05 GBytes 9.00 Gbits/sec 26 1003 KBytes
[ 4] 293.00-294.00 sec 1.07 GBytes 9.22 Gbits/sec 115 1.06 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.23 MBytes
[ 4] 295.00-296.00 sec 1.06 GBytes 9.10 Gbits/sec 79 942 KBytes
[ 4] 296.00-297.00 sec 1.05 GBytes 9.03 Gbits/sec 29 1.02 MBytes
[ 4] 297.00-298.00 sec 1.08 GBytes 9.25 Gbits/sec 6 1005 KBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1005 KBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1005 KBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec 12656 sender
[ 4] 0.00-300.00 sec 327 GBytes 9.37 Gbits/sec receiver
CPU Utilization: local/sender 13.0% (0.2%u/12.9%s), remote/receiver 41.5% (1.1%u/40.4%s)

iperf Done.

Test4:  from Virtual Machine4 to Virtual Machine3

[root@sriov1 ~]# iperf3 -c 10.139.36.180 -t 300 -V
iperf 3.1.3
Linux sriov1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64
Time: Sun, 19 Nov 2017 06:30:09 GMT
Connecting to host 10.139.36.180, port 5201
Cookie: sriov1.1511073009.840403.56876d65774
TCP MSS: 1448 (default)
[ 4] local 10.139.36.181 port 46602 connected to 10.139.36.180 port 5201
Starting Test: protocol: TCP, 1 streams, 131072 byte blocks, omitting 0 seconds, 300 second test
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 1.10 GBytes 9.43 Gbits/sec 0 1.38 MBytes
[ 4] 1.00-2.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.40 MBytes

[ 4] 289.00-290.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.31 MBytes
[ 4] 290.00-291.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.31 MBytes
[ 4] 291.00-292.00 sec 1.09 GBytes 9.41 Gbits/sec 329 945 KBytes
[ 4] 292.00-293.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.09 MBytes
[ 4] 293.00-294.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 294.00-295.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.15 MBytes
[ 4] 295.00-296.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.27 MBytes
[ 4] 296.00-297.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 297.00-298.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
[ 4] 298.00-299.00 sec 1.10 GBytes 9.42 Gbits/sec 0 1.38 MBytes
[ 4] 299.00-300.00 sec 1.10 GBytes 9.41 Gbits/sec 0 1.38 MBytes
– – – – – – – – – – – – – – – – – – – – – – – – –
Test Complete. Summary Results:
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec 14395 sender
[ 4] 0.00-300.00 sec 329 GBytes 9.41 Gbits/sec receiver
CPU Utilization: local/sender 13.9% (0.2%u/13.7%s), remote/receiver 39.6% (1.0%u/38.6%s)

iperf Done.
[root@sriov1 ~]#

We can see that the SR-IOV-enabled CentOS 7 VMs achieve ~9.4 Gbit/s throughput for both inbound and outbound traffic, which is very close to wire-speed forwarding for a 10G port, compared with roughly 5.9-6.8 Gbit/s for the standard VMXNET3 VMs.