The Federated Identity feature of VMware Cloud on AWS can be integrated with any third-party IdP that supports SAML 2.0. In this integration model, the customer-dedicated vIDM tenant acts as the SAML Service Provider. If the third-party IdP is set up to perform multi-factor authentication (MFA), the customer will be prompted for MFA when accessing VMware Cloud services. In this blog, I will demonstrate the integration with Okta, one of the most popular IdPs.
The Okta IdP settings in this blog are meant to demonstrate the vIDM integration; they may not be best practice for your environment or meet your business and security requirements.
To make the same users and user groups available in Okta as in the configured vIDM tenant, we need to integrate Okta with the corporate Active Directory (AD). The integration is done via Okta’s lightweight agent.
Click the “Directory Integration” in Okta UI.
Click “Add Active Directory”.
The Active Directory integration setup wizard starts. Click “Set Up Active Directory”.
Download the agent from the window below.
This agent can be installed on Windows Server 2008 R2 or later. The installation of the Okta agent is quite straightforward. Once the agent installation completes, you need to set up the AD integration. In the basic settings window, select the Organizational Units (OUs) that you’d like to sync users or groups from, and make sure that “Okta username format” is set to use User Principal Name (UPN).
In the “Build User Profile” window, select any custom schema which needs to be included in the Okta user profile and click Next.
Click Done to finish the integration setup.
The Okta directory setting window will pop up.
Enable the Just-In-Time provisioning and set the Schedule Import to perform user import every hour. Review and save the setting.
Now go to the Import tab and click “Import Now” to import the users from corporate AD.
As this is the first import of users from the corporate AD, select “Full Import” and click Import.
When the scan is finished, Okta will report the result. Click OK.
Select the user(s) to be imported and confirm the user assignment. Note: the user firstname.lastname@example.org imported here will be used for the final integration testing.
Now it is time to set up the SAML IdP in Okta.
Go to Okta Classic UI application tab and click “Add Application”
Click “Create New App”;
Select Web as the Platform and “SAML 2.0” for Sign on method and click Create;
Type in App name, “csp-vidm” is used as an example as the app name and click Next;
Two configuration items in the “Create SAML Integration” window that pops up are mandatory. This information can be copied from the Identity Provider settings within the vIDM tenant.
Go to the vIDM tenant administrator console, click “Add Identity Provider” and select “Create Third Party IDP” under the “Identity & Access Management” tab.
Type in the “Identity Provider Name”, here the example name is “Okta01”
Go to the bottom of this IdP creation window and click “Service Provider (SP) Metadata”.
A new window will pop up as the below:
The entity ID and HTTP-POST location are the information required for the Okta IdP SAML settings. Copy the entity ID URL into “Audience URI (SP Entity ID)” and the HTTP-POST location into “Single sign on URL” in the Okta “Create SAML Integration” window.
Leave all other configuration items as the default and click Next;
In the Feedback window, indicate that the newly created app is an internal app and click Finish.
A “Sign On settings” window will pop up as below, click “Identity Provider metadata” link.
The Identity Provider metadata is shown in XML format. Select and copy the entire content of this XML file.
Paste the Okta IdP metadata into SAML Metadata and click “Process IdP Metadata” in the vIDM 3rd party identity provider creation window.
The “SAML AuthN Request Binding” and “Name ID format mapping from SAML Response” will be updated as below:
Select the “lab.local” directory so its users can authenticate with this new third-party IdP, and leave the Network as the default “ALL RANGES”. Then create a new authentication method called “Okta Auth” with the SAML context “urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport”. Please note that the name of this newly created authentication method has to be different from any existing authentication method.
Then leave all other configuration items’ boxes unchecked and click Add.
The 3rd party IdP has been successfully added now.
The last step of the vIDM setup for this Okta integration is updating the default access policy to use the newly defined authentication method “Okta Auth”. Please follow the steps in my previous blog to perform the required update. The updated default access policy should look similar to the below.
Before testing the setup, go to the Okta UI to assign user(s) to the newly defined SAML 2.0 web application “csp-vidm”. Click Assignment.
Click Assign and select “Assign to People”.
In the “Assign csp-vidm to People” window, assign user John Smith (email@example.com), which allows this user to use the SAML 2.0 application.
After the assignment is completed, user John Smith is under the assignment of this SAML 2.0 application “csp-vidm”.
Instead of assigning individual users, AD group/groups can be assigned to the SAML application as well.
Finally, everything is ready to test the integration.
Open a new Incognito window in Chrome, type in the vIDM tenant URL, and press Enter.
In the login window, type the username firstname.lastname@example.org and click Next.
The authentication session is redirected to Okta.
Type in Username & Password and click “Sign In”.
Then user John Smith (email@example.com) successfully logs in to the vIDM tenant.
This is the end of this demo. Thank you very much for reading!
VMware Cloud on AWS Federated Identity management supports different kinds of authentication methods. This blog will demo the basic method: authentication with the customer corporate Active Directory (AD).
When VMC on AWS customers use AD for authentication, outbound-only connection mode is highly recommended. This mode does not require any inbound firewall port to be opened: only outbound connectivity from vIDM Connector to VMware SaaS vIDM tenant on port 443 is required. All user and group sync from your enterprise directory and user authentication are handled by the vIDM connector.
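The outbound-only requirement can be sanity-checked from the connector host before installing anything. Below is a minimal sketch using bash's /dev/tcp; the tenant hostname is a placeholder assumption, so substitute your own vIDM tenant URL.

```shell
# Check outbound TCP connectivity on port 443 (hedged sketch; hostname is a placeholder)
check_outbound() {
  local host=$1 port=${2:-443}
  if timeout 3 bash -c "</dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port NOT reachable - check egress firewall rules"
  fi
}
# Replace with your actual vIDM tenant hostname
check_outbound mytenant.vmwareidentity.com 443
```

If the check fails, fix the egress rules before running the connector installer, since activation will not complete without this path.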
To enable outbound-only mode, update the settings of the Built-in Identity Provider. In the user section of the Built-in Identity Provider settings, select the newly created directory “lab.local” and add the newly created connector “vidmcon01.lab.local”.
After the connector is added successfully, select “Password (cloud deployment)” under “Connector Authentication Methods” and click Save.
Now it is time to update the access policy to use corporate Active Directory to authenticate VMC users.
Go to Identity & Access Management.
Click “Edit DEFAULT POLICY” and the “Edit Policy” window pops up. Click Next.
Click “ADD POLICY RULE”.
Then the “Add Policy Rule” window will pop up. At this stage, leave the first two configuration items at their defaults: “ALL RANGES” and “ALL Device Types”. In the “and user belong to group(s)” configuration item, search for and add all three synced groups (sddc-admins, sddc-operators and sddc-readonly) to allow the users in these three groups to log in.
Add “Password (cloud deployment)” as the authentication method.
Use “Password (Local Directory)” as the fallback authentication method and click Save.
There are 3 rules defined in the default access policy. Drag the newly defined rule to the top of the rules table, which will make sure that the new rule is evaluated first when a user tries to log in.
Now the rules table shows as below. Click Next.
Click Save to keep the changes of the default access policy.
You are now ready to test your authentication setup. Open a new Incognito window in Chrome and connect to the vIDM URL. Type in the username (firstname.lastname@example.org) and click Next.
Type in the Active Directory password for user email@example.com and click “Sign in”.
Then you can see that firstname.lastname@example.org has successfully logged in to vIDM!
As an enterprise using VMware Cloud Services, you can set up federation with your corporate domain. Federating your corporate domain allows you to use your organization’s single sign-on and identity source to sign in to VMware Cloud Services. You can also set up multi-factor authentication as part of federation access policy settings.
Federated identity management allows you to control authentication to your organization and its services by assigning organization and service roles to your enterprise groups.
Set up a federated identity with the VMware Identity Manager service and the VMware Identity Manager connector, which VMware provides at no additional charge. The following are the required high-level steps.
Download the VMware Identity Manager (vIDM) connector and configure it for user attributes and group sync from your corporate identity store. Note that only the VMware Identity Manager Connector for Windows is supported.
Configure your corporate identity provider instance using the VMware Identity Manager service.
Register your corporate domain.
This series of blogs will demonstrate how to complete customer end setup of the Federated Identity Management for VMC on AWS.
Install and set up the vIDM connector, which is required for all 4 use cases;
As the first blog of this series, I will show you how to install the vIDM connector (version 19.03) on a Windows 2012 R2 server and how to achieve HA for the vIDM connector.
A vIDM SaaS tenant. If you don’t have one, please contact your VMware customer success representative.
A Windows Server (Windows 2008 R2, Windows 2012, Windows 2012 R2 or Windows 2016).
Firewall rules opened for communication from the Windows Server to the domain controllers and to the vIDM tenant on port 443.
The vIDM connector for Windows installation package. The latest version of the vIDM connector is shown below.
Log in to the Windows 2012 R2 server and start the installation:
Click Yes in the “User Account Control” window.
Note: the installation package will install the latest major JRE version on the connector Windows server if a JRE has not been installed yet.
The installation process is loading the Installation Wizard.
Click Next in the Installation Wizard window.
Accept the License Agreement as below:
Accept the default installation destination folder and click Next;
Click Next and leave the “Are you migrating your Connector” box unchecked.
Accept the pop-up hostname and default port for this connector.
For the purposes of VMware Cloud federated identity management, don’t run the Connector service as a domain user account. Leave the “Would you like to run the Connector service as a domain user account?” option box unchecked and click Next.
Click Yes in the pop-up window to confirm from the previous step.
Click Install to begin the installation.
After a few minutes, the installation completes successfully.
Click Finish. A new window will pop up, showing the Connector appliance management URL as below.
In the VMware Identity Manager Appliance Setup wizard, click Continue.
Note: Don’t use Internet Explorer when running the wizard. There is a known bug with IE.
Set passwords for appliance application admin account and click Continue.
Now go to the vIDM tenant, in the tab of Identity & Access Management, click Add Connector.
Type in the Connector ID Name and click “Generate Activation Code”.
Copy the generated activation code and go back to the Connector setup wizard.
Copy the activation code into the Activate Connector Window and click Continue.
Wait for a few minutes then the connector will be activated.
Note: sometimes a 404 error will pop up like the one below. In my experience, it is a false alert on Windows 2012 R2; don’t worry about it.
In VMware Identity Manager tenant, the newly installed connector will show up as below:
Now it is time to set up our connector for user sync.
Step 1: Add Directory
Click Add Directory and select “Add Active Directory over LDAP/IWA”.
Type in the “Directory Name”, select “Active Directory over LDAP”, and use this directory for user sync and authentication. For the “Directory Search Attribute”, I prefer UserPrincipalName over sAMAccountName, as the UserPrincipalName option works for all Federated Identity management use cases, e.g. integration with Active Directory Federation Services or a third-party IdP.
Then provide the required Bind User Details and click “Save & Next”
After a few minutes, the domain will pop up. Click Next.
In the Map User Attributes window, accept the setup and click Next
Type in the group DNs and click “Find Groups”.
Click the “0 of 23” under the column “Groups to sync”.
Select 3 user groups which need to be synced and click Save.
Accept the default setting in the “Select the Users you would like to sync” window and click Next.
In the Review window, click “Sync Directory”
Now it is time to verify the synced users and groups in the vIDM tenant. Go to the “Users & Groups” tab. You can see 10 users and 3 groups synced from the lab.local directory.
You can find the sync log within the configured directory.
Now the basic set up of vIDM connector has been completed.
A single vIDM connector is a single point of failure in an enterprise environment. To achieve high availability, simply install one or more extra connectors; installing an extra connector is exactly the same as installing the first one. Here, the second connector is installed on another Windows 2012 R2 server, vidmcon02.lab.local. After the installation is completed, the activation procedure is the same as well.
Now 2 connectors will show up in the vIDM tenant.
Go to the Built-in identity provider and add the second connector.
Type in the Bind User Password and click “Add Connector”
Then the second connector is added successfully.
Now there are 2 connectors associated with the Built-in Identity Provider.
Please note that connector HA covers only user authentication in version 19.03. Directory and user sync can be enabled on only one connector at a time. In the event of a connector instance failure, authentication is handled automatically by another connector instance. However, for directory sync, you must modify the directory settings in the VMware Identity Manager service to use another connector instance, as shown below.
Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. K8s uses network plugins to provide required networking functions such as routing, switching, firewalling and load balancing. VMware NSX-T provides a network plugin for K8s called NCP (NSX Container Plugin). If you want to know more about VMware NSX-T, please go to docs.vmware.com.
In this blog, I will show you how to integrate VMware NSX-T with Kubernetes.
Here, we will build a three-node, single-master K8s cluster. All three nodes are RHEL 7.5 virtual machines.
Mgmt IP: 10.1.73.233
Mgmt IP: 10.1.73.234
Mgmt IP: 10.1.73.235
On each node, there are two vNICs attached: the first, ens192, is for management; the second, ens224, is for K8s transport and is connected to an overlay logical switch.
NSX-T version: 188.8.131.52.0.10085405;
NSX-T NCP version: 184.108.40.20693410
Docker version: 18.03.1-ce;
K8s version: 1.11.4
1. Prepare K8s Cluster Setup
1.1 Get Offline Packages and Docker Images
As there is no Internet access in my environment, I have to prepare my K8s cluster offline. To do that, I need to get the following packages:
Docker offline installation packages
Kubeadm offline installation packages which will be used to set up the K8s cluster;
Below are the required Docker images for K8s cluster.
You may notice that the list above includes two identical pause images with different repository names. There is a story behind this. Initially, I only loaded the first image, “k8s.gcr.io/pause-amd64”. The setup passed the “kubeadm init” pre-flight checks but failed at the actual cluster setup stage. When I checked the logs, I found that the cluster setup process kept requesting the second image. I guess it is a bug in kubeadm v1.11.0, which I am using.
Here is an example of using the “docker pull” CLI to download a Docker image, in case you are not familiar with it.
docker pull k8s.gcr.io/kube-proxy-amd64:v1.11.4
Once you have all Docker images, you need to export these Docker images as offline images via “docker save”.
docker save k8s.gcr.io/pause-amd64:3.1 -o /pause-amd64:3.1.docker
Now upload all the installation packages and offline images to all three K8s nodes, including the master node.
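Saving and reloading each image one by one is tedious, so here is a sketch that generates the matching "docker save" and "docker load" commands for every required image. The image list is abbreviated, and the echo wrappers let the sketch run on a host without Docker; drop them (or pipe the output to sh) to execute for real.

```shell
# Emit paired save/load commands for each required image (abbreviated list)
images="k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/kube-proxy-amd64:v1.11.4"
for img in $images; do
  # e.g. k8s.gcr.io/pause-amd64:3.1 -> /tmp/pause-amd64_3.1.docker
  f="/tmp/$(basename "$img" | tr ':' '_').docker"
  echo "docker save $img -o $f"   # run on the Internet-connected host
  echo "docker load -i $f"        # run on each offline K8s node after copying the file
done | tee /tmp/offline_cmds.txt
```

The save commands run on the Internet-connected host; the load commands run on each offline node after the .docker files are copied over.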
1.2 Disable SELinux and Firewalld
# Disable SELinux for the current boot
setenforce 0
# Change SELINUX to permissive in /etc/selinux/config (persists across reboots)
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# Stop and disable firewalld
systemctl disable firewalld && systemctl stop firewalld
1.3 Config DNS Resolution
# Update the /etc/hosts file as below on all three K8s nodes
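The post's hosts file content is not reproduced, so here is a hedged reconstruction: the hostnames master/node1/node2 are assumptions, while the IPs are the management addresses listed earlier. The sketch writes to /tmp for safety; append the same lines to /etc/hosts on every node.

```shell
# Hedged sketch of the name-resolution entries for the three K8s nodes
# (hostnames are assumptions; IPs are the management addresses from the post)
cat > /tmp/hosts.demo <<'EOF'
10.1.73.233 master
10.1.73.234 node1
10.1.73.235 node2
EOF
cat /tmp/hosts.demo
```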
1.4 Install Docker and Kubeadm
To install Docker and kubeadm, first place the required packages for each into its own directory; for example, all the packages required by kubeadm go into a directory called kubeadm. Then use rpm to install kubeadm as below:
Please note that the --replacefiles option is required due to a known bug with NSX-T 2.3. If you don’t include it, you will see an error like the one below:
[root@master rhel_x86_64]# rpm -i nsx-cni-220.127.116.1193410-1.x86_64.rpm
file /opt/cni/bin/loopback from install of nsx-cni-18.104.22.16893410-1.x86_64 conflicts with file from package kubernetes-cni-0.6.0-0.x86_64
1.6.3 Install and Config OVS
# Go to OpenvSwitch directory
rpm -ivh openvswitch-22.214.171.12468033.rhel75-1.x86_64.rpm
systemctl start openvswitch.service && systemctl enable openvswitch.service
# Create the OVS integration bridge
ovs-vsctl add-br br-int
# Attach the second vNIC (overlay transport) to the bridge as OpenFlow port 1
ovs-vsctl add-port br-int ens224 -- set Interface ens224 ofport_request=1
# Bring both interfaces up
ip link set br-int up
ip link set ens224 up
2. Setup K8s Cluster
Now you are ready to set up your K8s cluster. I will use a kubeadm config file to define my K8s cluster when initiating the cluster setup. Below is the content of my kubeadm config file.
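The original file is not reproduced here, so below is a minimal reconstruction, assuming kubeadm's v1alpha2 config API (the one shipped with kubeadm 1.11); only the Kubernetes version and the API server IP come from the post.

```shell
# Hedged reconstruction of a minimal kubeadm config for v1.11
# (field names follow kubeadm's v1alpha2 API; written to /tmp for illustration)
cat > /tmp/kubeadm.yml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.4
api:
  advertiseAddress: 10.1.73.233
EOF
cat /tmp/kubeadm.yml
```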
From the above, you can see that Kubernetes v1.11.4 will be used and that the API server IP is 10.1.73.233, the master node IP. Run the following CLI on the K8s master node to create the K8s cluster.
kubeadm init --config kubeadm.yml
After the K8s cluster is set up, you can join the remaining two worker nodes into the cluster via the CLI below:
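The join CLI is not shown in the post, so here is a generic "kubeadm join" sketch: the token and CA certificate hash are placeholders that "kubeadm init" prints on the master, and only the API server address comes from the post.

```shell
# Generic kubeadm join sketch (token and hash are placeholders, not real values)
APISERVER="10.1.73.233:6443"
TOKEN="<token-from-kubeadm-init>"
CA_HASH="sha256:<hash-from-kubeadm-init>"
echo "kubeadm join $APISERVER --token $TOKEN --discovery-token-ca-cert-hash $CA_HASH" \
  | tee /tmp/join_cmd.txt
```

Run the printed command on each worker node once the placeholders are replaced with the real values from your "kubeadm init" output.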
Terraform is a widely adopted Infrastructure as Code tool that allows you to define your infrastructure using a simple, declarative language, and to deploy and manage infrastructure across public cloud providers including AWS, Azure, Google Cloud and IBM Cloud, as well as other infrastructure providers like VMware NSX-T and F5 BIG-IP.
In this blog, I will show you how to leverage the Terraform NSX-T provider to define an NSX-T tenant environment in minutes.
To build the new NSX-T environment, I am going to:
Create a new Tier1 router named tier1_router;
Create three logical switches under the newly created Tier1 router for the web/app/db security zones;
Connect the newly created Tier1 router to the existing Tier0 router;
Create a new network service group including SSH and HTTPS;
Create a new firewall section and add a firewall rule to allow outbound SSH/HTTPS traffic from any workload in the web logical switch to any workload in the app logical switch;
First, I define a Terraform module as below. Note: a Terraform module is normally used to define reusable components. For example, the module defined here can be re-used to build both non-prod and prod environments when given different inputs.
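The module body itself is not reproduced in this text, so below is a hedged sketch of its shape, written to a file for illustration. The resource names follow the (legacy) Terraform NSX-T provider, but the display names, variables and the exact set of attributes are my assumptions, not the post's actual module.

```shell
# Hedged sketch of the Terraform module's shape (assumed names and variables)
cat > /tmp/main.tf <<'EOF'
resource "nsxt_logical_tier1_router" "tier1_router" {
  display_name    = "tier1_router"
  edge_cluster_id = var.edge_cluster_id
}

resource "nsxt_logical_switch" "web" {
  display_name      = "web-ls"
  transport_zone_id = var.overlay_tz_id
  admin_state       = "UP"
}
EOF
cat /tmp/main.tf
```

The app and db switches, the Tier0 link, the service group and the firewall section would follow the same pattern with additional resources.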
Recently, I had to build an environment with a real web application running in order to test an LBaaS site-affinity solution. After a few minutes, I decided to install a Jenkins container on my testing CentOS 7 virtual machines.
Unfortunately, my CentOS virtual machines have no Internet access, so I spent a bit of time working out how to install Docker and run a container offline on CentOS 7. This blog may help others who face the same challenge.
The docker version which I am going to install is: docker-ce-18.03.1.ce-1.el7.centos
On another CentOS 7 machine (minimal install) with Internet access, I ran the CLI below to identify all the packages required for an offline Docker installation:
repoquery -R docker-ce-18.03.1.ce-1.el7.centos
From the output, I found that I need the following packages to complete the offline Docker installation:
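Rather than resolving the dependency list by hand, the packages can be gathered in one step. Here is a sketch assuming yumdownloader (from yum-utils) is available on the Internet-connected host; the echo keeps the sketch runnable without yum, so remove it to actually download.

```shell
# Sketch: collect docker-ce and all of its dependencies into one directory
pkg="docker-ce-18.03.1.ce-1.el7.centos"
dest="./docker-offline"
mkdir -p "$dest"
echo "yumdownloader --resolve --destdir $dest $pkg" | tee /tmp/dl_cmds.txt
# Afterwards, copy $dest to the offline host and run: rpm -ivh $dest/*.rpm
```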
In this blog, I will show you the routing path for different NSX-T Edge cluster deployment options.
The first is the simplest scenario: we have an Edge Cluster and no Tier-1 SR, so only a Tier-0 DR and Tier-0 SR run in the NSX Edge Cluster. In the routing path diagram, the orange line shows the northbound path and the dark green line shows the southbound path.
In the second scenario, the Tier-1 vRouter includes a Tier-1 DR and a Tier-1 SR. Both the Tier-1 SR and the Tier-0 SR run in the same NSX Edge Cluster. This design provides NAT and firewall functions at the Tier-1 level via the Tier-1 SR. In the routing path diagram, the orange line shows the northbound path and the dark green line shows the southbound path.
In the third scenario, we have two Edge clusters:
NSX-T T1 Edge Cluster: dedicated for Tier-1 SR/SRs, which is dedicated for running centralized service (e.g. NAT);
NSX-T T0 Edge Cluster: dedicated for Tier-0 SR/SRs, which provides uplink connectivity to the physical infrastructure;
This option gives better scalability and creates isolated service domains for Tier-0 and Tier-1. As before, the orange line shows the northbound path and the dark green line shows the southbound path in the diagram below:
With NSX L2VPN, you can extend your VLANs/VXLANs across multiple data centers. Even in a non-NSX environment, you can achieve this by using a standalone edge. In this blog, I will show you how to set up an NSX L2VPN between a standalone edge and an NSX edge.
As shown above, we have one NSX edge acting as the L2VPN server and one standalone edge residing in the remote, non-NSX data center. Our target is to stretch two VXLAN-backed networks (172.16.136.0/24 and 172.16.137.0/24) to two VLAN-backed networks (VLAN 100 and VLAN 200) in the remote DC via L2VPN. In addition, we will use four virtual machines to test the L2VPN communication.
2 virtual machines in NSX environment:
test1000: 172.16.136.100 gw 172.16.136.1, connected to VXLAN 10032;
test1002: 172.16.137.100 gw 172.16.137.1, connected to VXLAN 10033;
2 virtual machines in non-NSX environment:
test1001: 172.16.136.101 gw 172.16.136.1, connected to a dVS port-group with access VLAN 100;
test1003: 172.16.137.101 gw 172.16.137.1, connected to a dVS port-group with access VLAN 200;
Step 1: Configure NSX Edge as L2VPN Server
Create two sub-interfaces (sub100: 172.16.136.1/24 and sub200: 172.16.137.1/24) for the two VXLANs under the trunk port.
There are two VXLAN sub-interfaces; please note that the first sub-interface is mapped to vNic10 and the second is mapped to vNic11.
Sub-interface sub100: tunnel Id 100/172.16.136.1 (VXLAN 10032)
Sub-interface sub200 tunnel Id 200/172.16.137.1 (VXLAN 10033)
L2VPN Server setting as below:
Listener IP: 172.16.133.1
Listener Port: 443
Encryption Algorithm: AES128-GCM-SHA256
User Id/Password: admin/credential
Stretched Interfaces: sub100 and sub200
Step 2: Deploy and Set Up the L2VPN Virtual Appliance
Use the standard process of deploying a virtual appliance:
Start the “Deploy OVF Template” wizard
Select the standalone edge OVF file downloaded from vmware.com
Accept the extra configuration options
Select name and folder
Setup Networks: here we use one dVS port-group for the standalone edge trunk interface. More details on the settings of this port-group are provided later.
Customize template. We will configure L2VPN client here as well.
The configuration includes multiple parts:
Part 1: standalone edge admin credentials:
Part 2: standalone edge network settings:
Part 3: L2VPN settings, which must exactly match the L2VPN server configuration from Step 1, including the cipher suite, the L2VPN server address/service port, and the L2VPN username/password for authentication
Part 4: L2VPN sub-interfaces
Part 5: other settings, e.g. a proxy if your standalone edge needs one to establish connectivity to the L2VPN server.
Accept all settings and submit the standalone edge deployment.
Once the standalone edge is deployed and powered on, you should see that the L2VPN tunnel is up, either on the NSX edge L2VPN server or on the standalone edge, via the CLI (show service l2vpn).
On NSX edge L2VPN server:
On standalone edge:
Step 3: Verification of communication
I simply use ping to verify the communication. My initial test failed: although the L2VPN tunnel is up, you still need to configure the port group DPortGroup_ClientTrunk to support L2VPN. You don’t need to do the same on the NSX edge side, as it is configured automatically when you set up L2VPN there.
From NSX-v version 6.4.0, the NSX API supports JSON-formatted responses, not just XML as before. From my own experience, I prefer JSON over XML, as JSON data is easier to encode and decode. So I took a weekend to rewrite my old Python code. The code now retrieves the NSX-v DFW rules from the NSX Manager in JSON format and writes them into a CSV file, so that you can view and search your DFW rules easily.
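To illustrate the request behind the script, here is a hedged sketch of the equivalent curl call: the manager hostname and credentials are placeholders, while the Accept header is what asks NSX-v 6.4+ to return the DFW configuration as JSON instead of the default XML.

```shell
# Hedged sketch: fetch the DFW configuration as JSON (hostname/credentials are placeholders)
NSX_MGR="nsx-manager.lab.local"
URL="https://$NSX_MGR/api/4.0/firewall/globalroot-0/config"
echo "curl -k -u admin:***** -H 'Accept: application/json' $URL" | tee /tmp/dfw_call.txt
```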
Below is a sample of CSV file which is generated by my Python code.