Automate Avi LB Service with Ansible

The Avi Networks load balancing platform offers excellent automation capabilities, allowing us to automate load balancing services with popular Infrastructure as Code tools such as Ansible and Terraform. In this blog, I will demonstrate Day 1 automation using Ansible (version 2.8.5).

[root@code1 ~]# ansible --version
ansible 2.8.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

The link below lists all available Ansible modules for Avi LB automation:

https://docs.ansible.com/ansible/latest/modules/list_of_network_modules.html#avi

Avi has developed an Ansible role called avinetworks.avisdk that packages all the Avi Ansible modules, which eases our lives further. The role depends on the Avi Python SDK, so install the SDK with pip and then the role with ansible-galaxy:

pip install avisdk
ansible-galaxy install avinetworks.avisdk

In this blog, we will automate the local load balancing configuration that we built manually in my other blog: Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2.

In summary, we are going to:

  • Create an HTTP Health Monitor (sddc01-vs02-hm01).
  • Create an Application Persistence profile (sddc01-vs02-persistence01) based on HTTP cookie.
  • Create a local load balancing pool (sddc01-vs02-pool01) with two pool members. Member health is checked with the newly created health monitor, and session persistence for the pool uses the cookie persistence profile defined in the previous step.
  • Create an HTTP application profile (sddc01-vs02-profile-http01) which enables compression and HTTP-to-HTTPS redirect.
  • Create an SSL profile (sddc01-vs02-profile-ssl01) which allows only stronger ciphers and TLS 1.1/1.2.
  • Create a virtual service which leverages the newly created HTTP application profile and SSL profile to distribute traffic to the load balancing pool sddc01-vs02-pool01.

The completed Ansible playbook is shown below:

---
- hosts: localhost
  connection: local
  vars:
    controller: 192.168.80.3
    username: admin
    password: Password
    api_version: 18.2.5
    vs_name: sddc01-vs02
    vs_vip: 192.168.96.110
    vs_serviceport01: 443
    vs_serviceport02: 80
    pool_name: sddc01-vs02-pool01
    pool_member01: 192.168.96.25
    pool_member01_hostname: centos01
    pool_member02: 192.168.96.26
    pool_member02_hostname: centos02
    httpprofile_name: sddc01-vs02-profile-http01
    healthmonitor_name: sddc01-vs02-hm01
    cookie_name: sddc01-vs02-cookie01
    persistence_name: sddc01-vs02-persistence01
    certificate_name: www.sddc.vmconaws.link
    sslprofile_name: sddc01-vs02-profile-ssl01
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Create HTTP Health Monitor
      avi_healthmonitor:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        state: present
        name: "{{ healthmonitor_name }}"
        http_monitor:
          http_request: 'HEAD / HTTP/1.0'
          http_response_code:
            - HTTP_2XX
            - HTTP_3XX
        receive_timeout: 4
        failed_checks: 3
        send_interval: 10
        successful_checks: 3
        is_federated: false
        type: HEALTH_MONITOR_HTTP
    - name: Create an Application Persistence setting using http cookie
      avi_applicationpersistenceprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        http_cookie_persistence_profile:
          always_send_cookie: false
          cookie_name: "{{ cookie_name }}"
          timeout: 15
        name: "{{ persistence_name }}"
        persistence_type: PERSISTENCE_TYPE_HTTP_COOKIE
        server_hm_down_recovery: HM_DOWN_PICK_NEW_SERVER
    - name: Create local load balancing pool
      avi_pool:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ pool_name }}"
        state: present
        application_persistence_profile_ref: '/api/applicationpersistenceprofile?name={{ persistence_name }}'
        health_monitor_refs:
          - '/api/healthmonitor?name={{ healthmonitor_name }}'
        lb_algorithm: LB_ALGORITHM_LEAST_CONNECTIONS
        servers:
          - ip:
              addr: "{{ pool_member01 }}"
              type: V4
            hostname: "{{ pool_member01_hostname }}"
          - ip:
              addr: "{{ pool_member02 }}"
              type: V4
            hostname: "{{ pool_member02_hostname }}"
    - name: Create an HTTP application profile
      avi_applicationprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        state: present
        http_profile:
          compression_profile:
            compressible_content_ref: '/api/stringgroup?name=System-Compressible-Content-Types'
            compression: true 
            remove_accept_encoding_header: true 
            type: AUTO_COMPRESSION
          connection_multiplexing_enabled: true 
          disable_keepalive_posts_msie6: true 
          disable_sni_hostname_check: false 
          enable_fire_and_forget: false 
          enable_request_body_buffering: false 
          enable_request_body_metrics: false 
          fwd_close_hdr_for_bound_connections: true 
          hsts_enabled: false 
          hsts_max_age: 365 
          hsts_subdomains_enabled: true 
          http2_enabled: false 
          http_to_https: true 
          httponly_enabled: false 
          keepalive_header: false 
          keepalive_timeout: 40000 
          max_bad_rps_cip: 0 
          max_bad_rps_cip_uri: 0 
          max_bad_rps_uri: 0 
          max_keepalive_requests: 100 
          max_response_headers_size: 48 
          max_rps_cip: 0 
          max_rps_cip_uri: 0 
          max_rps_unknown_cip: 0 
          max_rps_unknown_uri: 0 
          max_rps_uri: 0 
          post_accept_timeout: 30000 
          respond_with_100_continue: true 
          secure_cookie_enabled: false 
          server_side_redirect_to_https: false 
          spdy_enabled: false 
          spdy_fwd_proxy_mode: false 
          ssl_client_certificate_mode: SSL_CLIENT_CERTIFICATE_NONE 
          ssl_everywhere_enabled: false 
          use_app_keepalive_timeout: false 
          websockets_enabled: true 
          x_forwarded_proto_enabled: false 
          xff_alternate_name: X-Forwarded-For 
          xff_enabled: true
        name: "{{ httpprofile_name }}"
        type: APPLICATION_PROFILE_TYPE_HTTP
    - name: Create SSL profile with list of allow ciphers and TLS version
      avi_sslprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ sslprofile_name }}"
        accepted_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA"
        accepted_versions:
          - type: SSL_VERSION_TLS1_1
          - type: SSL_VERSION_TLS1_2
        send_close_notify: true
        ssl_rating:
          compatibility_rating: SSL_SCORE_EXCELLENT
          performance_rating: SSL_SCORE_EXCELLENT
          security_score: '100.0'
    - name: Create a virtual service
      avi_virtualservice:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ vs_name }}"
        state: present
        performance_limits:
          max_concurrent_connections: 1000
        ssl_profile_ref: '/api/sslprofile?name={{ sslprofile_name }}'
        application_profile_ref: '/api/applicationprofile?name={{ httpprofile_name }}'
        ssl_key_and_certificate_refs:
          - '/api/sslkeyandcertificate?name={{ certificate_name }}'
        vip:
          - ip_address:
              addr: "{{ vs_vip }}"
              type: V4
            vip_id: 1
        services:
          - port: "{{ vs_serviceport01 }}"
            enable_ssl: true
          - port: "{{ vs_serviceport02 }}"
        pool_ref: '/api/pool?name={{ pool_name }}'
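
To run the playbook, save it to a file and point ansible-playbook at it; since the play targets localhost with a local connection, no remote inventory is required. The file name below is just an example:

ansible-playbook avi-lb-day1.yml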

When we run the playbook, all of the configuration is completed in about 30 seconds; the same work done manually normally takes at least half an hour.

Health Monitor:

Session Persistence:

Load Balancing Pool:

HTTP Application Profile:

SSL Profile:

Virtual Service:

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part4

This blog is Part 4 of this series. If you have not gone through Part 1, Part 2 and Part 3, please go and check them out now.

In Part3, we set up an active-active global load balancing service for our testing application (https://www.sddc.vmconaws.link).

Some applications require stickiness between a client and a server. That is to say, all requests in a long-lived transaction from a client must be sent to the same server; otherwise, the application session may break. Unfortunately, in an active-active GSLB setup we cannot guarantee that a client session always reaches the same back-end server: a client that was initially served by a back-end server in SDDC01 may be redirected to SDDC02 when a new DNS query resolves to the SDDC02 VIP after the DNS TTL expires.

Avi Networks GSLB site cookie persistence is designed to handle this use case. When the traffic is received by the Avi LB in SDDC02, the Avi LB checks the cookie within the request and finds that the session was initially connected to SDDC01. The SDDC02 Avi LB then works as a proxy, forwarding the client’s traffic to the SDDC01 Avi LB. Please note that the source IP of the client traffic is changed to the local load balancing virtual IP (in our case, 192.168.100.100) by the SDDC02 Avi LB before the traffic is forwarded to SDDC01. This source NAT is required to ensure that the return traffic from the back-end server uses the same path as the incoming traffic.

Step 1: Add a Federated PKI Profile

Go to Templates—>Security—>PKI Profile and click the Create button to create a PKI profile. Input the parameters below (an equivalent Ansible task is sketched after the list):

  • Name: gslb-pki-server
  • Is Federated: Yes. That is, the PKI profile will be replicated across the federation: SDDC01 and SDDC02.
  • Certificate Authority: Add the self-signed certificate which we created in Part 2 as the CA. This ensures that the Avi load balancer will trust the self-signed certificate presented by the peering SDDC when it works as a proxy for the client.
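
If you would rather script this step in the style of the Day 1 playbook above, a minimal avi_pkiprofile task might look like the sketch below. The ca_certs structure follows the Avi PKIProfile object model, and the PEM file name is a hypothetical placeholder:

- name: Create a federated PKI profile
  avi_pkiprofile:
    controller: "{{ controller }}"
    username: "{{ username }}"
    password: "{{ password }}"
    api_version: "{{ api_version }}"
    state: present
    name: gslb-pki-server
    is_federated: true
    ca_certs:
      # PEM text of the self-signed certificate created in Part 2
      - certificate: "{{ lookup('file', 'selfsigned-ca.pem') }}"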

Step 2: Add a Federated Persistence Profile

Go to Templates—>Profiles—>Persistence and click the Create button to create a GSLB persistence profile. Input the parameters below (an Ansible sketch follows the list):

  • Name: gslb-persistence01
  • Type: GSLB Site
  • Application Cookie Name: site-affinity-persistence
  • Is Federated: Yes. That is, the persistence profile will be replicated across the federation: SDDC01 and SDDC02.
  • Persistence Timeout: 30 mins
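
A matching Ansible sketch, assuming the GSLB site cookie name maps to the app_cookie_persistence_profile block of the Avi object model (the field names here are my assumption, not taken from the GUI):

- name: Create a federated GSLB site persistence profile
  avi_applicationpersistenceprofile:
    controller: "{{ controller }}"
    username: "{{ username }}"
    password: "{{ password }}"
    api_version: "{{ api_version }}"
    state: present
    name: gslb-persistence01
    persistence_type: PERSISTENCE_TYPE_GSLB_SITE
    is_federated: true
    app_cookie_persistence_profile:
      prst_hdr_name: site-affinity-persistence  # assumed field for the cookie name
      timeout: 30                               # minutes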

Step 3: Add a Federated Health Monitor

Go to Templates—>Profiles—>Health Monitors and click the Create button to create a GSLB health monitor. Input the parameters below (an Ansible sketch follows the list):

  • Name: gslb-hm-https01
  • Type: HTTPS
  • Is Federated: Yes. That is, the health monitor will be replicated across the federation: SDDC01 and SDDC02.
  • Health Monitor Port: 443
  • Response Code: 2xx, 3xx
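
The equivalent Ansible task reuses the avi_healthmonitor module from the Day 1 playbook, with is_federated set to true; a minimal sketch:

- name: Create a federated HTTPS health monitor
  avi_healthmonitor:
    controller: "{{ controller }}"
    username: "{{ username }}"
    password: "{{ password }}"
    api_version: "{{ api_version }}"
    state: present
    name: gslb-hm-https01
    type: HEALTH_MONITOR_HTTPS
    is_federated: true
    monitor_port: 443
    https_monitor:
      http_request: 'HEAD / HTTP/1.0'
      http_response_code:
        - HTTP_2XX
        - HTTP_3XX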

Step 4: Change the GSLB service

Go to Applications—>GSLB Service and edit the existing GSLB service gslb-vs01 (an equivalent Ansible sketch follows the list).

  • Health Monitor: gslb-hm-https01
  • Site Persistence: Enabled
  • Site Cookie Application Persistence Profile: gslb-persistence01
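
In Ansible, the same change could be patched onto the existing GSLB service with the avi_gslbservice module; the field names below follow the Avi GslbService object and should be treated as a sketch, not a verified playbook:

- name: Enable site persistence on the existing GSLB service
  avi_gslbservice:
    controller: "{{ controller }}"
    username: "{{ username }}"
    password: "{{ password }}"
    api_version: "{{ api_version }}"
    avi_api_update_method: patch
    avi_api_patch_op: replace
    name: gslb-vs01
    health_monitor_refs:
      - '/api/healthmonitor?name=gslb-hm-https01'
    site_persistence_enabled: true
    application_persistence_profile_ref: '/api/applicationpersistenceprofile?name=gslb-persistence01'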

After we have completed the above configuration, a new pool called SP-gslb-vs01-sddc01-vs01 is added to the local load balancing virtual service sddc01-vs01 on the SDDC01 Avi LB.

When we check the member information of the new pool, the virtual IP of the local virtual service in the peering SDDC (SDDC02) is shown as the only pool member. Please note that this pool is created automatically by the Avi LB, so its settings cannot be changed by users.

Similarly, on the SDDC02 Avi LB, a new pool called SP-gslb-vs01-sddc02-vs01 is created and added to the local load balancing virtual service sddc02-vs01.

Let’s verify our work.

First, the GSLB DNS resolves the DNS name (www.sddc.vmconaws.link) of our testing application to the VIP in SDDC01. When we enter the URL https://www.sddc.vmconaws.link into the browser, we are served by centos02 in SDDC01 as expected.

Now change the DNS resolution to point to the SDDC02 VIP. Go to our testing application again: we are still served by the same back-end server, centos02 in SDDC01.

The cross-site traffic between the Avi LBs can be verified via a packet capture on the SDDC02 Avi LB. From the capture, we can see that the HTTPS session destined for SDDC02 is forwarded from SDDC02 to SDDC01 by the SDDC02 Avi LB.

Because GSLB site persistence is based on an HTTP cookie, it comes with a few restrictions:

  • Site persistence applies only to Avi VIPs.
  • Site persistence across multiple virtual services within the same Controller cluster is not supported.
  • For site persistence to be turned on for a global application, all of its individual members must run on active sites.
  • Site persistence applies only to HTTP or HTTPS traffic where the Avi LB terminates the TLS/SSL session.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part3

This blog is Part 3 of this series. If you have not gone through Part 1 and Part 2, please go and check them out now.

In Part 1 and Part 2, we deployed the Avi Load Balancers and completed the local load balancing setup in VMC SDDC01. To achieve high availability across different SDDCs, global load balancing is required. In this blog, let’s set up an active-active global load balancing service for our testing web application so that the web servers in both SDDCs can serve clients simultaneously.

Section 1: Infrastructure

Task 1: Follow Part 1 and Part 2 to deploy the Avi load balancer and set up local load balancing in VMC SDDC02, as shown in the diagram above.

  • Avi Controller Cluster
    • Cluster IP: 192.168.101.3
    • Controller Node1: 192.168.101.4
    • Controller Node2: 192.168.101.5
    • Controller Node3: 192.168.101.6
  • Service Engines
    • SE01: 192.168.101.10
    • SE02: 192.168.101.11
  • LB Virtual Service:
    • VIP: 192.168.100.100 with back-end member server Centos03 (192.168.100.25)
  • NAT: 52.26.167.214<->192.168.100.100

Task 2: Connectivity between VMC SDDC and TGW

Please refer to my friend Avnish Tripathi’s blog (https://vmtechie.blog/2019/09/15/connect-aws-transit-gateway-to-vmware-cloud-on-aws/) to connect VMC SDDC01 and VMC SDDC02 to an AWS Transit Gateway with a route-based VPN.

Task 3: Set up NAT for DNS Service virtual IP in VMC Console

SDDC01: static NAT 54.201.246.64<->192.168.96.101. Here 192.168.96.101 is the DNS virtual service VIP in SDDC01.

SDDC02: static NAT 52.32.129.180<->192.168.100.101. Here 192.168.100.101 is the DNS virtual service VIP in SDDC02.

Task 4: Add Firewall rules for GSLB

  • Allow inter-SDDC traffic in the VMC console, as below.
  • Allow DNS traffic from the Internet to GSLB DNS service virtual IP

Task 5: DNS sub-domain delegation

In the DNS server, delegate the sub-domain (sddc.vmconaws.link) to the public IPs corresponding to the two DNS virtual service VIPs; the two to-be-defined DNS virtual services will then work as the name servers of the sub-domain, as sketched below.
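
Conceptually, the delegation records in the parent vmconaws.link zone would look like the sketch below. The ns1/ns2 host names are hypothetical; the A records are the NAT public IPs from Task 3:

; in the vmconaws.link zone (illustrative)
sddc      IN  NS  ns1.sddc.vmconaws.link.
sddc      IN  NS  ns2.sddc.vmconaws.link.
ns1.sddc  IN  A   54.201.246.64   ; SDDC01 DNS virtual service
ns2.sddc  IN  A   52.32.129.180   ; SDDC02 DNS virtual service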

Section 2: Enable GSLB

Task 1: Create a DNS virtual service

In the SDDC01 Avi Controller GUI, go to Application—>Virtual Services—>Create Virtual Service, then input the parameters below (a hedged Ansible equivalent appears after the list):

  • Name: sddc01-g-dns01
  • IPv4 VIP: 192.168.96.101
  • Application Profile: System-DNS
  • TCP/UDP Profile: System-UDP-Per-Pkt
  • Service Port: 53
  • Service Port: 53, override TCP/UDP with System-TCP-Proxy
  • Pool: leave blank

Leave the rest of the settings as default.
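
For reference, here is a sketch of this DNS virtual service using the avi_virtualservice module from the Day 1 playbook; the override_network_profile_ref field for the TCP port 53 service is my assumption based on the Avi Service object:

- name: Create the GSLB DNS virtual service in SDDC01
  avi_virtualservice:
    controller: "{{ controller }}"
    username: "{{ username }}"
    password: "{{ password }}"
    api_version: "{{ api_version }}"
    state: present
    name: sddc01-g-dns01
    application_profile_ref: '/api/applicationprofile?name=System-DNS'
    network_profile_ref: '/api/networkprofile?name=System-UDP-Per-Pkt'
    vip:
      - ip_address:
          addr: 192.168.96.101
          type: V4
        vip_id: 1
    services:
      - port: 53
      - port: 53
        override_network_profile_ref: '/api/networkprofile?name=System-TCP-Proxy'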

In SDDC02, create a similar DNS virtual service (sddc02-g-dns01) with VIP 192.168.100.101.

Task 2: GSLB Site

Avi uses GSLB sites to define different data centers. GSLB sites fall into two broad categories — Avi sites and external sites. This blog focuses on Avi sites. Each Avi site is characterized as either an active or a passive site. Active sites are further classified into two types — GSLB leader and followers. The active site from which the initial GSLB site configuration is performed is the designated GSLB leader. GSLB configuration changes are permitted only by logging in to the leader, which propagates those changes to all accessible followers. The only way to switch the leadership role to a follower is by overriding the leader’s configuration from a follower site. This override can be invoked in the case of site failure or for maintenance.

In our setup, SDDC01 will work as “Leader” site and SDDC02 will work as “Follower” site.

In SDDC01 Avi Controller GUI, go to Infrastructure—>GSLB and click the Edit icon to enable GSLB Service.

In the “New GSLB Configuration” window, input the parameters as below:

  • Name: sddc01-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • GSLB Subdomain: sddc.vmconaws.link

Then click “Save and Set DNS Virtual Service”.

Select the DNS virtual service defined in Task 1 as the “DNS Virtual Services” entry and configure “sddc.vmconaws.link” as the subdomain, then save the change.

Now the GSLB setup is as below.

Click the “Add New Site” button to add SDDC02 as the second GSLB site. Then input the parameters below in the “New GSLB Site” window:

  • Name: sddc02-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • IP Address: 192.168.101.3 (SDDC02 Avi Cluster VIP)
  • Port: 443

Click the “Save and Set DNS Virtual Services” button. Then select the SDDC02 DNS virtual service (sddc02-g-dns01) created in Task 1 as the “DNS Virtual Services” entry and configure “sddc.vmconaws.link” as the subdomain, then save the change.

Now the GSLB Site configuration is completed. We can see that “sddc01-gslb” works as the “Leader” site and “sddc02-gslb” works as the “Active” site.

Typically, the VIP configured in a local virtual service (configured as a GSLB pool member) is a private IP address. In our configuration, the VIPs are 192.168.x.x, but these IP addresses are not reachable by Internet clients. To handle this, we have to enable the NAT-aware public-private GSLB feature. Go to Infrastructure—>GSLB—>Active Members—>sddc01-gslb and click the Edit icon. In the advanced settings, input the following parameters:

  • Type: Private
  • IP Address:
    • 10.0.0.0/8
    • 172.16.0.0/12
    • 192.168.0.0/16

Section 3: Create a GSLB Service

We are now ready to create a GSLB service for our application (www.sddc.vmconaws.link). To achieve an active-active GSLB service and distribute the load evenly across the three back-end servers (two in SDDC01 and one in SDDC02), we developed the following GSLB service design:

  • The GSLB service includes 1 GSLB pool.
  • There is one GSLB pool member in each SDDC.
  • Groups Load Balancing Algorithm: Priority-based
  • Pool Members Load Balancing Algorithm: Round Robin

Go to Application—>GSLB Services—>Create and click the “Advanced Setup” button. In the “New GSLB Service” window, input the following parameters:

  • Name: gslb-vs01
  • Application Name: www
  • Subdomain: .sddc.vmconaws.link
  • Health Monitor: System-GSLB-HTTPS
  • Group Load Balancing Algorithm: Priority-based
  • Health Monitor Scope: All Members
  • Controller Health Status: Enabled

Then click “Add Pool” and input the following parameters (an Ansible sketch of the full GSLB service follows the list):

  • Name: gslb-vs01-pool
  • Pool Members Load Balancing Algorithm: Round Robin (this means clients will be sent to the local load balancers in both SDDC01 and SDDC02).
  • Pool Member (SDDC01):
    • Site Cluster Controller: sddc01-gslb
    • Virtual Service: sddc01-vs01
    • Public IP: 34.216.94.228
    • Ratio: 2 (This virtual service will receive 2/3 of the load.)
  • Pool Member (SDDC02):
    • Site Cluster Controller: sddc02-gslb
    • Virtual Service: sddc02-vs01
    • Public IP: 52.26.167.214
    • Ratio: 1 (This virtual service will receive 1/3 of the load.)
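
A sketch of the same GSLB service with the avi_gslbservice module is below. The cluster and virtual service UUID variables are hypothetical placeholders (in a real playbook they would be looked up via the Avi API), and the member structure follows the Avi GslbService object model:

- name: Create the active-active GSLB service
  avi_gslbservice:
    controller: "{{ controller }}"
    username: "{{ username }}"
    password: "{{ password }}"
    api_version: "{{ api_version }}"
    state: present
    name: gslb-vs01
    domain_names:
      - www.sddc.vmconaws.link
    health_monitor_refs:
      - '/api/healthmonitor?name=System-GSLB-HTTPS'
    controller_health_status_enabled: true
    groups:
      - name: gslb-vs01-pool
        priority: 10
        algorithm: GSLB_ALGORITHM_ROUND_ROBIN
        members:
          - cluster_uuid: "{{ sddc01_cluster_uuid }}"  # placeholder
            vs_uuid: "{{ sddc01_vs01_uuid }}"          # placeholder
            ip:
              addr: 192.168.96.100
              type: V4
            public_ip:
              ip:
                addr: 34.216.94.228
                type: V4
            ratio: 2
            enabled: true
          - cluster_uuid: "{{ sddc02_cluster_uuid }}"  # placeholder
            vs_uuid: "{{ sddc02_vs01_uuid }}"          # placeholder
            ip:
              addr: 192.168.100.100
              type: V4
            public_ip:
              ip:
                addr: 52.26.167.214
                type: V4
            ratio: 1
            enabled: true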

We will change the following parameters as well for this GSLB service.

Now we have completed the setup of active-active GSLB for our web service.

Let’s verify our work.

  • The GSLB DNS service responds to DNS queries for www.sddc.vmconaws.link with the public IP of the SDDC01 web virtual service or the public IP of the SDDC02 web virtual service, via the round-robin algorithm.
  • The web servers in both SDDCs serve the clients simultaneously.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2

This blog is Part 2 of this series. If you have not gone through Part 1, please go and check it out now.

In Part 2, we will demo how to set up a local load balancing virtual service for a web-based application on our deployed Avi load balancer. The IP address allocation and network connectivity are shown below.

Hundreds of features are available when setting up a local load balancing service on the Avi load balancer. In this blog, we will focus on the features most widely used in enterprise load balancing solutions:

  • TLS/SSL Termination
  • Session Persistence
  • Health Monitor

Section 1: TLS/SSL Termination

The following deployment architectures are supported by Avi Load balancer (LB) for SSL:

  • None: SSL traffic is handled as pass-through (layer 4), flowing through Avi LB without terminating the encrypted traffic.
  • Client-side: Traffic from the client to Avi LB is encrypted, with unencrypted HTTP to the back-end servers.
  • Server-side: Traffic from the client to Avi LB is unencrypted HTTP, with encrypted HTTPS to the back-end servers.
  • Both: Traffic from the client to Avi LB is encrypted and terminated at Avi LB, which then re-encrypts traffic to the back-end server.
  • Intercept: Terminate client SSL traffic, send it unencrypted over the wire for taps to intercept, then encrypt to the destination server.

We will use Client-side deployment architecture here.

Step 1: Get or Generate a certificate

Please note that a CA-signed certificate is highly recommended for any production system. We will use a self-signed certificate here for simplicity. Go to Templates—>Security—>SSL/TLS Certificates, where all installed certificates are listed. A self-signed certificate is shown; its subject name is www.sddc.vmconaws.link.

Step 2: Create a customized SSL/TLS profile

The system default SSL/TLS profile still includes support for TLS 1.0, which is no longer considered a secure protocol. So we will go to Templates—>Security—>SSL/TLS Profile and create a new SSL/TLS profile which excludes TLS 1.0, as below:

Section 2: Session Persistence

Cookie persistence is the most common persistence mechanism for a web-based application. Here we will define a persistence profile for our testing web application. Go to Templates—>Profiles—>Persistence and click the “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-pp01
  • Type: HTTP Cookie
  • HTTP Cookie Name: vmconaws-demo
  • Persistence Timeout: 30 mins

Please note that the cookie payload contains the back-end server IP address and port, encrypted with AES-256.

Section 3: Health Monitor

Avi load balancer uses the health monitor to check if the back-end servers in the load balancing pool are healthy to provide the required service or not. There are two kinds of health monitors:

  • Active Health Monitor: Active health monitors send proactive queries to servers, synthetically mimicking a client. Send and receive timeout intervals may be defined, which statically determine whether a server response is deemed successful or failed.
  • Passive Health Monitor: While active health monitors provide a binary good/bad analysis of server health, passive health monitors provide a subtler check by attempting to understand and react to the client-to-server interaction. For example, if a server quickly responds with valid responses (such as HTTP 200), all is well; however, if the server sends back errors (such as TCP resets or HTTP 5xx errors), the server is assumed to have problems.

Only active health monitors may be edited. The passive monitor has no settings.

Note: Best practice is to attach both a passive and an active health monitor to each pool.

Let’s create an active health monitor for our application. Go to Templates—>Profiles—>Health Monitors and click the “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-hm01
  • Server Response Data: sddc01
  • Server Response Code: 2xx
  • Health Monitor Port: 80 (Please note that we keep the default setting here, but this option can be very useful for some cluster-based applications.)

Section 4: Create a Load Balancing Pool

Now it is time to create the load balancing pool. Go to Application—>Pools and click “Create Pool”. In the Step 1 window, input the parameters as below:

  • Load Balance: Least Connections
  • Persistence: sddc01-vs01-pp01

Add an active health monitor: sddc01-vs01-hm01.

Add two member servers:

  • centos01: 192.168.96.25
  • centos02: 192.168.96.26

Section 5: Create a Virtual Service

We will use the “Advanced Setup” to create a virtual service for our web application.

In “Step 1: Setting” window, input the parameters as below:

We use the system pre-defined application profile “System-HTTP” as the applied Application Profile for simplicity here. The “System-HTTP” profile includes comprehensive configuration options for a web application, which could easily fill a separate blog. Let’s list a few here:

  • X-Forwarded-For: Avi SE will insert an X-Forwarded-For (XFF) header into the HTTP request headers when the request is passed to the server. This feature is enabled.
  • Preserve Client IP Address: Avi SE will use the client-IP rather than SNAT IP for load-balanced connections from the SE to back-end application servers. This feature is disabled.
  • HTTP-to-HTTPS Redirect: Client requests received via HTTP will be redirected to HTTPS. This feature is disabled.

Leave all settings as default for Steps 2 and 3.

In “Step 4: Advanced”, input the parameters as below:

  • Use VIP as SNAT: enabled
  • SE Group: Default-Group

Section 6: VMC Setup

To enable user access to our testing web application, two changes are required in the VMC SDDC.

  • Network Address Translation
  • A CGW firewall rule to allow traffic from the Internet to the LB VIP (192.168.96.100) over HTTPS

So far, we have completed all the load balancing configuration. Let’s verify our work.

Application web page (https://www.sddc.vmconaws.link):

Session Persistence Cookie:

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part1

When we design a highly available (HA) infrastructure for a mission-critical application, local load balancing and global load balancing are always essential components of the solution. This series of blogs will demonstrate how to build an enterprise-level local and global load balancing service in VMC on AWS SDDC with the Avi Networks load balancer.

This series of blogs will cover the following topics:

  1. How to deploy Avi load balancer in a VMC SDDC;
  2. How to set up local load balancing service to achieve HA within a VMC SDDC (https://davidwzhang.com/2019/09/21/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part2/)
  3. How to set up global load balancing service to achieve HA across different SDDCs which are in different AWS Availability Zones (https://davidwzhang.com/2019/09/30/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part3/)
  4. How to set up global load balancing site affinity (https://davidwzhang.com/2019/10/08/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part4/)
  5. How to automate Avi LB with Ansible (https://davidwzhang.com/2019/10/14/automate-avi-lb-service-with-ansible/)

By the end of this series, we will have completed an HA infrastructure build as shown in the following diagram: the design leverages local and global load balancing services to provide a 99.99%+ SLA to a web-based mission-critical application.

The Avi load balancer platform is built on software-defined architectural principles which separate the data plane and control plane. The product components include:

  • Avi Controller (control plane): The Avi Controller stores and manages all policies related to services and management. HA of the Avi Controller requires 3 separate Controller instances, configured as a 3-node cluster.
  • Avi Service Engines (data plane): Each Avi Service Engine runs on its own virtual machine. The Avi SEs provide the application delivery services to end-user traffic, and also collect real-time end-to-end metrics for traffic between end-users and applications.

In Part 1, we will cover the deployment of Avi load balancer. The diagram below shows the controller and service engine (SE) network connectivity and IP address allocation.

Depending on the level of vCenter access provided, the Avi load balancer supports 3 modes of deployment. In VMC on AWS, only the “no-access” mode is supported. Please refer to https://avinetworks.com/docs/ for more information about Avi load balancer deployment modes in VMware Cloud.

Section 1: Controller Cluster

Let’s start by deploying the Avi controllers and setting up the controller cluster. First, download the OVA package for the controller appliance. In this demo, the version of the Avi load balancer controller is v18.2.5. After the download, deploy the controller virtual appliance via the “Deploy OVF Template” wizard in the VMC SDDC vCenter. In the “Customize template” window, input the parameters as below:

  • Management interface IP: 192.168.80.4
  • Management interface Subnet mask: 255.255.255.0
  • Default gateway: 192.168.80.1
  • Sysadmin login authentication key: Password

After the 1st controller appliance is deployed and powered on, we are ready to start the initial controller configuration. Go to the controller management GUI at https://192.168.80.4.

(1) Username/Password

(2) DNS and NTP

(3) SMTP

(4) Multiple Tenants? Select No here for simplicity.

The initial configuration for the 1st controller is now complete. As the first controller of the cluster, it receives the “Leader” role; the second and third controllers will work as “Followers”. When logged in to the GUI of this first controller, go to Administration—>Controller, as shown below.

Similarly, deploy and perform the initial configuration for the 2nd (192.168.80.5) and 3rd (192.168.80.6) controllers.

In the management GUI of the 1st controller, go to Administration—>Controller and click “Edit”. In the “Edit Controller Configuration” window, add the second and third nodes into the cluster as below.

After a few minutes, the cluster is set up successfully.
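
For completeness, cluster formation can also be automated with the avi_cluster Ansible module; a minimal sketch, with field names following the Avi Cluster API object:

- name: Form the 3-node controller cluster
  avi_cluster:
    controller: 192.168.80.4
    username: admin
    password: "{{ password }}"
    api_version: 18.2.5
    state: present
    name: cluster01  # hypothetical cluster name
    virtual_ip:
      addr: 192.168.80.3
      type: V4
    nodes:
      - name: 192.168.80.4
        ip:
          addr: 192.168.80.4
          type: V4
      - name: 192.168.80.5
        ip:
          addr: 192.168.80.5
          type: V4
      - name: 192.168.80.6
        ip:
          addr: 192.168.80.6
          type: V4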

Section 2: Service Engine

Now we are ready to deploy the SE virtual appliances. In this demo, two SEs will be deployed. These 2 SEs are added into the default Service Engine Group with the default HA mode (N+M).

Step 1: Create and download the SE image.

Go to Infrastructure—>Clouds, click the download icon and select the OVA format. Please note that this SE OVA package is only for the linked controller cluster; it cannot be used with another controller cluster.

Step 2: Get the cluster UUID and authentication token for SE deployment.

Step 3: In the SDDC vCenter, run the “Deploy OVF Template” wizard to import the SE OVA package. In the “Customize template” window, input the parameters:

  • IP Address of the Avi Controller: 192.168.80.3 (cluster IP of the controller)
  • Authentication token for Avi Controller: as in Step 2
  • Controller Cluster UUID for Avi Controller: as in Step 2
  • Management Interface IP Address: 192.168.80.10
  • Management Interface Subnet Mask: 255.255.255.0
  • Default Gateway: 192.168.80.1
  • DNS Information: 10.1.1.151
  • Sysadmin login authentication key: Password

Please note that the second vNIC will be used as the SE data interface.

Then continue to deploy the second SE (mgmt IP: 192.168.80.11/24).

The deployed SEs will register themselves with the controller cluster as below.

Step 4: Now the SEs have established control and management plane communication with the controller cluster. It is time to set up the SEs’ data network.

During the setup, I found that the vNICs of the virtual appliance VM and the SE Ethernet interfaces are not mapped one-to-one; for example, the data interface is the 2nd vNIC of the SE VM in vCenter but shows up as Ethernet 5 in the SE network setup. To find the correct mapping, use the MAC address of the data vNIC: go to the SDDC vCenter and note the MAC address of the SE data interface.

In the controller management GUI, go to Infrastructure—>Service Engine and edit the selected SE. In the interface list, select the interface with the matching MAC address, then provide the IP address and subnet mask.

The final step is to add a gateway for this data interface. Go to Infrastructure—>Routing—>Static Route and create a new static default route.

Tip: a VM-VM anti-affinity policy is highly recommended to enhance the HA of the controller and service engine virtual appliances.

This is the end of the blog. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Okta IdP

The Federated Identity feature of VMware Cloud on AWS can be integrated with any 3rd-party IdP that supports SAML version 2.0. In this integration model, the customer-dedicated vIDM tenant works as the SAML Service Provider. If the 3rd-party IdP is set up to perform multi-factor authentication (MFA), the customer will be prompted for MFA when accessing VMware Cloud services. In this blog, the integration with Okta, one of the most popular IdPs, will be demoed.

Disclaimer:

The Okta IdP settings in this blog are meant to demo the integration with vIDM; they may not be best practice for your environment or meet your business and security requirements.

Note: please complete the first part of the integration as per my first blog (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) of this series before moving forward.

To add the same users and user groups to the Okta IdP as are configured in the vIDM tenant, we need to integrate Okta with the corporate Active Directory (AD). The integration uses Okta’s lightweight agent.

Click the “Directory Integration” in Okta UI.

Click “Add Active Directory”.

The Active Directory integration setup wizard will start; click “Set Up Active Directory”.

Download the agent as prompted in the window below.

This agent can be installed on Windows Server 2008 R2 or later, and the installation is quite straightforward. Once the agent installation is completed, you need to set up the AD integration. In the basic settings window, select the Organizational Units (OUs) that you would like to sync users or groups from, and make sure that “Okta username format” is set to use the User Principal Name (UPN).

In the “Build User Profile” window, select any custom schema attributes which need to be included in the Okta user profile and click Next.

Click Done to finish the integration setup.

The Okta directory setting window will pop up.

Enable Just-In-Time provisioning and set Schedule Import to perform a user import every hour. Review and save the settings.

Now go to the Import tab and click “Import Now” to import the users from the corporate AD.

As this is the first import of users from the corporate AD, select “Full Import” and click Import.

When the scan is finished, Okta will report the result. Click OK.

Select the user(s) to be imported and confirm the user assignment. Note: the user jsmith@lab.local is imported here and will be used for the final integration testing.

Now it is time to set up the SAML IdP in Okta.

Go to the Okta Classic UI Applications tab and click “Add Application”.

Click “Create New App”;

Select Web as the Platform and “SAML 2.0” as the Sign on method, then click Create;

Type in the App name (“csp-vidm” is used as the example app name here) and click Next;

Two configuration items in the “Create SAML Integration” window that pops up are mandatory. This information can be copied from the Identity Provider settings within the vIDM tenant.

Go to the vIDM tenant administrator console, click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Type in the “Identity Provider Name”; here the example name is “Okta01”.

Go to the bottom of this IdP creation window and click “Service Provider (SP) Metadata”.

A new window will pop up as the below:

The entity ID and HTTP-POST location are the required pieces of information for the Okta IdP SAML setting. Copy the entity ID URL into the “Audience URI (SP Entity ID)” field and the HTTP-POST location into the “Single sign on URL” field in the Okta “Create SAML Integration” window.

Leave all other configuration items as default and click Next;

In the Feedback window, indicate that the newly created app is an internal app and click Finish.

A “Sign On settings” window will pop up as below; click the “Identity Provider metadata” link.

The Identity Provider metadata is displayed as an XML file. Select all the content of this XML file and copy it.

Paste the Okta IdP metadata into the SAML Metadata field and click “Process IdP Metadata” in the vIDM 3rd-party identity provider creation window.

The “SAML AuthN Request Binding” and “Name ID format mapping from SAML Response” will be updated as below:

Select the “lab.local” directory for the users who can authenticate with this new 3rd-party IdP and leave the Network as the default “ALL RANGES”. Then create a new authentication method called “Okta Auth” with the SAML Context “urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtected”. Please note that the name of this newly created authentication method has to be different from any existing authentication method.

Leave all other configuration checkboxes unchecked and click Add.

The 3rd-party IdP has now been successfully added.

The last step of the vIDM setup for this Okta integration is updating the default access policy to use the newly defined authentication method “Okta Auth”. Please follow the steps in my previous blog of this series to perform the required update. The updated default access policy should look similar to the one below.

Before testing the setup, go to the Okta UI to assign user(s) to the newly defined SAML 2.0 web application “csp-vidm”. Click Assignments.

Click Assign and select “Assign to People”.

In the “Assign csp-vidm to People” window, assign the user John Smith (jsmith@lab.local); this allows John Smith to use this SAML 2.0 application.

After the assignment is completed, the user John Smith appears under the assignments of the SAML 2.0 application “csp-vidm”.

Instead of assigning individual users, AD groups can be assigned to the SAML application as well.

Finally, everything is ready to test the integration.

Open a new Incognito window in a Chrome browser, enter the vIDM tenant URL and press Enter.

In the login window, type the username jsmith@lab.local and click Next.

The authentication session is redirected to Okta.

Type in the Username and Password and click “Sign In”.

The user John Smith (jsmith@lab.local) then successfully logs in to the vIDM tenant.

This is the end of this demo. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Active Directory

This blog is the second blog of the Federated Identity Management for VMC on AWS series. Please complete the vIDM connector installation and setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before moving forward.

VMware Cloud on AWS Federated Identity management supports several kinds of authentication methods. This blog will demo the basic method: authentication with the customer’s corporate Active Directory (AD).

When VMC on AWS customers use AD for authentication, outbound-only connection mode is highly recommended. This mode does not require any inbound firewall ports to be opened: only outbound connectivity from the vIDM connector to the VMware SaaS vIDM tenant on port 443 is required. All user and group syncing from your enterprise directory, as well as user authentication, is handled by the vIDM connector.

To enable outbound-only mode, update the settings of the Built-in Identity Provider. In the user section of the Built-in Identity Provider settings, select the newly created directory “lab.local” and add the newly created connector “vidmcon01.lab.local”.

After the connector is added successfully, select Password (cloud deployment) under “Connector Authentication Methods” and click Save.

Now it is time to update the access policy to use the corporate Active Directory to authenticate VMC users.

Go to Identity & Access Management.

Click “Edit DEFAULT POLICY” and the “Edit Policy” window will pop up. Click Next.

Click “ADD POLICY RULE”.

The “Add Policy Rule” window will then pop up. At this stage, just leave the first two configuration items as default: “ALL RANGES” and “ALL Device Types”. In the “and user belongs to group(s)” item, search for and add all 3 synced groups (sddc-admins, sddc-operators and sddc-readonly) to allow the users in these 3 groups to log in.

Add Password (cloud deployment) as the authentication method.

Use Password (Local Directory) as the fallback authentication method and click Save.

There are now 3 rules defined in the default access policy. Drag the newly defined rule to the top of the rules table; this makes sure that the new rule is evaluated first when a user tries to log in.

The rules table now shows as below. Click Next.

Click Save to keep the changes to the default access policy.

You are now ready to test your authentication setup. Open a new Incognito window in your Chrome browser and connect to the vIDM URL. Type in the username (jsmith@lab.local) and click Next.

Type in the Active Directory password for jsmith@lab.local and click “Sign in”.

You can then see that jsmith@lab.local has successfully logged in to vIDM!

Thank you very much for reading!