Automate Avi LB Service with Ansible

The Avi Networks load balancing platform offers excellent automation capabilities, which allow us to automate the load balancing service with popular Infrastructure as Code tools such as Ansible and Terraform. In this blog, I will demonstrate Day 1 automation using Ansible (version 2.8.5).

[root@code1 ~]# ansible --version
ansible 2.8.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

The link below lists all available Ansible modules for Avi LB automation:

https://docs.ansible.com/ansible/latest/modules/list_of_network_modules.html#avi

Avi has developed an Ansible role called avinetworks.avisdk that packages all of the Avi Ansible modules, which eases our lives further. The modules depend on the Avi Python SDK, which is installed via pip:

pip install avisdk
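
The role itself is published on Ansible Galaxy and is installed with:

ansible-galaxy install avinetworks.avisdk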

In this blog, we will automate the local load balancing configuration which we configured manually in my other blog: Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2.

In summary, we are going to:

  • Create an HTTP Health Monitor (sddc01-vs02-hm01).
  • Create an Application Persistence profile (sddc01-vs02-persistence01) based on HTTP cookie.
  • Create a local load balancing pool (sddc01-vs02-pool01) with 2 pool members. Pool member health will be checked with the newly created health monitor, and session persistence for this pool will use the cookie persistence profile defined in the previous step.
  • Create an HTTP application profile (sddc01-vs02-profile-http01) which enables compression and HTTP-to-HTTPS redirect.
  • Create an SSL profile (sddc01-vs02-profile-ssl01) which only allows stronger ciphers and TLS 1.1 and 1.2.
  • Create a local load balancing service which leverages the newly created HTTP application profile and SSL profile to distribute the traffic to the load balancing pool sddc01-vs02-pool01.

The complete Ansible playbook is shown below:

---
- hosts: localhost
  connection: local
  vars:
    controller: 192.168.80.3
    username: admin
    password: Password
    api_version: 18.2.5
    vs_name: sddc01-vs02
    vs_vip: 192.168.96.110
    vs_serviceport01: 443
    vs_serviceport02: 80
    pool_name: sddc01-vs02-pool01
    pool_member01: 192.168.96.25
    pool_member01_hostname: centos01
    pool_member02: 192.168.96.26
    pool_member02_hostname: centos02
    httpprofile_name: sddc01-vs02-profile-http01
    healthmonitor_name: sddc01-vs02-hm01
    cookie_name: sddc01-vs02-cookie01
    persistence_name: sddc01-vs02-persistence01
    certificate_name: www.sddc.vmconaws.link
    sslprofile_name: sddc01-vs02-profile-ssl01
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Create HTTP Health Monitor
      avi_healthmonitor:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{api_version}}"
        state: present
        name: "{{ healthmonitor_name }}"
        http_monitor:
          http_request: 'HEAD / HTTP/1.0'
          http_response_code:
            - HTTP_2XX
            - HTTP_3XX
        receive_timeout: 4
        failed_checks: 3
        send_interval: 10
        successful_checks: 3
        is_federated: false
        type: HEALTH_MONITOR_HTTP
    - name: Create an Application Persistence setting using http cookie
      avi_applicationpersistenceprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{api_version}}"
        http_cookie_persistence_profile:
          always_send_cookie: false
          cookie_name: "{{ cookie_name }}"
          timeout: 15
        name: "{{ persistence_name }}"
        persistence_type: PERSISTENCE_TYPE_HTTP_COOKIE
        server_hm_down_recovery: HM_DOWN_PICK_NEW_SERVER
    - name: Create local load balancing pool
      avi_pool:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ pool_name }}"
        state: present
        application_persistence_profile_ref: '/api/applicationpersistenceprofile?name={{ persistence_name }}'
        health_monitor_refs:
          - '/api/healthmonitor?name={{ healthmonitor_name }}'
        lb_algorithm: LB_ALGORITHM_LEAST_CONNECTIONS
        servers:
          - ip:
              addr: "{{ pool_member01 }}"
              type: V4
            hostname: "{{ pool_member01_hostname }}"
          - ip:
              addr: "{{ pool_member02 }}"
              type: V4
            hostname: "{{ pool_member02_hostname }}"
    - name: Create an HTTP application profile
      avi_applicationprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        state: present
        http_profile:
          compression_profile:
            compressible_content_ref: '/api/stringgroup?name=System-Compressible-Content-Types'
            compression: true 
            remove_accept_encoding_header: true 
            type: AUTO_COMPRESSION
          connection_multiplexing_enabled: true 
          disable_keepalive_posts_msie6: true 
          disable_sni_hostname_check: false 
          enable_fire_and_forget: false 
          enable_request_body_buffering: false 
          enable_request_body_metrics: false 
          fwd_close_hdr_for_bound_connections: true 
          hsts_enabled: false 
          hsts_max_age: 365 
          hsts_subdomains_enabled: true 
          http2_enabled: false 
          http_to_https: true 
          httponly_enabled: false 
          keepalive_header: false 
          keepalive_timeout: 40000 
          max_bad_rps_cip: 0 
          max_bad_rps_cip_uri: 0 
          max_bad_rps_uri: 0 
          max_keepalive_requests: 100 
          max_response_headers_size: 48 
          max_rps_cip: 0 
          max_rps_cip_uri: 0 
          max_rps_unknown_cip: 0 
          max_rps_unknown_uri: 0 
          max_rps_uri: 0 
          post_accept_timeout: 30000 
          respond_with_100_continue: true 
          secure_cookie_enabled: false 
          server_side_redirect_to_https: false 
          spdy_enabled: false 
          spdy_fwd_proxy_mode: false 
          ssl_client_certificate_mode: SSL_CLIENT_CERTIFICATE_NONE 
          ssl_everywhere_enabled: false 
          use_app_keepalive_timeout: false 
          websockets_enabled: true 
          x_forwarded_proto_enabled: false 
          xff_alternate_name: X-Forwarded-For 
          xff_enabled: true
        name: "{{ httpprofile_name }}"
        type: APPLICATION_PROFILE_TYPE_HTTP
    - name: Create SSL profile with list of allow ciphers and TLS version
      avi_sslprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ sslprofile_name }}"
        accepted_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA"
        accepted_versions:
          - type: SSL_VERSION_TLS1_1
          - type: SSL_VERSION_TLS1_2
        send_close_notify: true
        ssl_rating:
          compatibility_rating: SSL_SCORE_EXCELLENT
          performance_rating: SSL_SCORE_EXCELLENT
          security_score: '100.0'
    - name: Create a virtual service
      avi_virtualservice:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ vs_name }}"
        state: present
        performance_limits:
          max_concurrent_connections: 1000
        ssl_profile_ref: '/api/sslprofile?name={{ sslprofile_name }}'
        application_profile_ref: '/api/applicationprofile?name={{ httpprofile_name }}'
        ssl_key_and_certificate_refs:
          - '/api/sslkeyandcertificate?name={{ certificate_name }}'
        vip:
          - ip_address:
              addr: "{{ vs_vip }}"
              type: V4
            vip_id: 1
        services:
          - port: "{{ vs_serviceport01 }}"
            enable_ssl: true
          - port: "{{ vs_serviceport02 }}"
        pool_ref: '/api/pool?name={{ pool_name }}'
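
Save the playbook to a file (I will call it avi-lb.yml here; the name is arbitrary) and run it with ansible-playbook:

ansible-playbook avi-lb.yml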

When we run the playbook, all of the configuration is completed in about 30 seconds; the same work done manually normally takes at least half an hour.

[Screenshots: Health Monitor, Session Persistence, Load Balancing Pool, HTTP Application Profile, SSL Profile, Virtual Service]

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part1

When we design a highly available (HA) infrastructure for a mission-critical application, local load balancing and global load balancing are always essential components of the solution. This series of blogs will demonstrate how to build an enterprise-level local and global load balancing service in a VMC on AWS SDDC with the Avi Networks load balancer.

This series of blogs will cover the following topics:

  1. How to deploy Avi load balancer in a VMC SDDC
  2. How to set up local load balancing service to achieve HA within a VMC SDDC (https://davidwzhang.com/2019/09/21/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part2/)
  3. How to set up global load balancing service to achieve HA across different SDDCs which are in different AWS Availability Zones (https://davidwzhang.com/2019/09/30/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part3/)
  4. How to set up global load balancing site affinity (https://davidwzhang.com/2019/10/08/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part4/)
  5. How to automate Avi LB with Ansible (https://davidwzhang.com/2019/10/14/automate-avi-lb-service-with-ansible/)

By the end of this series, we will have built an HA infrastructure as shown in the following diagram: the design leverages local load balancing and global load balancing services to provide a 99.99%+ SLA to a web-based mission-critical application.

The Avi load balancer platform is built on software-defined architectural principles which separate the data plane and control plane. The product components include:

  • Avi Controller (control plane): the Avi Controller stores and manages all policies related to services and management. HA of the Avi Controller requires 3 separate controller instances, configured as a 3-node cluster.
  • Avi Service Engines (data plane): each Avi Service Engine runs on its own virtual machine. The Avi SEs provide the application delivery services to end-user traffic, and also collect real-time end-to-end metrics for traffic between end-users and applications.

In Part 1, we will cover the deployment of Avi load balancer. The diagram below shows the controller and service engine (SE) network connectivity and IP address allocation.

Depending on the level of vCenter access provided, the Avi load balancer supports 3 modes of deployment. In VMC on AWS, only the “no-access” mode is supported. Please refer to https://avinetworks.com/docs/ for more information about Avi load balancer deployment modes in VMware Cloud.

Section 1: Controller Cluster

Let’s start by deploying the Avi controllers and setting up the controller cluster. First, download the ova package for the controller appliance; in this demo, the version of the Avi load balancer controller is v18.2.5. After the download, deploy the controller virtual appliance via the “Deploy OVF Template” wizard in the VMC SDDC vCenter. In the “Customize template” window, input the parameters as below (an Ansible-based alternative is sketched after the list):

  • Management interface IP: 192.168.80.4
  • Management interface Subnet mask: 255.255.255.0
  • Default gateway: 192.168.80.1
  • Sysadmin login authentication key: Password
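
As a side note, this deployment step can itself be scripted with Ansible's vmware_deploy_ovf module. The sketch below is illustrative only: the vCenter connection details, cluster, datastore, network mapping and file name are placeholders, and the OVA property keys (avi.mgmt-ip.CONTROLLER and friends) should be verified against the downloaded package, for example with ovftool.

---
- hosts: localhost
  connection: local
  tasks:
    - name: Deploy Avi Controller OVA (sketch; placeholders throughout)
      vmware_deploy_ovf:
        hostname: vcenter.sddc.example        # SDDC vCenter, placeholder
        username: cloudadmin@vmc.local        # placeholder
        password: Password
        validate_certs: false
        datacenter: SDDC-Datacenter
        cluster: Cluster-1
        datastore: WorkloadDatastore
        name: avi-controller-01
        ovf: ./controller.ova
        networks:
          Management: avi-mgmt                # OVF network name to port group, placeholder
        properties:
          avi.mgmt-ip.CONTROLLER: 192.168.80.4
          avi.mgmt-mask.CONTROLLER: 255.255.255.0
          avi.default-gw.CONTROLLER: 192.168.80.1
        power_on: true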

After the 1st controller appliance is deployed and powered on, we are ready to start the initial controller configuration. Go to the controller management GUI at https://192.168.80.4 and complete the setup wizard:

(1) Username/Password

(2) DNS and NTP

(3) SMTP

(4) Multiple Tenants: select No here for simplicity.

The initial configuration of the 1st controller is now complete. As the first controller of the cluster, it will take the “Leader” role; the second and third controllers will work as “Followers”. While logged in to the GUI of this first controller, go to Administration—>Controller, as shown below.

Similarly, deploy and perform the initial configuration for the 2nd (192.168.80.5) and 3rd (192.168.80.6) controllers.

In the management GUI of the 1st controller, go to Administration—>Controller and click “Edit”. In the “Edit Controller Configuration” window, add the second and third nodes into the cluster as below.

After a few minutes, the cluster is set up successfully.
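
Incidentally, cluster formation can be automated too: the avisdk role includes an avi_cluster module. Below is a minimal sketch; the cluster object name cluster-0-1 is an assumption (verify it with GET /api/cluster), and 192.168.80.3 is the cluster VIP we will point the SEs at later.

---
- hosts: localhost
  connection: local
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Form the 3-node controller cluster (sketch)
      avi_cluster:
        controller: 192.168.80.4      # talk to the leader node
        username: admin
        password: Password
        api_version: 18.2.5
        state: present
        name: cluster-0-1             # assumed default cluster object name
        virtual_ip:
          addr: 192.168.80.3          # cluster VIP
          type: V4
        nodes:
          - ip:
              addr: 192.168.80.4
              type: V4
          - ip:
              addr: 192.168.80.5
              type: V4
          - ip:
              addr: 192.168.80.6
              type: V4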

Section 2: Service Engine

Now we are ready to deploy the SE virtual appliances. In this demo, two SEs will be deployed. These 2 SEs are added into the default Service Engine Group with the default HA mode (N+M).

Step 1: Create and download the SE image.

Go to Infrastructure—>Clouds, click the download icon and select the ova format. Please note that this SE ova package works only with its linked controller cluster; it cannot be used with another controller cluster.

Step 2: Get the cluster UUID and authentication token for SE deployment.
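
The cluster UUID can also be pulled through the API rather than the GUI; the avisdk role ships a generic avi_api_session module for ad-hoc calls like this. A small sketch (the authentication token itself is generated from the controller GUI, so only the UUID lookup is shown):

    - name: Get the controller cluster UUID (sketch)
      avi_api_session:
        controller: 192.168.80.3
        username: admin
        password: Password
        api_version: 18.2.5
        http_method: get
        path: cluster
      register: cluster_info
    - name: Show the UUID
      debug:
        msg: "{{ cluster_info.obj.uuid }}"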

Step 3: In the SDDC vCenter, run the “Deploy OVF Template” wizard to import the SE ova package. In the “Customize template” window, input the parameters as below (an Ansible sketch follows the list):

  • IP Address of the Avi Controller: 192.168.80.3 (cluster IP of the controller)
  • Authentication token for Avi Controller: as obtained in Step 2
  • Controller Cluster UUID for Avi Controller: as obtained in Step 2
  • Management Interface IP Address: 192.168.80.10
  • Management Interface Subnet Mask: 255.255.255.0
  • Default Gateway: 192.168.80.1
  • DNS Information: 10.1.1.151
  • Sysadmin login authentication key: Password
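
The same vmware_deploy_ovf approach from Section 1 works for the SE ova as well; only the network mapping and the properties block change. The property keys below (AVICNTRL, AVICNTRL_AUTHTOKEN, AVICNTRL_CLUSTERUUID and the avi.*.SE keys) are my reading of the SE package and should be double-checked with ovftool before use.

    - name: Deploy Avi SE OVA (sketch; verify property keys against the ova)
      vmware_deploy_ovf:
        hostname: vcenter.sddc.example        # SDDC vCenter, placeholder
        username: cloudadmin@vmc.local        # placeholder
        password: Password
        validate_certs: false
        datacenter: SDDC-Datacenter
        cluster: Cluster-1
        datastore: WorkloadDatastore
        name: avi-se-01
        ovf: ./se.ova
        networks:
          Management: avi-mgmt                # placeholder port groups
          Data Network 1: avi-data
        properties:
          AVICNTRL: 192.168.80.3                      # cluster IP of the controller
          AVICNTRL_AUTHTOKEN: '{{ se_auth_token }}'   # token from Step 2
          AVICNTRL_CLUSTERUUID: '{{ cluster_uuid }}'  # UUID from Step 2
          avi.mgmt-ip.SE: 192.168.80.10
          avi.mgmt-mask.SE: 255.255.255.0
          avi.default-gw.SE: 192.168.80.1
          avi.DNS.SE: 10.1.1.151
        power_on: true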

Please note that the second vNIC will be used as the SE data interface.

Then continue to deploy the second SE (mgmt IP: 192.168.80.11/24).

The deployed SEs will register themselves with the controller cluster as shown below.

Step 4: Now the SEs have established control and management plane communication with the controller cluster. It is time to set up the SEs’ data network.

During the setup, I found that the vNICs of the virtual appliance VM and the SE Ethernet interfaces are not mapped one-to-one: for example, the data interface is the 2nd vNIC of the SE VM in vCenter, but it shows up as Ethernet 5 in the SE network setup. To find the correct mapping, we leverage the MAC address of the data vNIC. Go to the SDDC vCenter and note the MAC address of the SE data interface.

In the controller management GUI, go to Infrastructure—>Service Engine and edit the selected SE. In the interface list, select the interface with the matching MAC address, then provide the IP address and subnet mask.

The final step is to add a gateway for this data interface: go to Infrastructure—>Routing—>Static Route and create a new default static route (a scripted equivalent is sketched below).
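
This step also has an Ansible counterpart: static routes live on the VRF context object, which the avi_vrfcontext module manages. A hedged sketch, assuming the global VRF and a placeholder data-network gateway of 192.168.96.1 (substitute your own):

    - name: Add a default static route for the SE data network (sketch)
      avi_vrfcontext:
        controller: 192.168.80.3
        username: admin
        password: Password
        api_version: 18.2.5
        state: present
        name: global
        static_routes:
          - route_id: '1'
            prefix:
              ip_addr:
                addr: 0.0.0.0
                type: V4
              mask: 0
            next_hop:
              addr: 192.168.96.1      # placeholder gateway, replace with your data network gateway
              type: V4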

Tip: a VM-VM anti-affinity policy is highly recommended to enhance the HA of the controller and service engine virtual appliances.

This is the end of the blog. Thank you very much for reading!