Zero Code NSX Advanced LB Automation with Terraform

VMware NSX Advanced Load Balancer (Avi Networks) provides multi-cloud load balancing, web application firewall, application analytics and container ingress services across on-premises data centers and any cloud.

Terraform is a widely adopted Infrastructure as Code tool that lets you define your infrastructure in a simple, declarative language and deploy and manage it across public cloud providers including AWS, Azure and Google Cloud. NSX Advanced Load Balancer (also known as Avi load balancer) is fully supported by Terraform: each Avi REST resource is exposed as a Terraform resource. By using the Terraform Avi Provider, we can manage the load balancing service as Infrastructure as Code.

In this blog, I will show you how easy it is to build an LBaaS service (local load balancing plus global load balancing across two DCs) for a critical (99.99%+ SLA) web application on NSX Advanced Load Balancer with Terraform, in minutes.

My testing environment is set up as below:

  • Two DCs: site01 and site02
  • A controller cluster in each site
  • Two GSLB sites configured; site01 is the leader site
  • Terraform v0.12
  • NSX Advanced Load Balancer v18.2.9

The Terraform plan will create the following resources:

  • 5 web servers as pool members in each DC
  • Two local load balancing pools in each DC: the first 2 web servers are members of pool1 and the remaining 3 web servers are members of pool2
  • A pool group in each DC that includes the above 2 pools: pool1 is In Service and pool2 is Out of Service
  • A virtual service in each DC to provide local load balancing
  • An SSL profile in each DC to define how an SSL session is terminated on the NSX Advanced Load Balancer
  • An HTTP cookie-based persistence profile in each DC to offer web session persistence for local load balancing
  • A certificate and key for the web application HTTPS service
  • An HTTP health monitor in each DC to check the health of local load balancing pool members
  • A global load balancing PKI profile
  • A global load balancing health monitor
  • A global load balancing persistence profile
  • A global load balancing service

Also, a few outputs are defined to display the results of the Terraform plan.

You can access main.tf and variables.tf on GitHub here.
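
Before walking through main.tf, here is a minimal variables.tf sketch that declares every variable the plan references. It is not the exact file from the repository: the controller and web server defaults below are taken from the apply output shown later in this post, while the VIPs, GSLB site names and GSLB FQDN are placeholders to replace with your own values.

# variables.tf (sketch) - defaults are examples only
variable "site1controller" { default = "10.1.1.250" }   # site01 controller cluster
variable "site2controller" { default = "10.1.1.170" }   # site02 controller cluster

variable "site01_name" { default = "site01" }            # GSLB site names as defined on the leader
variable "site02_name" { default = "site02" }
variable "gslb_dns"    { default = "www.example.com" }   # FQDN served by the GSLB service

variable "gslb_site01_vs01_vip" { default = "192.168.101.100" }  # local VS VIP in site01
variable "gslb_site02_vs01_vip" { default = "192.168.202.100" }  # local VS VIP in site02

# Web server (pool member) addresses, five per DC
variable "avi_site01_server_web11" { default = "192.168.101.10" }
variable "avi_site01_server_web12" { default = "192.168.101.20" }
variable "avi_site01_server_web13" { default = "192.168.101.30" }
variable "avi_site01_server_web14" { default = "192.168.101.40" }
variable "avi_site01_server_web15" { default = "192.168.101.50" }
variable "avi_site02_server_web21" { default = "192.168.202.10" }
variable "avi_site02_server_web22" { default = "192.168.202.20" }
variable "avi_site02_server_web23" { default = "192.168.202.30" }
variable "avi_site02_server_web24" { default = "192.168.202.40" }
variable "avi_site02_server_web25" { default = "192.168.202.50" }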

# Optionally restrict the Avi provider to a tested release, e.g. 0.1.x (see the version-pinning sketch after the provider blocks)
provider "avi" {
  avi_username = "admin"
  avi_tenant = "admin"
  avi_password = "password"
  avi_controller= var.site1controller
}

provider "avi" {
  avi_username = "admin"
  avi_tenant = "admin"
  alias = "site02"
  avi_password = "password"
  avi_controller= var.site2controller
}
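
To keep the plan reproducible, it is also worth pinning the Terraform and Avi provider versions. Below is a minimal sketch using Terraform 0.12 syntax; the provider version constraint is illustrative, so adjust it to the Avi provider release you have actually tested.

terraform {
  required_version = ">= 0.12"
  # Illustrative constraint only; use the Avi provider release you have validated.
  required_providers {
    avi = "~> 0.1"
  }
}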

data "avi_tenant" "default_tenant" {
  name = "admin"
}

data "avi_cloud" "default_cloud" {
  name = "Default-Cloud"
}

data "avi_tenant" "site02_default_tenant" {
  provider = avi.site02
  name = "admin"
}

data "avi_cloud" "site02_default_cloud" {
  provider = avi.site02
  name = "Default-Cloud"
}

data "avi_serviceenginegroup" "se_group" {
  name      = "Default-Group"
  cloud_ref = data.avi_cloud.default_cloud.id
}

data "avi_gslb" "gslb_demo" {
  name = "Default"
}

data "avi_virtualservice" "site01_vs01" {
  name = "gslb_site01_vs01"
}

data "avi_virtualservice" "site02_vs01" {
  name = "gslb_site02_vs01"
}

data "avi_applicationprofile" "site01_system_https_profile" {
  name = "System-Secure-HTTP"
}

data "avi_applicationprofile" "site02_system_https_profile" {
  provider = avi.site02
  name = "System-Secure-HTTP"
}

### Start of Site01 setup
resource "avi_sslprofile" "site01_sslprofile" {
    name = "site01_sslprofile"
    ssl_session_timeout = 86400
    tenant_ref = data.avi_tenant.default_tenant.id
    accepted_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
    prefer_client_cipher_ordering = false
    enable_ssl_session_reuse = true
    accepted_versions {
      type = "SSL_VERSION_TLS1_1"
    }
    accepted_versions {
      type = "SSL_VERSION_TLS1_2"
    }
    cipher_enums = [
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
    send_close_notify = true
    type = "SSL_PROFILE_TYPE_APPLICATION"
    enable_early_data = false
    ssl_rating {
      compatibility_rating = "SSL_SCORE_EXCELLENT"
      security_score = 100.0
      performance_rating = "SSL_SCORE_EXCELLENT"
    }
  }

resource "avi_applicationpersistenceprofile" "site01_applicationpersistenceprofile" {
  name  = "site01_app-pers-profile"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = false
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_vsvip" "site01_vs01_vip" {
  name = "site01_vs01_vip"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = var.gslb_site01_vs01_vip
    }
  }
}

resource "avi_sslkeyandcertificate" "site01_cert1000" {
    name = "site01_cert1000"
    tenant_ref = data.avi_tenant.default_tenant.id
    certificate {
        certificate = file("${path.module}/www.sddc.vmconaws.link.crt")
        }
    key = file("${path.module}/www.sddc.vmconaws.link.key")
    type= "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
}

resource "avi_virtualservice" "gslb_site01_vs01" {
  name = "gslb_site01_vs01"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  pool_group_ref = avi_poolgroup.site01_pg-1.id
  vsvip_ref  = avi_vsvip.site01_vs01_vip.id
  application_profile_ref = data.avi_applicationprofile.site01_system_https_profile.id
  services {
        port = 443
        enable_ssl = true
        port_range_end = 443
        }
  cloud_type                   = "CLOUD_VCENTER"
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.site01_cert1000.id]
  ssl_profile_ref = avi_sslprofile.site01_sslprofile.id
}

resource "avi_healthmonitor" "site01_hm_1" {
  name = "site01_monitor"
  type = "HEALTH_MONITOR_HTTP"
  tenant_ref = data.avi_tenant.default_tenant.id
  receive_timeout = "4"
  is_federated = false
  failed_checks = "3"
  send_interval = "10"
  http_monitor {
        exact_http_request = false
        http_request = "HEAD / HTTP/1.0"
        http_response_code = ["HTTP_2XX","HTTP_3XX","HTTP_4XX"]
        }
  successful_checks = "3"
}

resource "avi_pool" "site01_pool-1" {
  name = "site01_pool-1"
  health_monitor_refs = [avi_healthmonitor.site01_hm_1.id]
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site01_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  lb_algorithm = "LB_ALGORITHM_LEAST_CONNECTIONS"
}

resource "avi_pool" "site01_pool-2" {
  name = "site01_pool-2"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref = data.avi_cloud.default_cloud.id
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site01_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  ignore_servers = true
}

resource "avi_poolgroup" "site01_pg-1" {
  name = "site01_pg-1"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref = data.avi_cloud.default_cloud.id
  members {
    pool_ref = avi_pool.site01_pool-1.id
    ratio = 100
    deployment_state = "IN_SERVICE"
  }
  members {
    pool_ref = avi_pool.site01_pool-2.id
    ratio = 0
    deployment_state = "OUT_OF_SERVICE"
  }
}

resource "avi_server" "site01_server_web11" {
  ip       = var.avi_site01_server_web11
  port     = "80"
  pool_ref = avi_pool.site01_pool-1.id
  hostname = "server_web11"
}

resource "avi_server" "site01_server_web12" {
  ip       = var.avi_site01_server_web12
  port     = "80"
  pool_ref = avi_pool.site01_pool-1.id
  hostname = "server_web12"
}

resource "avi_server" "site01_server_web13" {
  ip       = var.avi_site01_server_web13
  port     = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_webv13"
}

resource "avi_server" "site01_server_web14" {
  ip       = var.avi_site01_server_web14
  port     = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web14"
}

resource "avi_server" "site01_server_web15" {
  ip = var.avi_site01_server_web15
  port = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web15"
}

### End of Site01 setup ###
### Start of Site02 setup ###
resource "avi_sslprofile" "site02_sslprofile" {
    provider = avi.site02
    name = "site02_sslprofile"
    ssl_session_timeout = 86400
    tenant_ref = data.avi_tenant.site02_default_tenant.id
    accepted_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
    prefer_client_cipher_ordering = false
    enable_ssl_session_reuse = true
    accepted_versions {
      type = "SSL_VERSION_TLS1_1"
    }
    accepted_versions {
      type = "SSL_VERSION_TLS1_2"
    }
    cipher_enums = [
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
    send_close_notify = true
    type = "SSL_PROFILE_TYPE_APPLICATION"
    enable_early_data = false
    ssl_rating {
      compatibility_rating = "SSL_SCORE_EXCELLENT"
      security_score = 100.0
      performance_rating = "SSL_SCORE_EXCELLENT"
    }
  }


resource "avi_applicationpersistenceprofile" "site02_applicationpersistenceprofile" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name  = "site02_app-pers-profile"
  is_federated = false
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_vsvip" "site02_vs01_vip" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_vs01_vip"
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = var.gslb_site02_vs01_vip
    }
  }
}

resource "avi_sslkeyandcertificate" "site02_cert1000" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_cert1000"
  certificate {
      certificate = file("${path.module}/www.sddc.vmconaws.link.crt")
      }
  key = file("${path.module}/www.sddc.vmconaws.link.key")
  type= "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
}

resource "avi_virtualservice" "gslb_site02_vs01" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "gslb_site02_vs01"
  pool_group_ref = avi_poolgroup.site02_pg-1.id
  vsvip_ref  = avi_vsvip.site02_vs01_vip.id
  application_profile_ref = data.avi_applicationprofile.site02_system_https_profile.id
  services {
        port = 443
        enable_ssl = true
        port_range_end = 443
        }
  cloud_type = "CLOUD_VCENTER"
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.site02_cert1000.id]
  ssl_profile_ref = avi_sslprofile.site02_sslprofile.id
}

resource "avi_healthmonitor" "site02_hm_1" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_monitor"
  type  = "HEALTH_MONITOR_HTTP"
  receive_timeout = "4"
  is_federated = false
  failed_checks = "3"
  send_interval = "10"
  http_monitor {
        exact_http_request = false
        http_request = "HEAD / HTTP/1.0"
        http_response_code = ["HTTP_2XX","HTTP_3XX","HTTP_4XX"]
        }
  successful_checks = "3"
}

resource "avi_pool" "site02_pool-1" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pool-1"
  health_monitor_refs = [avi_healthmonitor.site02_hm_1.id]
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site02_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  lb_algorithm = "LB_ALGORITHM_LEAST_CONNECTIONS"
}

resource "avi_pool" "site02_pool-2" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pool-2"
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site02_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  ignore_servers = true
}

resource "avi_poolgroup" "site02_pg-1" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pg-1"
  members {
    pool_ref = avi_pool.site02_pool-1.id
    ratio = 100
    deployment_state = "IN_SERVICE"
  }
  members {
    pool_ref = avi_pool.site02_pool-2.id
    ratio = 0
    deployment_state = "OUT_OF_SERVICE"
  }
}

resource "avi_server" "site02_server_web21" {
  provider = avi.site02
  ip = var.avi_site02_server_web21
  port = "80"
  pool_ref = avi_pool.site02_pool-1.id
  hostname = "serverp_web21"
}

resource "avi_server" "site02_server_web22" {
  provider = avi.site02
  ip = var.avi_site02_server_web22
  port = "80"
  pool_ref = avi_pool.site02_pool-1.id
  hostname = "server_web22"
}


resource "avi_server" "site02_server_web23" {
  provider = avi.site02
  ip = var.avi_site02_server_web23
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web23"
}

resource "avi_server" "site02_server_web24" {
  provider = avi.site02
  ip = var.avi_site02_server_web24
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web24"
}

resource "avi_server" "site02_server_web25" {
  provider = avi.site02
  ip = var.avi_site02_server_web25
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web25"
}

### END of Site02 Setting ###

### Start of GSLB setup ###

# A federated PKI profile only needs to be created on one site/DC; GSLB replicates it to the other sites
resource "avi_pkiprofile" "terraform_gslb_pki" {
    name = "terraform_gslb_pki"
    tenant_ref = data.avi_tenant.default_tenant.id
    crl_check = false
    is_federated = true
    ignore_peer_chain = false
    validate_only_leaf_crl = true
    ca_certs {
      certificate = file("${path.module}/ca-bundle.crt")
    }
}

resource "avi_applicationpersistenceprofile" "terraform_gslbsite_pesistence" {
  name = "terraform_gslbsite_pesistence"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = true
  persistence_type = "PERSISTENCE_TYPE_GSLB_SITE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_healthmonitor" "terraform_gslbsite_hm01" {
  name = "terraform_gslbsite_hm01"
  type = "HEALTH_MONITOR_PING"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = true
  failed_checks = "3"
  send_interval = "10"
  successful_checks = "3"
}

resource "avi_gslbservice" "terraform_gslb-01" {
  name = "terraform_gslb-01"
  tenant_ref = data.avi_tenant.default_tenant.id
  domain_names = [var.gslb_dns]
  depends_on = [
    avi_pkiprofile.terraform_gslb_pki
  ]
  wildcard_match = false
  application_persistence_profile_ref = avi_applicationpersistenceprofile.terraform_gslbsite_pesistence.id
  health_monitor_refs = [avi_healthmonitor.terraform_gslbsite_hm01.id]
  site_persistence_enabled = true
  is_federated = false
  use_edns_client_subnet= true
  enabled = true
  groups { 
      priority = 10
      consistent_hash_mask=31
      consistent_hash_mask6=31
      members {
        ip {
           type = "V4"
           addr = var.gslb_site01_vs01_vip
        }
        vs_uuid = avi_virtualservice.gslb_site01_vs01.uuid
        cluster_uuid = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, index(data.avi_gslb.gslb_demo.sites.*.name,var.site01_name))
        ratio = 1
        enabled = true
      }
     members {
        ip {
           type = "V4"
           addr = var.gslb_site02_vs01_vip
        }
        vs_uuid = avi_virtualservice.gslb_site02_vs01.uuid
        cluster_uuid = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, index(data.avi_gslb.gslb_demo.sites.*.name,var.site02_name))
        ratio = 1
        enabled = true
      }
      name = "${var.gslb_dns}-pool"
      algorithm = "GSLB_ALGORITHM_ROUND_ROBIN"      
    }
}
### Output ###
output "gslb-site01_site_number" {
  value = "${index(data.avi_gslb.gslb_demo.sites.*.name,var.site01_name)}"
  description = "gslb-site01_site_number"
}

output "gslb-site02_site_number" {
  value = "${index(data.avi_gslb.gslb_demo.sites.*.name,var.site02_name)}"
  description = "gslb-site02_site_number"
}

output "gslb_site01" {
  value = "${element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid,0)}"
  description = "gslb_site01"
}

output "gslb_site02" {
  value = "${element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid,1)}"
  description = "gslb_site02"
}

output "gslb_service" {
  value = avi_gslbservice.terraform_gslb-01.groups
  description = "gslb_service"
}

output "site01_vs01" {
  value = avi_virtualservice.gslb_site01_vs01
  description = "site01_vs01"
}

output "site02_vs01" {
  value = avi_virtualservice.gslb_site02_vs01
  description = "site02_vs01"
}

Let’s apply the plan and then we can take it easy and enjoy the day.

zhangda@zhangda-a01 automation % terraform apply --auto-approve
data.avi_virtualservice.site01_vs01: Refreshing state...
data.avi_tenant.site02_default_tenant: Refreshing state...
data.avi_gslb.gslb_demo: Refreshing state...
data.avi_virtualservice.site02_vs01: Refreshing state...
data.avi_cloud.site02_default_cloud: Refreshing state...
data.avi_tenant.default_tenant: Refreshing state...
data.avi_cloud.default_cloud: Refreshing state...
data.avi_applicationprofile.site02_system_https_profile: Refreshing state...
data.avi_applicationprofile.site01_system_https_profile: Refreshing state...
data.avi_serviceenginegroup.se_group: Refreshing state...
avi_applicationpersistenceprofile.site02_applicationpersistenceprofile: Creating...
avi_healthmonitor.site02_hm_1: Creating...
avi_sslkeyandcertificate.site02_cert1000: Creating...
avi_vsvip.site02_vs01_vip: Creating...
avi_sslprofile.site02_sslprofile: Creating...
avi_applicationpersistenceprofile.terraform_gslbsite_pesistence: Creating...
avi_healthmonitor.site01_hm_1: Creating...
avi_healthmonitor.terraform_gslbsite_hm01: Creating...
avi_vsvip.site01_vs01_vip: Creating...
avi_pkiprofile.terraform_gslb_pki: Creating...
avi_healthmonitor.site02_hm_1: Creation complete after 1s [id=https://10.1.1.170/api/healthmonitor/healthmonitor-f05a117d-93fe-4a35-b442-391bc815ff8d]
avi_sslprofile.site01_sslprofile: Creating...
avi_applicationpersistenceprofile.site02_applicationpersistenceprofile: Creation complete after 1s [id=https://10.1.1.170/api/applicationpersistenceprofile/applicationpersistenceprofile-2cd82839-0b86-4a25-a212-694c3b8b41b9]
avi_applicationpersistenceprofile.site01_applicationpersistenceprofile: Creating...
avi_sslprofile.site02_sslprofile: Creation complete after 2s [id=https://10.1.1.170/api/sslprofile/sslprofile-fa44f77c-dfe0-494a-902b-e724980d139e]
avi_sslkeyandcertificate.site01_cert1000: Creating...
avi_vsvip.site02_vs01_vip: Creation complete after 2s [id=https://10.1.1.170/api/vsvip/vsvip-2391e848-1b49-4383-ab7a-b2829c6c5406]
avi_pool.site02_pool-1: Creating...
avi_sslkeyandcertificate.site02_cert1000: Creation complete after 2s [id=https://10.1.1.170/api/sslkeyandcertificate/sslkeyandcertificate-90baec49-afa0-4ef3-974d-7357fef77e0d]
avi_pool.site02_pool-2: Creating...
avi_applicationpersistenceprofile.site01_applicationpersistenceprofile: Creation complete after 1s [id=https://10.1.1.250/api/applicationpersistenceprofile/applicationpersistenceprofile-f45f0852-2515-4528-ae65-c48a670ca7ac]
avi_pool.site01_pool-2: Creating...
avi_pool.site02_pool-1: Creation complete after 0s [id=https://10.1.1.170/api/pool/pool-859248df-8ea6-4a00-a8ea-976cc31175a9]
avi_server.site02_server_web21: Creating...
avi_applicationpersistenceprofile.terraform_gslbsite_pesistence: Creation complete after 3s [id=https://10.1.1.250/api/applicationpersistenceprofile/applicationpersistenceprofile-cf887192-0d57-4b91-a7cb-37d787f9aeb2]
avi_server.site02_server_web22: Creating...
avi_sslprofile.site01_sslprofile: Creation complete after 2s [id=https://10.1.1.250/api/sslprofile/sslprofile-1464ded3-7a10-4e76-bfc3-0cdb186ff248]
avi_server.site02_server_web22: Creation complete after 0s [id=pool-859248df-8ea6-4a00-a8ea-976cc31175a9:192.168.202.20:80]
avi_healthmonitor.terraform_gslbsite_hm01: Creation complete after 4s [id=https://10.1.1.250/api/healthmonitor/healthmonitor-003f5015-2a2a-4e65-aff3-1071365a8428]
avi_healthmonitor.site01_hm_1: Creation complete after 4s [id=https://10.1.1.250/api/healthmonitor/healthmonitor-dacd7a40-dc90-4e67-932f-34e94a550fb8]
avi_pool.site01_pool-1: Creating...
avi_vsvip.site01_vs01_vip: Creation complete after 4s [id=https://10.1.1.250/api/vsvip/vsvip-16b0ba87-2703-4fb2-abab-9a8b0bf34ae0]
avi_pool.site02_pool-2: Creation complete after 2s [id=https://10.1.1.170/api/pool/pool-9ca21978-59d5-455f-ba78-01fb9c747b43]
avi_pool.site01_pool-2: Creation complete after 2s [id=https://10.1.1.250/api/pool/pool-47d64222-46b7-4402-ae38-afd47f3f5272]
avi_server.site02_server_web24: Creating...
avi_server.site02_server_web25: Creating...
avi_server.site02_server_web23: Creating...
avi_pool.site01_pool-1: Creation complete after 0s [id=https://10.1.1.250/api/pool/pool-e3c37b13-0950-4320-a643-afa5d3177624]
avi_poolgroup.site02_pg-1: Creating...
avi_server.site01_server_web14: Creating...
avi_server.site01_server_web15: Creating...
avi_server.site01_server_web13: Creating...
avi_poolgroup.site02_pg-1: Creation complete after 1s [id=https://10.1.1.170/api/poolgroup/poolgroup-4197b0b4-d486-455e-8583-bff1fc173fb8]
avi_server.site02_server_web23: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.30:80]
avi_poolgroup.site01_pg-1: Creating...
avi_server.site01_server_web11: Creating...
avi_server.site02_server_web21: Creation complete after 3s [id=pool-859248df-8ea6-4a00-a8ea-976cc31175a9:192.168.202.10:80]
avi_server.site01_server_web12: Creating...
avi_server.site02_server_web25: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.50:80]
avi_virtualservice.gslb_site02_vs01: Creating...
avi_server.site01_server_web13: Creation complete after 1s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.30:80]
avi_server.site02_server_web24: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.40:80]
avi_server.site01_server_web14: Creation complete after 1s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.40:80]
avi_sslkeyandcertificate.site01_cert1000: Creation complete after 3s [id=https://10.1.1.250/api/sslkeyandcertificate/sslkeyandcertificate-1963b9c2-7402-4d32-88f7-b8b57d7bf1e5]
avi_virtualservice.gslb_site02_vs01: Creation complete after 0s [id=https://10.1.1.170/api/virtualservice/virtualservice-310ba2ed-f48f-4a0d-a29e-71a2b9dd2567]
avi_poolgroup.site01_pg-1: Creation complete after 0s [id=https://10.1.1.250/api/poolgroup/poolgroup-21284b51-1f7d-41e3-83c3-078800fdea1d]
avi_virtualservice.gslb_site01_vs01: Creating...
avi_server.site01_server_web15: Creation complete after 2s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.50:80]
avi_server.site01_server_web11: Creation complete after 1s [id=pool-e3c37b13-0950-4320-a643-afa5d3177624:192.168.101.10:80]
avi_server.site01_server_web12: Creation complete after 1s [id=pool-e3c37b13-0950-4320-a643-afa5d3177624:192.168.101.20:80]
avi_virtualservice.gslb_site01_vs01: Creation complete after 1s [id=https://10.1.1.250/api/virtualservice/virtualservice-fbecfed3-2397-4df8-9b76-659f50fcc5f8]
avi_pkiprofile.terraform_gslb_pki: Still creating... [10s elapsed]
avi_pkiprofile.terraform_gslb_pki: Creation complete after 11s [id=https://10.1.1.250/api/pkiprofile/pkiprofile-4333ded8-6ec5-43d0-a677-d68a632bc523]
avi_gslbservice.terraform_gslb-01: Creating...
avi_gslbservice.terraform_gslb-01: Creation complete after 2s [id=https://10.1.1.250/api/gslbservice/gslbservice-38f887ef-87ed-446d-a66f-83d42da39289]

Apply complete! Resources: 32 added, 0 changed, 0 destroyed.

This is the end of this blog. Thank you for reading!😀

Setting Up L2VPN in VMC on AWS

In VMC on AWS SDDC, you can extend your on-premise network to VMC SDDC via HCX or L2VPN.

In this blog, I will show you how to set up L2VPN in VMC on AWS to extend network VLAN 100 to SDDC.

This blog is for a VMC SDDC running version 1.9, which is backed by NSX-T 2.5. The SDDC end will work as the L2VPN server and your on-premises NSX autonomous edge will work as the L2VPN client.

Prerequisite

  • UDP 500/4500 and ESP (IP protocol 50) are allowed from the on-premises L2VPN client to the VMC SDDC L2VPN server

Let’s start the setup from the VMC SDDC end.

Section 1: Set up L2VPN at VMC SDDC End

Step 1: Log in to your VMC Console, go to Networking & Security—>Network—>VPN—>Layer 2 and click “ADD VPN TUNNEL”.

Select Public IP from the Local IP Address drop-down and input the public IP of the L2VPN’s remote end. As the on-premises NSX Edge is behind a NAT device, the remote private IP is also required; in my case, the remote private IP is 10.1.1.240.

Step 2: Create an extended network.

Go to Network—>Segment and add a new segment as below.

  • Segment Name: l2vpn;
  • Connectivity: Extended;
  • VPN Tunnel ID: 100 (please note that the tunnel ID needs to match the on-prem tunnel ID)

After the network segment is created, you will see the following under Layer 2 VPN.

Now we can begin to download the AUTONOMOUS EDGE from the highlighted hyperlink above.

While the file is downloading, we can also download the peer code, which will be used for authentication between the on-premises L2VPN client and the SDDC L2VPN server.

The downloaded config is similar to below:

[{"transport_tunnel_path":"/infra/tier-0s/vmc/locale-services/default/ipsec-vpn-services/default/sessions/7998a0c0-52b7-11ea-b949-d95049696f90","peer_code":"MCxiNmY2NTg1LHsic2l0ZU5hbWUiOiJMMlZQTiIsInNyY1RhcElwIjoiMTY5LjI1NC4yMC4yIiwiZHxxxxxxxxxxxxxxxxxGgxNCIsImVuY3J5cHRBbmREaWdlc3QiOiJhZXMtZ2NtL3NoYS0yNTYiLCJwc2siOiJOb25lIiwidHVubmVscyI6W3sibG9jYWxJZCI6IjEwLjEuMS4yNDAiLCJwZWVySWQiOiI1Mi4zMy4xMjAuMTk4IiwibG9jYWxWdGlJcCI6IjE2OS4yNTQuMzEuMjU0LzMwIn1dfQ=="}]

Section 2: Deploy and Setup On-premise NSX autonomous edge

Step 1: Prepare Port Groups.

Create 4 port-groups for NSX autonomous Edge.

  • pg-uplink (no vlan tagging)
  • pg-mgmt
  • pg-trunk01 (trunk)
  • pg-ha

We need to change the trunk port-group pg-trunk01 security setting to accept promiscuous mode and forged transmits. This is required for L2VPN.

Step 2: Deploy NSX Autonomous Edge

Follow the standard process to deploy an OVF template from vCenter. In the “Select Network” step of the “Deploy OVF Template” wizard, map the right port group to each network. Please note that Network 0 is always the management network port for the NSX autonomous edge. To keep things simple, I deployed only a single edge here.

The table below shows the interface/network/adapter mapping relationship in different systems/UI under my setup.

Edge CLI  | Edge VM vNIC      | OVF Template | Edge GUI   | Purpose
eth0      | Network Adapter 1 | Network 0    | Management | Management
fp-eth0   | Network Adapter 2 | Network 1    | eth1       | Uplink
fp-eth1   | Network Adapter 3 | Network 2    | eth2       | Trunk
fp-eth2   | Network Adapter 4 | Network 3    | eth3       | HA

In the “Customize template” section, provide the password for the root, admin and auditor.

Input the hostname (l2vpnclient), management IP (10.1.1.241), gateway (10.1.1.1) and network mask (255.255.255.0).

Input DNS and NTP settings:

Provide the input for the external port:

  • Port: 0,eth1,10.1.1.240,24.
    • VLAN 0 means no VLAN tagging for this port.
    • eth1 means that the external port will be attached to eth1 which is network 1/pg-uplink port group.
    • IP address: 10.1.1.240
    • Prefix length: 24

There is no need to set up the internal port for this autonomous edge deployment, so I left it blank.

Step 3: Autonomous Edge Setup

After the edge is deployed and powered on, you can log in to the edge UI via https://10.1.1.241.

Go to L2VPN and add a L2VPN session, input the Local IP (10.1.1.240), Remote IP (SDDC public IP) and Peer Code which I got from the downloaded config in section 1.

Go to Port and add port:

  • Port Name: vlan100
  • Subnet: leave as blank
  • VLAN: 100
  • Exit Interface: eth2 (Note: eth2 is connected to the port-group pg-trunk01).

Then go back to L2VPN and attach the newly created port VLAN100 to the L2VPN session as below. Please note that the Tunnel ID is 100, which is the same tunnel ID as the SDDC end.

After the port is attached successfully, we will see something similar to below.

This is the end of this blog. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Azure AD

The Federated Identity feature of VMware Cloud on AWS can be integrated with Microsoft Azure AD as well. In this integration model, the customer-dedicated vIDM tenant will work as the SAML Service Provider and Azure AD will work as the IdP.

Disclaimer:

The Azure AD settings in this blog are to demo the integration with vIDM; they may not be best practice for your environment or meet your business and security requirements.

Note: please complete the vIDM connector installation and the vIDM tenant basic setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before continuing.

The AD domain name for this blog is davidwzhang.com.

Prerequisite

  • The on-prem domain has been added as a custom domain under your default Azure AD.
  • Azure AD PREMIUM (P1 or P2) feature is enabled.
  • On-prem AD users/groups are synced with Azure AD.

Step 1: Add Azure AD as an IdP

First, login to your Azure Portal https://portal.azure.com and select Azure Active Directory.

Find “Enterprise Applications” in the list under my default Azure Active Directory (vmconaws) and then create a “New Application”. In the “Add your own app” window, select “Non-gallery application” and input the application name “csp-vidm” and click the “Add” button as below.

In the popped up “Enterprise Application” window, select “Set up single sign-on” under “Getting Started”.

In the pop-up “Select a single sign-on method” window, select SAML.

Next, click the Download hyperlink to download the Azure AD federation metadata XML file or copy the App Federation Metadata URL.

Now we have to pause here. You may have noticed that we still have two pending SAML configuration items: Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL). We will come back to complete these items once we get the required SAML Service Provider metadata from the vIDM tenant.

Step 2: Create an Identity Provider in vIDM Tenant

Go to the vIDM tenant administrator console and click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Input the parameters as below:

  • Identity Provider Name: AzureAD
  • SAML AuthN Request Binding: HTTP Redirect

In the SAML Metadata section, copy the content of the downloaded Azure AD federation metadata XML file into the Identity Provider Metadata (URL or XML) text box, then click the “Process IdP Metadata” button.

Next, add two Name ID format mappings as below:

  • urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress = emails
  • urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified = userName

Then, from the “Name ID policy in SAML request (Optional)” dropdown, select “urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress”.

The rest of the settings are as below:

  • Just-in-Time User Provisioning: Disabled
  • Users: davidwzhang.com
  • Network: ALL Ranges
  • Authentication Methods
    • Authentication Methods: AzureADPassword
    • SAML Context: urn:oasis:names:tc:SAML:2.0:ac:classes:Password 
  • Single Sign-Out Configuration: Disabled

Then click the “Service Provider (SP) Metadata” hyperlink to open the SAML Service Provider metadata in a browser, where you can find the Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) that are required to complete the Azure AD IdP SAML setup.

Now, click the “Save” button to save the IdP setup.

Step 3: Add SAML Information for Azure AD

Go back to Azure Portal and continue the Single Sign-on SAML setup.

Edit the basic SAML configuration.

Copy the Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) which we got in the last step, to the corresponding text box and save the configuration.

Next, add user groups which are granted access to the VMware Cloud service console to this newly created application. Here two groups (sddc-admins and sddc-operators) are added.

Now we have finished the configuration of Azure AD IdP.

Step 4: Update Authentication Policy

Update the vIDM tenant’s default access policy to include AzureADPassword as the first authentication method.

Now, when you try to log into the VMware Cloud service console with your AD account, your login session will be redirected to the Azure AD as below, which authenticates your session.

Setting Up Federated Identity Management for VMC on AWS – Authentication with ADFS

The Federated Identity feature of VMware Cloud on AWS can be integrated with Microsoft Active Directory Federation Services (ADFS). In this integration model, the customer-dedicated vIDM tenant will work as the SAML Service Provider and ADFS will work as the IdP.

Disclaimer:

The ADFS settings in this blog are to demo the integration with vIDM; they may not be best practice for your environment or meet your business and security requirements.

Note: please complete the vIDM connector installation and the vIDM tenant basic setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before continuing.

The AD domain name for this blog is davidwzhang.com.

The ADFS is installed on a Windows 2016 standard server. The DNS name for ADFS service is adfs1.davidwzhang.com.

Step 1: Get the ADFS Metadata Information.

The URL to this ADFS metadata is https://adfs1.davidwzhang.com/FederationMetadata/2007-06/FederationMetadata.xml. Click the URL to download the metadata XML file.

Step 2: Create an Identity Provider in vIDM Tenant

Go to the vIDM tenant administrator console and click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Input the parameters as below:

  • Identity Provider Name: ADFS01
  • SAML AuthN Request Binding: HTTP Redirect

In the SAML Metadata section, copy the content of the FederationMetadata.xml file into the Identity Provider Metadata (URL or XML) text box, then click the “Process IdP Metadata” button. Three entries will appear in the Name ID Format section.

Now, click the green plus icon to add a new mapping as the below:

urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified – userName.

Then from the dropdown of the “Name ID policy in SAML request (Optional)”, select “urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified”

The rest of the settings are as below:

  • Just-in-Time User Provisioning: Disabled
  • Users: davidwzhang.com
  • Network: ALL Ranges
  • Authentication Methods
    • Authentication Methods: ADFS Password
    • SAML Context: urn:oasis:names:tc:SAML:2.0:ac:classes:Password 
  • Single Sign-Out Configuration: Disabled

Then click the hyperlink of “Service Provider (SP) Metadata” to download the Service Provider metadata and upload it to the ADFS server.

Step 3: Update Authentication Policy

Update the vIDM tenant’s default access policy to include ADFS as the first authentication method.

Step 4: Configure ADFS Relying Party

Now it is time to set up ADFS. First, start the ADFS Management tool from Server Manager and add a Relying Party Trust as below.

In the welcome screen of the “Add Relying Party Trust” wizard, select “Claims aware” and click the Start button.

Select the uploaded sp.xml file in step 2 to import data about the relying party and then click the Next button.

Input “vmccsa-demo” as the relying party display name, then click the Next button.

Choose “Permit everyone” as the Access Control Policy and click the Next button. Note: please select different access control policies for your use case.

Click the Next button.

Now, click the Close button to finish the wizard.

Step 5: Create ADFS Relying Party Claim Rules

Select the newly created relying party, right-click and select “Edit Claim Issuance Policy…”.

Click the “Add Rule…” button to add a new Transform rule.

In the “Choose Rule Type” step of the “Add Transform Claim Rule” wizard, select “Select LDAP Attributes as Claims” and click the Next button.

Input the following parameters:

  • Claim rule name: Get Attribute Email Address
  • Attribute store: Active Directory
  • Mapping of LDAP attributes to outgoing claim types:
    • LDAP Attribute: E-Mail-Addresses
    • Outgoing Claim Type: E-Mail Addresses

Click the Finish button. Then add the second new rule. The new rule will use “Send Claims Using a Custom Rule” as its rule template.

Next, input the name and custom rule:

The content of the “Custom rule” is as below. Please replace “xxxxx.vidmreview.com” with your own vIDM tenant URL.

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/spnamequalifier"] = "xxxxx.vidmreview.com");

Now, when you try to log into the VMware Cloud service console with your AD account, your login session will be redirected to the ADFS as below, which authenticates your session.

Automate Avi LB Service with Ansible

The Avi Networks load balancing platform offers fantastic automation capabilities, which allow us to automate the load balancing service via popular Infrastructure as Code tools like Ansible and Terraform. In this blog, I will demonstrate Day 1 automation using Ansible (version 2.8.5).

[root@code1 ~]# ansible --version
ansible 2.8.5
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

The link below lists all available Ansible modules for Avi LB automation:

https://docs.ansible.com/ansible/latest/modules/list_of_network_modules.html#avi

Avi has developed an Ansible role, avinetworks.avisdk, which packages all the Avi Ansible modules and eases our lives further. The role relies on the Avi Python SDK, so install the SDK via pip and the role from Ansible Galaxy:

pip install avisdk
ansible-galaxy install avinetworks.avisdk

In this blog, we will automate the local load balancing configuration which we configured manually in my other blog: Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2.

In summary, we are going to:

  • Create an HTTP Health Monitor (sddc01-vs02-hm01).
  • Create an Application Persistence profile (sddc01-vs02-persistence01) based on HTTP cookie.
  • Create a local load balancing pool (sddc01-vs02-pool01) with 2 pool members. The pool member health status will be checked with the newly created health monitor and the session persistence associated with this pool is based on the cookie persistence profile which we defined in the previous step.
  • Create an HTTP application profile (sddc01-vs02-profile-http01) which enables compression and HTTP-to-HTTPS redirect.
  • Create an SSL profile (sddc01-vs02-profile-ssl01) which only allows stronger ciphers and TLS 1.1 and 1.2.
  • Create a local load balancing service which leverages the newly created HTTP application profile and SSL profile to distribute the traffic to the load balancing pool sddc01-vs02-pool01.

The completed Ansible playbook is as below:

---
- hosts: localhost
  connection: local
  vars:
    controller: 192.168.80.3
    username: admin
    password: Password
    api_version: 18.2.5
    vs_name: sddc01-vs02
    vs_vip: 192.168.96.110
    vs_serviceport01: 443
    vs_serviceport02: 80
    pool_name: sddc01-vs02-pool01
    pool_member01: 192.168.96.25
    pool_member01_hostname: centos01
    pool_member02: 192.168.96.26
    pool_member02_hostname: centos02
    httpprofile_name: sddc01-vs02-profile-http01
    healthmonitor_name: sddc01-vs02-hm01
    cookie_name: sddc01-vs02-cookie01
    persistence_name: sddc01-vs02-persistence01
    certificate_name: www.sddc.vmconaws.link
    sslprofile_name: sddc01-vs02-profile-ssl01
  roles:
    - avinetworks.avisdk
  tasks:
    - name: Create HTTP Health Monitor
      avi_healthmonitor:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{api_version}}"
        state: present
        name: "{{ healthmonitor_name }}"
        http_monitor:
          http_request: 'HEAD / HTTP/1.0'
          http_response_code:
            - HTTP_2XX
            - HTTP_3XX
        receive_timeout: 4
        failed_checks: 3
        send_interval: 10
        successful_checks: 3
        is_federated: false
        type: HEALTH_MONITOR_HTTP
    - name: Create an Application Persistence setting using http cookie
      avi_applicationpersistenceprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{api_version}}"
        http_cookie_persistence_profile:
          always_send_cookie: false
          cookie_name: "{{ cookie_name }}"
          timeout: 15
        name: "{{ persistence_name }}"
        persistence_type: PERSISTENCE_TYPE_HTTP_COOKIE
        server_hm_down_recovery: HM_DOWN_PICK_NEW_SERVER
    - name: Create local load balancing pool
      avi_pool:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ pool_name }}"
        state: present
        application_persistence_profile_ref: '/api/applicationpersistenceprofile?name={{ persistence_name }}'
        health_monitor_refs:
          - '/api/healthmonitor?name={{ healthmonitor_name }}'
        lb_algorithm: LB_ALGORITHM_LEAST_CONNECTIONS
        servers:
          - ip:
              addr: "{{ pool_member01 }}"
              type: V4
            hostname: "{{ pool_member01_hostname }}"
          - ip:
              addr: "{{ pool_member02 }}"
              type: V4
            hostname: "{{ pool_member02_hostname }}"
    - name: Create an HTTP application profile
      avi_applicationprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        state: present
        http_profile:
          compression_profile:
            compressible_content_ref: '/api/stringgroup?name=System-Compressible-Content-Types'
            compression: true 
            remove_accept_encoding_header: true 
            type: AUTO_COMPRESSION
          connection_multiplexing_enabled: true 
          disable_keepalive_posts_msie6: true 
          disable_sni_hostname_check: false 
          enable_fire_and_forget: false 
          enable_request_body_buffering: false 
          enable_request_body_metrics: false 
          fwd_close_hdr_for_bound_connections: true 
          hsts_enabled: false 
          hsts_max_age: 365 
          hsts_subdomains_enabled: true 
          http2_enabled: false 
          http_to_https: true 
          httponly_enabled: false 
          keepalive_header: false 
          keepalive_timeout: 40000 
          max_bad_rps_cip: 0 
          max_bad_rps_cip_uri: 0 
          max_bad_rps_uri: 0 
          max_keepalive_requests: 100 
          max_response_headers_size: 48 
          max_rps_cip: 0 
          max_rps_cip_uri: 0 
          max_rps_unknown_cip: 0 
          max_rps_unknown_uri: 0 
          max_rps_uri: 0 
          post_accept_timeout: 30000 
          respond_with_100_continue: true 
          secure_cookie_enabled: false 
          server_side_redirect_to_https: false 
          spdy_enabled: false 
          spdy_fwd_proxy_mode: false 
          ssl_client_certificate_mode: SSL_CLIENT_CERTIFICATE_NONE 
          ssl_everywhere_enabled: false 
          use_app_keepalive_timeout: false 
          websockets_enabled: true 
          x_forwarded_proto_enabled: false 
          xff_alternate_name: X-Forwarded-For 
          xff_enabled: true
        name: "{{ httpprofile_name }}"
        type: APPLICATION_PROFILE_TYPE_HTTP
    - name: Create SSL profile with list of allow ciphers and TLS version
      avi_sslprofile:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ sslprofile_name }}"
        accepted_ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA"
        accepted_versions:
          - type: SSL_VERSION_TLS1_1
          - type: SSL_VERSION_TLS1_2
        send_close_notify: true
        ssl_rating:
          compatibility_rating: SSL_SCORE_EXCELLENT
          performance_rating: SSL_SCORE_EXCELLENT
          security_score: '100.0'
    - name: Create a virtual service
      avi_virtualservice:
        controller: "{{ controller }}"
        username: "{{ username }}"
        password: "{{ password }}"
        api_version: "{{ api_version }}"
        name: "{{ vs_name }}"
        state: present
        performance_limits:
          max_concurrent_connections: 1000
        ssl_profile_ref: '/api/sslprofile?name={{ sslprofile_name }}'
        application_profile_ref: '/api/applicationprofile?name={{ httpprofile_name }}'
        ssl_key_and_certificate_refs:
          - '/api/sslkeyandcertificate?name={{ certificate_name }}'
        vip:
          - ip_address:
              addr: "{{ vs_vip }}"
              type: V4
            vip_id: 1
        services:
          - port: "{{ vs_serviceport01 }}"
            enable_ssl: true
          - port: "{{ vs_serviceport02 }}"
        pool_ref: '/api/pool?name={{ pool_name }}'

When we run the playbook, all of the configuration is completed in about 30 seconds; doing the same work manually normally takes at least half an hour.

Health Monitor:

Session Persistence:

Load Balancing Pool:

HTTP Application Profile:

SSL Profile:

Virtual Service:

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part4

This blog is Part 4 of this series. If you have not gone through Part 1, Part 2 and Part 3, please check them out first.

In Part3, we set up an active-active global load balancing service for our testing application (https://www.sddc.vmconaws.link).

Some applications require stickiness between a client and a server. That is to say, all requests in a long-lived transaction from a client must be sent to the same server; otherwise, the application session may be broken. Unfortunately, in an active-active GSLB setup, we cannot guarantee that a client session is always sent to the same back-end server. For example, a client that was initially served by a back-end server in SDDC01 may be redirected to SDDC02 when a new DNS query is resolved to the SDDC02 VIP after the DNS TTL expires.

Avi Networks GSLB site cookie persistence is designed to handle this use case. When the traffic is received by the Avi LB in SDDC02, it checks the cookie within the request and finds that the session was initially connected to SDDC01. The SDDC02 Avi LB then works as a proxy and forwards the client’s traffic to the SDDC01 Avi LB. Please note that the source IP of the client traffic is changed to the local load balancing virtual IP (in our case, 192.168.100.100) by the SDDC02 Avi LB before it forwards the traffic to SDDC01. This source NAT is required because it ensures the return traffic from the back-end server takes the same path as the incoming traffic.

Step 1: Add a Federated PKI Profile

Go to Templates—>Security—>PKI Profile and click Create button to create a PKI profile. Input the parameters as below:

  • Name: gslb-pki-server
  • Enable CRL Check: No
  • Is Federated: Yes. That is to say that the PKI profile will be replicated across the federation: SDDC01 and SDDC02.
  • Certificate Authority: Add the self-signed certificate which we created in Part 2 as the CA. This will ensure that the AVi load balancer will trust the self-signed certificate presented by the peering SDDC when it works as a proxy for the client.

Step 2: Add a Federated Persistence Profile

Go to Templates—>Profiles—>Persistence and click Create button to create a GSLB persistence profile. Input the parameters as below:

  • Name: gslb-persistence01
  • Type: GSLB Site
  • Application Cookie Name: site-affinity-persistence
  • Is Federated: Yes. That is to say that the persistence profile will be replicated across the federation: SDDC01 and SDDC02.
  • Persistence Timeout: 30 mins

Step 3: Add a Federated Health Monitor

Go to Templates—>Profiles—>Health Monitors and click Create button to create a GSLB health monitor. Input the parameters as below:

  • Name: gslb-hm-https01
  • Type: HTTPS
  • Is Federated: Yes. That is to say that the health monitor will be replicated across the federation: SDDC01 and SDDC02.
  • Health Monitor Port: 443
  • Response Code: 2xx, 3xx

Step 4: Change the GSLB service

Go to Applications—>GSLB Service and edit the exiting GSLB service gslb-vs01.

  • Health Monitor: gslb-hm-https01
  • Site Persistence: Enabled
  • Site Cookie Application Persistence Profile: gslb-persistence01
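
For readers who automated their Avi configuration with the Terraform provider covered in my Terraform post, a rough sketch of the same change on the avi_gslbservice resource is shown below. The resource names (gslb_vs01, gslb_persistence01, gslb_hm_https01) are illustrative stand-ins for the federated profiles created in Steps 1 to 3; this is a sketch, not a verified configuration for this SDDC pair.

# Hedged sketch: enable GSLB site cookie persistence on an existing GSLB service.
resource "avi_gslbservice" "gslb_vs01" {
  name                                = "gslb-vs01"
  site_persistence_enabled            = true
  application_persistence_profile_ref = avi_applicationpersistenceprofile.gslb_persistence01.id
  health_monitor_refs                 = [avi_healthmonitor.gslb_hm_https01.id]
  # domain_names, groups and pool members stay as configured in Part 3
}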

After we have completed the above configuration, a new pool called SP-gslb-vs01-sddc01-vs01 is added to the local load balancing virtual service sddc01-vs01 on the SDDC01 Avi LB.

When we check the member information of the new pool, the virtual IP of local virtual service in the peering SDDC (SDDC02) is shown as the only pool member. Please note that this pool is created by Avi LB automatically so the settings cannot be changed by any users.

Similarly, on SDDC02 Avi LB, a new pool called SP-gslb-vs01-sddc02-vs01 is created and added to the local load balancing virtual service: sddc02-vs01.

Let’s verify our work.

First, the GSLB DNS resolves the DNS name (www.sddc.vmconaws.link) of our testing application to the VIP in SDDC01. When we input the URL https://www.sddc.vmconaws.link into the browser, we are served by centos02 in SDDC01 as expected.

Now change the DNS resolution to point to SDDC02 VIP. Go to our testing application again and we are still served by the same back-end server centos02 in SDDC01.

The cross-site traffic between Avi LBs can be verified via a packet capture on the SDDC02 Avi LB. From the capture, we can see that the HTTPS session destined for SDDC02 is now forwarded from SDDC02 to SDDC01 by the SDDC02 Avi LB.

Because GSLB site cookie persistence is based on an HTTP cookie, there are a few restrictions with it:

  • Site persistence applies only to Avi VIPs.
  • Site persistence across multiple virtual services within the same Controller cluster is not supported.
  • For site persistence to be turned on for a global application, all of its individual members must run on active sites.
  • Site persistence applies only to HTTP or HTTPS traffic where the Avi LB terminates the TLS/SSL session.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part3

This blog is Part 3 of this series. If you have not gone through Part 1 and Part 2, please check them out first.

In Part 1 and Part 2, we deployed the Avi Load Balancers and completed the local load balancing setup in VMC SDDC01. To achieve high availability across different SDDCs, global load balancing is required. In this blog, let’s set up an active-active global load balancing service for our testing web application so that the web servers in both SDDCs can serve clients simultaneously.

Section 1: Infrastructure

Task 1: Follow Part 1 and Part 2 to deploy the Avi load balancer and set up local load balancing in VMC SDDC02 as shown in the above diagram.

  • Avi Controller Cluster
    • Cluster IP: 192.168.101.3
    • Controller Node1: 192.168.101.4
    • Controller Node2: 192.168.101.5
    • Controller Node3: 192.168.101.6
  • SE Engine
    • SE01: 192.168.101.10
    • SE02: 192.168.101.11
  • LB Virtual Service:
    • VIP: 192.168.100.100 with back-end member server Centos03 (192.168.100.25)
  • NAT: 52.26.167.214<->192.168.100.100

Task 2: Connectivity between VMC SDDC and TGW

Please refer to my friend Avnish Tripathi’s blog (https://vmtechie.blog/2019/09/15/connect-aws-transit-gateway-to-vmware-cloud-on-aws/) to connect VMC SDDC01 and VMC SDDC02 to AWS Transit Gateway with Route-based VPN.

Task 3: Set up NAT for DNS Service virtual IP in VMC Console

SDDC01: static NAT 54.201.246.64<->192.168.96.101. Here 192.168.96.101 is the DNS virtual service VIP in SDDC01.

SDDC02: static NAT 52.32.129.180<->192.168.100.101. Here 192.168.100.101 is the DNS virtual service VIP in SDDC02.

Task 4: Add Firewall rules for GSLB

  • Allow inter-SDDC traffic as the below in VMC console.
  • Allow DNS traffic from the Internet to GSLB DNS service virtual IP

Task 5: DNS sub-domain delegation.

In the DNS server, delegate the sub-domain (sddc.vmconaws.link) to the public IPs corresponding to the two DNS virtual service VIPs. This means the two to-be-defined DNS virtual services will work as the name servers for the sub-domain.
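
As an illustration, the delegation in the parent vmconaws.link zone consists of NS records for the sub-domain plus glue A records pointing at the two public IPs from Task 3. The ns1/ns2 host names below are examples only.

; Delegation records in the vmconaws.link zone (illustrative name server names)
sddc.vmconaws.link.       IN NS  ns1.sddc.vmconaws.link.
sddc.vmconaws.link.       IN NS  ns2.sddc.vmconaws.link.
ns1.sddc.vmconaws.link.   IN A   54.201.246.64   ; SDDC01 DNS virtual service public IP
ns2.sddc.vmconaws.link.   IN A   52.32.129.180   ; SDDC02 DNS virtual service public IP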

Section 2: Enable GSLB

Task 1: Create a DNS virtual service

In the SDDC01 Avi Controller GUI, go to Application—>Virtual Services—>Create Virtual Service then input the parameters as below:

  • Name: sddc01-g-dns01
  • IPv4 VIP: 192.168.96.101
  • Application Profile: System-DNS
  • TCP/UDP Profile: System-UDP-Per-Pkt
  • Service Port: 53
  • Service Port: 53, override TCP/UDP with System-TCP-Proxy
  • Pool: leave blank

Leave the rest of the settings as default.
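
If you prefer to script this step with the Terraform Avi provider from my Terraform post, a rough sketch of the same DNS virtual service is below. It only covers the basic port 53 service; the System-UDP-Per-Pkt profile and the per-port TCP override selected in the GUI are not shown, and the vsvip resource name is illustrative.

# Hedged sketch of the sddc01-g-dns01 DNS virtual service
data "avi_applicationprofile" "system_dns" {
  name = "System-DNS"
}

resource "avi_vsvip" "sddc01_g_dns01_vip" {
  name = "sddc01-g-dns01-vip"
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = "192.168.96.101"
    }
  }
}

resource "avi_virtualservice" "sddc01_g_dns01" {
  name                    = "sddc01-g-dns01"
  application_profile_ref = data.avi_applicationprofile.system_dns.id
  vsvip_ref               = avi_vsvip.sddc01_g_dns01_vip.id
  services {
    port = 53
  }
}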

In SDDC02, create a similar DNS virtual service (sddc02-g-dns01) with VIP 192.168.100.101.

Task 2: GSLB Site

Avi uses GSLB sites to define different data centers. GSLB sites fall into two broad categories — Avi sites and external sites. This blog focuses on Avi sites. Each Avi site is characterized as either an active or a passive site. Active sites are further classified into two types — GSLB leader and followers. The active site from which the initial GSLB site configuration is performed, is the designated GSLB leader. GSLB configuration changes are permitted only by logging into the leader, which propagates those changes to all accessible followers. The only way to switch the leadership role to a follower is by overriding the configuration of the leader from a follower site. This override can be invoked in the case of site failures or for maintenance.

In our setup, SDDC01 will work as “Leader” site and SDDC02 will work as “Follower” site.

In SDDC01 Avi Controller GUI, go to Infrastructure—>GSLB and click the Edit icon to enable GSLB Service.

In the “New GSLB Configuration” window, input the parameters as below:

  • Name: sddc01-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • GSLB Subdomain: sddc.vmconaws.link

Then click “Save and Set DNS Virtual Services”.

Select the DNS virtual service newly defined in Task 1 as the “DNS Virtual Services” entry and configure “sddc.vmconaws.link” as the subdomain, then save the change.

Now the GSLB setup is as below.

Click the “Add New Site” button to add SDDC02 as the second GSLB site. Then input the parameters below in the “New GSLB Site” window:

  • Name: sddc02-gslb
  • Username: admin
  • Password: Password for Avi Controller
  • IP Address: 192.168.101.3 (SDDC02 Avi Cluster VIP)
  • Port: 443

Click the “Save and Set DNS Virtual Services” button. Then select the DNS virtual service newly defined in Task 1 as the “DNS Virtual Services” entry and configure “sddc.vmconaws.link” as the subdomain, then save the change.

Now the GSLB Site configuration is completed. We can see that “sddc01-gslb” works as the “Leader” site and “sddc02-gslb” works as the “Active” site.

Typically, the VIP configured in a local virtual service (configured as a GSLB pool member) is a private IP address. In our configuration, the VIPs are 192.168.x.x, and these IP addresses are not reachable by an Internet client. To handle this, we have to enable the NAT-aware Public-Private GSLB feature. Go to Infrastructure—>GSLB—>Active Members—>sddc01-gslb, then click the Edit icon. In the advanced settings, input the following parameters:

  • Type: Private
  • IP Address:
    • 10.0.0.0/8
    • 172.16.0.0/12
    • 192.168.0.0/16

Section 3: Create a GSLB Service

We are ready to create a GSLB service for our application (www.sddc.vmconaws.link) now. To achieve an active-active GSLB service and distribute the load evenly across the 3 back-end servers (2 in SDDC01 and 1 in SDDC02), we developed the following GSLB service design:

  • The GSLB service includes 1 GSLB pool.
  • There is one GSLB pool member in each SDDC.
  • Groups Load Balancing Algorithm: Priority-based
  • Pool Members Load Balancing Algorithm: Round Robin

Go to Application—>GSLB Services—>Create and click the “Advanced Setup” button. In the “New GSLB Service” window, input the following parameters:

  • Name: gslb-vs01
  • Application Name: www
  • Subdomain: .sddc.vmconaws.link
  • Health Monitor: System-GSLB-HTTPS
  • Group Load Balancing Algorithm: Priority-based
  • Health Monitor Scope: All Members
  • Controller Health Status: Enabled

Then click “Add Pool” and input the following parameters:

  • Name: gslb-vs01-pool
  • Pool Members Load Balancing Algorithm: Round Robin (Note: this means clients will be distributed across the local load balancers in both SDDC01 and SDDC02.)
  • Pool Member (SDDC01):
    • Site Cluster Controller: sddc01-gslb
    • Virtual Service: sddc01-vs01
    • Public IP: 34.216.94.228
    • Ratio: 2 (The virtual service will receive 2/3 load.)
  • Pool Member (SDDC02):
    • Site Cluster Controller: sddc02-gslb
    • Virtual Service: sddc02-vs01
    • Public IP: 52.26.167.214
    • Ratio: 1 (The virtual service will receive 1/3 load.)

We will change the following parameters as well for this GSLB service.

Now we have completed the setup of active-active GSLB for our web service.
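For readers who prefer configuration as code, the GSLB service we just built in the GUI could also be expressed with the Terraform Avi provider, roughly as in the sketch below. This is illustrative only: the avi_gslbservice attribute names follow my reading of the Avi GslbService object model, and the site cluster and virtual service UUIDs are left as placeholders that you would normally look up from the controllers.

data "avi_healthmonitor" "system_gslb_https" {
  name = "System-GSLB-HTTPS"
}

resource "avi_gslbservice" "gslb_vs01" {
  name = "gslb-vs01"

  # Application name "www" plus subdomain ".sddc.vmconaws.link"
  domain_names = ["www.sddc.vmconaws.link"]

  health_monitor_refs              = [data.avi_healthmonitor.system_gslb_https.id]
  health_monitor_scope             = "GSLB_SERVICE_HEALTH_MONITOR_ALL_MEMBERS"
  controller_health_status_enabled = true

  # Single GSLB pool: priority-based across groups, round robin across members
  groups {
    name      = "gslb-vs01-pool"
    priority  = 10
    algorithm = "GSLB_ALGORITHM_ROUND_ROBIN"

    # SDDC01 member: receives 2/3 of the load
    members {
      enabled      = true
      ratio        = 2
      cluster_uuid = "<sddc01 site cluster UUID>" # placeholder
      vs_uuid      = "<sddc01-vs01 UUID>"         # placeholder
      ip {
        type = "V4"
        addr = "192.168.96.100"
      }
      public_ip {
        ip {
          type = "V4"
          addr = "34.216.94.228"
        }
      }
    }

    # SDDC02 member: receives 1/3 of the load
    members {
      enabled      = true
      ratio        = 1
      cluster_uuid = "<sddc02 site cluster UUID>" # placeholder
      vs_uuid      = "<sddc02-vs01 UUID>"         # placeholder
      ip {
        type = "V4"
        addr = "192.168.100.100"
      }
      public_ip {
        ip {
          type = "V4"
          addr = "52.26.167.214"
        }
      }
    }
  }
}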

Let’s verify our work.

  • The GSLB DNS service will respond to DNS queries for the name www.sddc.vmconaws.link with the public IP of the SDDC01 web virtual service or the public IP of the SDDC02 web virtual service via the round-robin algorithm.
  • The web servers in both SDDCs serve the clients simultaneously.

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part2

This blog is Part 2 of this series. If you have not gone through Part 1, please go and check it out now.

In Part 2, we will demo how to set up a local load balancing virtual service for a web-based application on our deployed Avi load balancer. The IP Address allocation and network connectivity are shown below.

Hundreds of features are available when setting up a local load balancing service on the Avi load balancer. In this blog, we will focus on the most widely used features in an enterprise load balancing solution:

  • TLS/SSL Termination
  • Session Persistence
  • Health Monitor

Section 1: TLS/SSL Termination

The following deployment architectures are supported by Avi Load balancer (LB) for SSL:

  • None: SSL traffic is handled as pass-through (layer 4), flowing through Avi LB without terminating the encrypted traffic.
  • Client-side: Traffic from the client to Avi LB is encrypted, with unencrypted HTTP to the back-end servers.
  • Server-side: Traffic from the client to Avi LB is unencrypted HTTP, with encrypted HTTPS to the back-end servers.
  • Both: Traffic from the client to Avi LB is encrypted and terminated at Avi LB, which then re-encrypts traffic to the back-end server.
  • Intercept: Terminate client SSL traffic, send it unencrypted over the wire for taps to intercept, then encrypt to the destination server.

We will use Client-side deployment architecture here.

Step 1: Get or Generate a certificate

Please note that a CA-signed certificate is highly recommended for any production system. We will use a self-signed certificate here for simplification. Go to Templates—>Security—>SSL/TLS Certificates, where all installed certificates are listed. A self-signed certificate is shown; its subject name is www.sddc.vmconaws.link.
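As a side note, if you manage certificates as code, a comparable self-signed certificate could be generated with the HashiCorp TLS provider and imported into the Avi controller via the Terraform Avi provider, roughly as sketched below. The resource and attribute names are my assumptions and should be checked against your provider versions.

# Generate a self-signed key pair and certificate for www.sddc.vmconaws.link
resource "tls_private_key" "web_key" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "tls_self_signed_cert" "web_cert" {
  key_algorithm         = "RSA"
  private_key_pem       = tls_private_key.web_key.private_key_pem
  validity_period_hours = 8760
  allowed_uses          = ["key_encipherment", "digital_signature", "server_auth"]

  subject {
    common_name = "www.sddc.vmconaws.link"
  }
}

# Import the key and certificate into the Avi controller
resource "avi_sslkeyandcertificate" "web_cert" {
  name = "www.sddc.vmconaws.link"
  type = "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
  key  = tls_private_key.web_key.private_key_pem

  certificate {
    certificate = tls_self_signed_cert.web_cert.cert_pem
  }
}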

Step 2: Create a customized SSL/TLS profile

The system default SSL/TLS profile still includes support for TLS 1.0, which is no longer considered a secure protocol. So we will go to Templates—>Security—>SSL/TLS Profile to create a new SSL/TLS profile that excludes TLS 1.0, as below:
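The same profile can be captured as code. A minimal sketch with the Terraform Avi provider (assuming the avi_sslprofile resource and the SSL_VERSION_* enums of the Avi API; the profile name is made up for illustration) might look like this:

resource "avi_sslprofile" "sddc01_vs01_ssl" {
  name = "sddc01-vs01-ssl"

  # Accept only TLS 1.1 and TLS 1.2; TLS 1.0 is deliberately excluded
  accepted_versions {
    type = "SSL_VERSION_TLS1_1"
  }

  accepted_versions {
    type = "SSL_VERSION_TLS1_2"
  }
}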

Section 2: Session Persistence

Cookie persistence is the most common persistence mechanism for a web-based application. Here we will define a persistence profile for our testing web application. Go to Templates—>Profiles—>Persistence and click the “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-pp01
  • Type: HTTP Cookie
  • HTTP Cookie Name: vmconaws-demo
  • Persistence Timeout: 30 minutes

Please note that the cookie payload contains the back-end server IP address and port, which is encrypted with AES-256. 
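For completeness, here is a hedged Terraform sketch of the same HTTP cookie persistence profile. The attribute names assume the Avi ApplicationPersistenceProfile object as exposed by the Terraform Avi provider; the timeout is expressed in minutes.

resource "avi_applicationpersistenceprofile" "sddc01_vs01_pp01" {
  name             = "sddc01-vs01-pp01"
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"

  http_cookie_persistence_profile {
    cookie_name = "vmconaws-demo"
    timeout     = 30 # minutes
  }
}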

Section 3: Health Monitor

Avi load balancer uses health monitors to check whether the back-end servers in the load balancing pool are healthy enough to provide the required service. There are two kinds of health monitors:

  • Active Health Monitor: Active health monitors send proactive queries to servers, synthetically mimicking a client. Send and receive timeout intervals may be defined, which statically determine the server response as successful or failed.
  • Passive Health Monitor: While active health monitors provide a binary good/bad analysis of server health, passive health monitors provide a more subtle check by attempting to understand and react to the client-to-server interaction. For example, if a server is quickly responding with valid responses (such as HTTP 200), then all is well; however, if the server is sending back errors (such as TCP resets or HTTP 5xx errors), the server is assumed to have errors. 

Only active health monitors may be edited. The passive monitor has no settings.

Note: Best practice is to attach both a passive and an active health monitor to each pool.

Let’s start by creating an active health monitor for our application. Go to Templates—>Profiles—>Health Monitors and click the “Create” button, then input the parameters as below:

  • Name: sddc01-vs01-hm01
  • Server Response Data: sddc01
  • Server Response Code: 2xx
  • Health Monitor Port: 80 (Please note that we don’t change the default setting here, but this option can be very useful for some cluster-based applications.)
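The equivalent active health monitor as code could look roughly like the sketch below. The HEALTH_MONITOR_HTTP type and http_monitor attributes follow my understanding of the Avi HealthMonitor object; verify them against your provider version.

resource "avi_healthmonitor" "sddc01_vs01_hm01" {
  name         = "sddc01-vs01-hm01"
  type         = "HEALTH_MONITOR_HTTP"
  monitor_port = 80

  http_monitor {
    http_request       = "GET / HTTP/1.0" # default request, assumed
    http_response_code = ["HTTP_2XX"]
    http_response      = "sddc01"         # expected string in the response body
  }
}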

Section 4: Create a Load Balancing Pool

Now it is time to create the load balancing pool. Go to Application—>Pools and click “Create Pool”. In the Step 1 window, input the parameters as below:

  • Load Balance: Least Connections
  • Persistence: sddc01-vs01-pp01

Add an active health monitor: sddc01-vs01-hm01.

Add two member servers:

  • centos01: 192.168.96.25
  • centos02: 192.168.96.26
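Pulling the persistence profile and health monitor together, a hedged Terraform sketch of this pool might look as follows. The pool name sddc01-vs01-pool and the member port 80 are my assumptions for illustration.

resource "avi_pool" "sddc01_vs01_pool" {
  name                                = "sddc01-vs01-pool"
  lb_algorithm                        = "LB_ALGORITHM_LEAST_CONNECTIONS"
  health_monitor_refs                 = [avi_healthmonitor.sddc01_vs01_hm01.id]
  application_persistence_profile_ref = avi_applicationpersistenceprofile.sddc01_vs01_pp01.id

  # Member server centos01
  servers {
    port = 80
    ip {
      type = "V4"
      addr = "192.168.96.25"
    }
  }

  # Member server centos02
  servers {
    port = 80
    ip {
      type = "V4"
      addr = "192.168.96.26"
    }
  }
}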

Section 5: Create a Virtual Service

We will use the “Advanced Setup” to create a virtual service for our web application.

In the “Step 1: Settings” window, input the parameters as below:

We use the system pre-defined application profile “System-HTTP” as the applied Application Profile for simplification here. The “System-HTTP” profile includes comprehensive configuration options for a web application, which could fill a separate blog of its own. Let’s list a few here:

  • X-Forwarded-For: Avi SE will insert an X-Forwarded-For (XFF) header into the HTTP request headers when the request is passed to the server. This feature is enabled.
  • Preserve Client IP Address: Avi SE will use the client-IP rather than SNAT IP for load-balanced connections from the SE to back-end application servers. This feature is disabled.
  • HTTP-to-HTTPS Redirect: Client requests received via HTTP will be redirected to HTTPS. This feature is disabled.

Leave all settings as default for Steps 2 and 3.

In “Step 4: Advanced”, input the parameters as below:

  • Use VIP as SNAT: enabled
  • SE Group: Default-Group
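Putting the pieces from the previous sections together, the whole virtual service could be sketched in Terraform roughly as below. Again, this is illustrative only: it references the pool, SSL profile and certificate sketched earlier in this post, and attribute names such as use_vip_as_snat and se_group_ref follow my reading of the Avi VirtualService object model.

data "avi_applicationprofile" "system_http" {
  name = "System-HTTP"
}

data "avi_cloud" "vmc_default_cloud" {
  name = "Default-Cloud"
}

data "avi_serviceenginegroup" "default_se_group" {
  name      = "Default-Group"
  cloud_ref = data.avi_cloud.vmc_default_cloud.id
}

# VIP object for the web virtual service
resource "avi_vsvip" "sddc01_vs01_vip" {
  name = "sddc01-vs01-vip"
  vip {
    vip_id = "1"
    ip_address {
      type = "V4"
      addr = "192.168.96.100"
    }
  }
}

resource "avi_virtualservice" "sddc01_vs01" {
  name                         = "sddc01-vs01"
  vsvip_ref                    = avi_vsvip.sddc01_vs01_vip.id
  pool_ref                     = avi_pool.sddc01_vs01_pool.id
  application_profile_ref      = data.avi_applicationprofile.system_http.id
  ssl_profile_ref              = avi_sslprofile.sddc01_vs01_ssl.id
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.web_cert.id]
  se_group_ref                 = data.avi_serviceenginegroup.default_se_group.id
  use_vip_as_snat              = true

  # HTTPS on 443 with client-side TLS termination
  services {
    port       = 443
    enable_ssl = true
  }
}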

Section 6: VMC Setup

To enable users’ access to our testing web application, two changes are required in the VMC SDDC.

  • Network Address Translation
  • A CGW firewall rule to allow traffic from the Internet to the LB VIP (192.168.96.100) on HTTPS


So far, we have completed all the load balancing configurations. Let’s verify our work.

Application web page (https://www.sddc.vmconaws.link):

Session Persistence Cookie:

This is the end of this blog. Thank you very much for reading!

Build Load Balancing Service in VMC on AWS with Avi Load Balancer – Part1

When we design a highly available (HA) infrastructure for a mission-critical application, local load balancing and global load balancing are always the essential components of the solution. This series of blogs will demonstrate how to build an enterprise-level local load balancing and global load balancing service in VMC on AWS SDDC with Avi Networks load balancer.

This series of blogs will cover the following topics:

  1. How to deploy Avi load balancer in a VMC SDDC;
  2. How to set up local load balancing service to achieve HA within a VMC SDDC (https://davidwzhang.com/2019/09/21/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part2/)
  3. How to set up global load balancing service to achieve HA across different SDDCs which are in different AWS Availability Zones (https://davidwzhang.com/2019/09/30/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part3/)
  4. How to set up global load balancing site affinity (https://davidwzhang.com/2019/10/08/build-load-balancing-service-in-vmc-on-aws-with-avi-load-balancer-part4/)
  5. How to automate Avi LB with Ansible (https://davidwzhang.com/2019/10/14/automate-avi-lb-service-with-ansible/)

By the end of this series, we will have completed an HA infrastructure build as shown in the following diagram: this design leverages the local load balancing service and the global load balancing service to provide a 99.99%+ SLA to a web-based mission-critical application.

The Avi load balancer platform is built on software-defined architectural principles which separate the data plane and control plane. The product components include:

  • Avi Controller (control plane): The Avi Controller stores and manages all policies related to services and management. HA of the Avi Controller requires 3 separate Controller instances, configured as a 3-node cluster.
  • Avi Service Engines (data plane): Each Avi Service Engine runs on its own virtual machine. The Avi SEs provide the application delivery services to end-user traffic, and also collect real-time end-to-end metrics for traffic between end-users and applications.

In Part 1, we will cover the deployment of Avi load balancer. The diagram below shows the controller and service engine (SE) network connectivity and IP address allocation.

Depending on the level of vCenter access provided, Avi load balancer supports 3 modes of deployment. In VMC on AWS, only the “no-access” mode is supported. Please refer to https://avinetworks.com/docs/ for more information about Avi load balancer deployment modes in VMware Cloud.

Section 1: Controller Cluster

Let’s start by deploying the Avi controllers and setting up the controller cluster. First, download the ova package for the controller appliance. In this demo, the version of the Avi load balancer controller is v18.2.5. After the download, deploy the controller virtual appliance via the “Deploy OVF Template” wizard in the VMC SDDC vCenter. In the “Customize template” window, input the parameters as below:

  • Management interface IP: 192.168.80.4
  • Management interface Subnet mask: 255.255.255.0
  • Default gateway: 192.168.80.1
  • Sysadmin login authentication key: Password

After this 1st controller appliance is deployed and powered on, we are ready to start the controller’s initial configuration. Go to the controller management GUI at https://192.168.80.4 and complete the following:

(1) Username/Password

(2) DNS and NTP

(3) SMTP

(4) Multiple-Tenants? Select No here for simplification.

The initial configuration for the 1st controller is completed. As the first controller of the cluster, it will receive the “Leader” role. The second and third controllers will work as “Followers”. Once logged in to the GUI of this first controller, go to Administration—>Controller, as shown below.

Similarly, deploy and perform the initial configuration for the 2nd (192.168.80.5) and 3rd (192.168.80.6) controllers.

In the management GUI of the 1st controller, go to Administration—>Controller and click “Edit”. In “Edit Controller Configuration” window, add the second node and third node into the cluster as below.

After a few minutes, the cluster is set up successfully.

Section 2: Service Engine

Now we are ready to deploy the SE virtual appliances. In this demo, two SEs will be deployed. These 2 SEs are added into the default Service Engine Group with the default HA mode (N+M).

Step 1: Create and download the SE image.

Go to Infrastructure—>Clouds, click the download icon and select the ova format. Please note that this SE ova package is only for the linked controller cluster; it cannot be used for another controller cluster.

Step 2: Get the cluster UUID and authentication token for SE deployment.

Step 3: In the SDDC vCenter, run the “Deploy OVF Template” wizard to import the SE ova package. In the “Customize template” window, input the parameters:

  • IP Address of the Avi Controller: 192.168.80.3 (cluster IP of the controller)
  • Authentication token for Avi Controller: as per Step 2
  • Controller Cluster UUID for Avi Controller: as per Step 2
  • Management Interface IP Address: 192.168.80.10
  • Management Interface Subnet Mask: 255.255.255.0
  • Default Gateway: 192.168.80.1
  • DNS Information: 10.1.1.151
  • Sysadmin login authentication key: Password

Please note that the second vNIC will be used as the SE data interface.

Then continue to deploy the second SE (mgmt IP: 192.168.80.11/24).

The deployed SEs will register themselves with the controller cluster as below.

Step 4: Now the SEs have established the control and management plane communication with the controller cluster. It is time to set up the SE’s data network.

During the setup, I found that the vNICs of the virtual appliance VM and the SE Ethernet interfaces are not mapped one-to-one; for example, the data interface is the 2nd vNIC of the SE VM in vCenter but shows up as Ethernet 5 in the SE network setup. To find the correct mapping, the MAC address of the data vNIC is leveraged. Go to the SDDC vCenter and get the MAC address of the SE data interface.

In the controller management GUI, go to Infrastructure—>Service Engine and edit the selected SE. In the interface list, select the correct interface which has the same MAC address, then provide the IP address and subnet mask.

The final step is to add a gateway for this data interface. Go to Infrastructure—>Routing—>Static Route and create a new static default route.

Tip: VM-VM anti-affinity policy is highly recommended to enhance the HA of the controller and service engine virtual appliances.

This is the end of the blog. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Okta IdP

The Federated Identity feature of VMware Cloud on AWS can be integrated with any 3rd party IdP that supports SAML version 2.0. In this integration model, the customer-dedicated vIDM tenant works as the SAML Service Provider. If the 3rd party IdP is set up to perform multi-factor authentication (MFA), the customer will be prompted for MFA when accessing VMware Cloud services. In this blog, the integration with Okta, one of the most popular IdPs, will be demoed.

Disclaimer:

The Okta IdP settings in this blog are intended to demo the integration with vIDM; they may not be the best practice for your environment or meet your business and security requirements.

Note: please complete the vIDM connector installation and the vIDM tenant basic setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before continuing.

To add the same users and user groups in Okta IdP as the configured vIDM tenant, we need to integrate Okta with corporate Active Directory (AD). The integration is via Okta’s lightweight agent.

Click the “Directory Integration” in Okta UI.

Click “Add Active Directory”.

The Active Directory integration setup wizard will start and click “Set Up Active Directory”.

Download the agent as required in the below window.

This agent can be installed on Windows Server 2008 R2 or later. The installation of this Okta agent is quite straightforward. Once the agent installation is completed, you need to perform the setup of this AD integration. In the basic settings window, select the Organizational Units (OUs) that you’d like to sync users or groups from, and make sure that “Okta username format” is set to use the User Principal Name (UPN).

In the “Build User Profile” window, select any custom schema which needs to be included in the Okta user profile and click Next.

Click Done to finish the integration setup.

The Okta directory setting window will pop up.

Enable Just-In-Time provisioning and set Schedule Import to perform a user import every hour. Review and save the settings.

Now go to the Import tab and click “Import Now” to import the users from corporate AD.

As this is the first time importing users from the corporate AD, select “Full Import” and click Import.

When the scan is finished, Okta will report the result. Click OK.

Select the user/users to be imported and confirm the user assignment. Note: the user jsmith@lab.local is imported here and will be used for the final integration testing.

Now it is time to set up the SAML IdP in Okta.

Go to Okta Classic UI application tab and click “Add Application”

Click “Create New App”;

Select Web as the Platform and “SAML 2.0” for Sign on method and click Create;

Type in the App name (“csp-vidm” is used as an example here) and click Next;

There are two mandatory configuration items in the “Create SAML Integration” window that pops up. This information can be copied from the Identity Provider settings within the vIDM tenant.

Go to vIDM tenant administrator console and click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Type in the “Identity Provider Name”; here the example name is “Okta01”.

Go to the bottom of this IdP creation window and click “Service Provider (SP) Metadata”.

A new window will pop up as the below:

The entity ID and HTTP-POST location are the required information for the Okta IdP SAML settings. Copy the entity ID URL into the “Audience URI (SP Entity ID)” field and the HTTP-POST location into the “Single sign on URL” field in the Okta “Create SAML Integration” window.

Leave all other configuration items as the default and click Next;

In the Feedback window, indicate that the newly created app is an internal app and click Finish.

A “Sign On settings” window will pop up as below, click “Identity Provider metadata” link.

The Identity Provider metadata in XML format shows up. Select all the content of this XML file and copy it.

Paste the Okta IdP metadata into the SAML Metadata field and click “Process IdP Metadata” in the vIDM 3rd party identity provider creation window.

The “SAML AuthN Request Binding” and “Name ID format mapping from SAML Response” will be updated as below:

Select the “lab.local” directory as the users who can authenticate with this new 3rd party IdP and leave the Network as the default “ALL RANGES”. Then create a new authentication method called “Okta Auth” with SAML Context “urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtected”. Please note that the name of this newly created authentication method has to be different from any existing authentication method.

Then leave all other configuration items’ boxes unchecked and click Add.

The 3rd party IdP has been successfully added now.

The last step of the vIDM setup for this Okta integration is updating the default access policy to use the newly defined authentication method “Okta Auth”. Please follow the steps in my previous blog (https://wordpress.com/block-editor/post/davidwzhang.com/308) to perform the required update. The updated default access policy should look similar to the below.

Before testing the setup, go to the Okta UI to assign users to the newly defined SAML 2.0 web application “csp-vidm”. Click Assignments.

Click Assign and select “Assign to People”.

In the “Assign csp-vidm to People” window, assign user John Smith (jsmith@lab.local), which means that the user John Smith is allowed by this SAML 2.0 application.

After the assignment is completed, user John Smith is assigned to this SAML 2.0 application “csp-vidm”.

Instead of assigning individual users, AD groups can be assigned to the SAML application as well.

Finally, everything is ready to test the integration.

Open a new Incognito window in a Chrome browser and type in the vIDM tenant URL then click Enter.

In the login window, type the username jsmith@lab.local and click Next.

The authentication session is redirected to Okta.

Type in Username & Password and click “Sign In”.

Then John Smith (jsmith@lab.local) successfully logs in to the vIDM tenant.

This is the end of this demo. Thank you very much for reading!