Zero Code NSX Advanced LB Automation with Terraform

VMware NSX Advanced Load Balancer (Avi Networks) provides multi-cloud load balancing, web application firewall, application analytics and container ingress services across on-premises data centers and any cloud.

Terraform is a widely adopted Infrastructure as Code tool that lets you define your infrastructure in a simple, declarative language and deploy and manage it across public cloud providers including AWS, Azure and Google Cloud. NSX Advanced Load Balancer (also known as Avi) is fully supported by Terraform: each Avi REST resource is exposed as a Terraform resource. With the Terraform Avi Provider, you can manage your load balancing services as code.

In this blog, I will show you how easy it is to build an LBaaS service (local load balancing plus global load balancing across two DCs) for a critical (99.99%+ SLA) web application on NSX Advanced Load Balancer with Terraform in minutes.

My testing environment is set up as below:

  • Two DCs: site01 and site02
  • A controller cluster in each site
  • Two GSLB sites configured, with site01 as the leader site
  • Terraform v0.12
  • NSX Advanced Load Balancer v18.2.9
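Because this plan uses Terraform 0.12 syntax, it is worth pinning the core version up front (in 0.12 you can also add a `version` constraint inside each provider block). A minimal sketch; the constraint shown is illustrative:

```hcl
# Require Terraform 0.12 or later; this plan uses 0.12 syntax.
terraform {
  required_version = ">= 0.12"
}
```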

The Terraform plan will create the following resources:

  • Five web servers as pool members in each DC
  • Two local load balancing pools in each DC: the first two web servers are members of pool1 and the remaining three are members of pool2
  • A pool group in each DC that includes the above two pools: pool1 is In Service and pool2 is Out of Service
  • A virtual service in each DC to provide local load balancing
  • An SSL profile in each DC to define how SSL sessions are terminated on the NSX Advanced Load Balancer
  • An HTTP cookie-based persistence profile in each DC to provide web session persistence for local load balancing
  • A certificate and key for the web application's HTTPS service
  • An HTTP health monitor in each DC to check the health of local load balancing pool members
  • A global load balancing PKI profile
  • A global load balancing health monitor
  • A global load balancing persistence profile
  • A global load balancing service

Also, a few outputs are defined to display the results of the Terraform plan.

You can access main.tf and variables.tf on GitHub here.
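For reference, variables.tf declares the inputs that main.tf consumes. A trimmed sketch is below; the controller addresses are from my lab, and the remaining names simply mirror the variable references in main.tf:

```hcl
# Controller cluster address for each site
variable "site1controller" {
  type    = string
  default = "10.1.1.250"
}

variable "site2controller" {
  type    = string
  default = "10.1.1.170"
}

# VIPs of the local virtual services; also used as GSLB pool members
variable "gslb_site01_vs01_vip" {
  type = string
}

variable "gslb_site02_vs01_vip" {
  type = string
}

# GSLB FQDN and the site names as configured in the GSLB setup
variable "gslb_dns" {
  type = string
}

variable "site01_name" {
  type = string
}

variable "site02_name" {
  type = string
}

# Web server addresses: avi_site01_server_web11..web15 and
# avi_site02_server_web21..web25 all follow this pattern
variable "avi_site01_server_web11" {
  type = string
}
```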

# Default provider: site01 controller
provider "avi" {
  avi_username   = "admin"
  avi_tenant     = "admin"
  avi_password   = "password"
  avi_controller = var.site1controller
}

# Aliased provider: site02 controller
provider "avi" {
  alias          = "site02"
  avi_username   = "admin"
  avi_tenant     = "admin"
  avi_password   = "password"
  avi_controller = var.site2controller
}

data "avi_tenant" "default_tenant" {
  name = "admin"
}

data "avi_cloud" "default_cloud" {
  name = "Default-Cloud"
}

data "avi_tenant" "site02_default_tenant" {
  provider = avi.site02
  name = "admin"
}

data "avi_cloud" "site02_default_cloud" {
  provider = avi.site02
  name = "Default-Cloud"
}

data "avi_serviceenginegroup" "se_group" {
  name      = "Default-Group"
  cloud_ref = data.avi_cloud.default_cloud.id
}

data "avi_gslb" "gslb_demo" {
  name = "Default"
}

data "avi_virtualservice" "site01_vs01" {
  name = "gslb_site01_vs01"
}

data "avi_virtualservice" "site02_vs01" {
  name = "gslb_site02_vs01"
}

data "avi_applicationprofile" "site01_system_https_profile" {
  name = "System-Secure-HTTP"
}

data "avi_applicationprofile" "site02_system_https_profile" {
  provider = avi.site02
  name = "System-Secure-HTTP"
}

### Start of Site01 setup
resource "avi_sslprofile" "site01_sslprofile" {
    name = "site01_sslprofile"
    ssl_session_timeout = 86400
    tenant_ref = data.avi_tenant.default_tenant.id
    accepted_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
    prefer_client_cipher_ordering = false
    enable_ssl_session_reuse = true
    accepted_versions {
      type = "SSL_VERSION_TLS1_1"
    }
    accepted_versions {
      type = "SSL_VERSION_TLS1_2"
    }
    cipher_enums = [
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
    send_close_notify = true
    type = "SSL_PROFILE_TYPE_APPLICATION"
    enable_early_data = false
    ssl_rating {
      compatibility_rating = "SSL_SCORE_EXCELLENT"
      security_score = 100.0
      performance_rating = "SSL_SCORE_EXCELLENT"
    }
  }

resource "avi_applicationpersistenceprofile" "site01_applicationpersistenceprofile" {
  name  = "site01_app-pers-profile"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = false
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_vsvip" "site01_vs01_vip" {
  name = "site01_vs01_vip"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = var.gslb_site01_vs01_vip
    }
  }
}

resource "avi_sslkeyandcertificate" "site01_cert1000" {
    name = "site01_cert1000"
    tenant_ref = data.avi_tenant.default_tenant.id
    certificate {
        certificate = file("${path.module}/www.sddc.vmconaws.link.crt")
        }
    key = file("${path.module}/www.sddc.vmconaws.link.key")
    type= "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
}

resource "avi_virtualservice" "gslb_site01_vs01" {
  name = "gslb_site01_vs01"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  pool_group_ref = avi_poolgroup.site01_pg-1.id
  vsvip_ref  = avi_vsvip.site01_vs01_vip.id
  application_profile_ref = data.avi_applicationprofile.site01_system_https_profile.id
  services {
        port = 443
        enable_ssl = true
        port_range_end = 443
        }
  cloud_type                   = "CLOUD_VCENTER"
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.site01_cert1000.id]
  ssl_profile_ref = avi_sslprofile.site01_sslprofile.id
}

resource "avi_healthmonitor" "site01_hm_1" {
  name = "site01_monitor"
  type = "HEALTH_MONITOR_HTTP"
  tenant_ref = data.avi_tenant.default_tenant.id
  receive_timeout = "4"
  is_federated = false
  failed_checks = "3"
  send_interval = "10"
  http_monitor {
        exact_http_request = false
        http_request = "HEAD / HTTP/1.0"
        http_response_code = ["HTTP_2XX","HTTP_3XX","HTTP_4XX"]
        }
  successful_checks = "3"
}

resource "avi_pool" "site01_pool-1" {
  name = "site01_pool-1"
  health_monitor_refs = [avi_healthmonitor.site01_hm_1.id]
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site01_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  lb_algorithm = "LB_ALGORITHM_LEAST_CONNECTIONS"
}

resource "avi_pool" "site01_pool-2" {
  name = "site01_pool-2"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref = data.avi_cloud.default_cloud.id
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site01_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  ignore_servers = true
}

resource "avi_poolgroup" "site01_pg-1" {
  name = "site01_pg-1"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref = data.avi_cloud.default_cloud.id
  members {
    pool_ref = avi_pool.site01_pool-1.id
    ratio = 100
    deployment_state = "IN_SERVICE"
  }
  members {
    pool_ref = avi_pool.site01_pool-2.id
    ratio = 0
    deployment_state = "OUT_OF_SERVICE"
  }
}

resource "avi_server" "site01_server_web11" {
  ip       = var.avi_site01_server_web11
  port     = "80"
  pool_ref = avi_pool.site01_pool-1.id
  hostname = "server_web11"
}

resource "avi_server" "site01_server_web12" {
  ip       = var.avi_site01_server_web12
  port     = "80"
  pool_ref = avi_pool.site01_pool-1.id
  hostname = "server_web12"
}

resource "avi_server" "site01_server_web13" {
  ip       = var.avi_site01_server_web13
  port     = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web13"
}

resource "avi_server" "site01_server_web14" {
  ip       = var.avi_site01_server_web14
  port     = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web14"
}

resource "avi_server" "site01_server_web15" {
  ip = var.avi_site01_server_web15
  port = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web15"
}

### End of Site01 setup ###
### Start of Site02 setup ###
resource "avi_sslprofile" "site02_sslprofile" {
    provider = avi.site02
    name = "site02_sslprofile"
    ssl_session_timeout = 86400
    tenant_ref = data.avi_tenant.site02_default_tenant.id
    accepted_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
    prefer_client_cipher_ordering = false
    enable_ssl_session_reuse = true
    accepted_versions {
      type = "SSL_VERSION_TLS1_1"
    }
    accepted_versions {
      type = "SSL_VERSION_TLS1_2"
    }
    cipher_enums = [
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
    send_close_notify = true
    type = "SSL_PROFILE_TYPE_APPLICATION"
    enable_early_data = false
    ssl_rating {
      compatibility_rating = "SSL_SCORE_EXCELLENT"
      security_score = 100.0
      performance_rating = "SSL_SCORE_EXCELLENT"
    }
  }


resource "avi_applicationpersistenceprofile" "site02_applicationpersistenceprofile" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name  = "site02_app-pers-profile"
  is_federated = false
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_vsvip" "site02_vs01_vip" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_vs01_vip"
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = var.gslb_site02_vs01_vip
    }
  }
}

resource "avi_sslkeyandcertificate" "site02_cert1000" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_cert1000"
  certificate {
      certificate = file("${path.module}/www.sddc.vmconaws.link.crt")
      }
  key = file("${path.module}/www.sddc.vmconaws.link.key")
  type= "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
}

resource "avi_virtualservice" "gslb_site02_vs01" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "gslb_site02_vs01"
  pool_group_ref = avi_poolgroup.site02_pg-1.id
  vsvip_ref  = avi_vsvip.site02_vs01_vip.id
  application_profile_ref = data.avi_applicationprofile.site02_system_https_profile.id
  services {
        port = 443
        enable_ssl = true
        port_range_end = 443
        }
  cloud_type = "CLOUD_VCENTER"
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.site02_cert1000.id]
  ssl_profile_ref = avi_sslprofile.site02_sslprofile.id
}

resource "avi_healthmonitor" "site02_hm_1" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_monitor"
  type  = "HEALTH_MONITOR_HTTP"
  receive_timeout = "4"
  is_federated = false
  failed_checks = "3"
  send_interval = "10"
  http_monitor {
        exact_http_request = false
        http_request = "HEAD / HTTP/1.0"
        http_response_code = ["HTTP_2XX","HTTP_3XX","HTTP_4XX"]
        }
  successful_checks = "3"
}

resource "avi_pool" "site02_pool-1" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pool-1"
  health_monitor_refs = [avi_healthmonitor.site02_hm_1.id]
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site02_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  lb_algorithm = "LB_ALGORITHM_LEAST_CONNECTIONS"
}

resource "avi_pool" "site02_pool-2" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pool-2"
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site02_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  ignore_servers = true
}

resource "avi_poolgroup" "site02_pg-1" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pg-1"
  members {
    pool_ref = avi_pool.site02_pool-1.id
    ratio = 100
    deployment_state = "IN_SERVICE"
  }
  members {
    pool_ref = avi_pool.site02_pool-2.id
    ratio = 0
    deployment_state = "OUT_OF_SERVICE"
  }
}

resource "avi_server" "site02_server_web21" {
  provider = avi.site02
  ip = var.avi_site02_server_web21
  port = "80"
  pool_ref = avi_pool.site02_pool-1.id
  hostname = "server_web21"
}

resource "avi_server" "site02_server_web22" {
  provider = avi.site02
  ip = var.avi_site02_server_web22
  port = "80"
  pool_ref = avi_pool.site02_pool-1.id
  hostname = "server_web22"
}


resource "avi_server" "site02_server_web23" {
  provider = avi.site02
  ip = var.avi_site02_server_web23
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web23"
}

resource "avi_server" "site02_server_web24" {
  provider = avi.site02
  ip = var.avi_site02_server_web24
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web24"
}

resource "avi_server" "site02_server_web25" {
  provider = avi.site02
  ip = var.avi_site02_server_web25
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web25"
}

### End of Site02 setup ###

### Start of GSLB setup ###

# Only one federated PKI profile is required: it is created on one site and replicated to the other GSLB sites
resource "avi_pkiprofile" "terraform_gslb_pki" {
    name = "terraform_gslb_pki"
    tenant_ref = data.avi_tenant.default_tenant.id
    crl_check = false
    is_federated = true
    ignore_peer_chain = false
    validate_only_leaf_crl = true
    ca_certs {
      certificate = file("${path.module}/ca-bundle.crt")
    }
}

resource "avi_applicationpersistenceprofile" "terraform_gslbsite_pesistence" {
  name = "terraform_gslbsite_pesistence"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = true
  persistence_type = "PERSISTENCE_TYPE_GSLB_SITE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_healthmonitor" "terraform_gslbsite_hm01" {
  name = "terraform_gslbsite_hm01"
  type = "HEALTH_MONITOR_PING"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = true
  failed_checks = "3"
  send_interval = "10"
  successful_checks = "3"
}

resource "avi_gslbservice" "terraform_gslb-01" {
  name = "terraform_gslb-01"
  tenant_ref = data.avi_tenant.default_tenant.id
  domain_names = [var.gslb_dns]
  depends_on = [
    avi_pkiprofile.terraform_gslb_pki
  ]
  wildcard_match = false
  application_persistence_profile_ref = avi_applicationpersistenceprofile.terraform_gslbsite_pesistence.id
  health_monitor_refs = [avi_healthmonitor.terraform_gslbsite_hm01.id]
  site_persistence_enabled = true
  is_federated = false
  use_edns_client_subnet= true
  enabled = true
  groups { 
      priority = 10
      consistent_hash_mask=31
      consistent_hash_mask6=31
      members {
        ip {
           type = "V4"
           addr = var.gslb_site01_vs01_vip
        }
        vs_uuid = avi_virtualservice.gslb_site01_vs01.uuid
        cluster_uuid = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, index(data.avi_gslb.gslb_demo.sites.*.name,var.site01_name))
        ratio = 1
        enabled = true
      }
     members {
        ip {
           type = "V4"
           addr = var.gslb_site02_vs01_vip
        }
        vs_uuid = avi_virtualservice.gslb_site02_vs01.uuid
        cluster_uuid = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, index(data.avi_gslb.gslb_demo.sites.*.name,var.site02_name))
        ratio = 1
        enabled = true
      }
      name = "${var.gslb_dns}-pool"
      algorithm = "GSLB_ALGORITHM_ROUND_ROBIN"      
    }
}
### Output ###
output "gslb-site01_site_number" {
  value = index(data.avi_gslb.gslb_demo.sites.*.name, var.site01_name)
  description = "gslb-site01_site_number"
}

output "gslb-site02_site_number" {
  value = index(data.avi_gslb.gslb_demo.sites.*.name, var.site02_name)
  description = "gslb-site02_site_number"
}

output "gslb_site01" {
  value = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, 0)
  description = "gslb_site01"
}

output "gslb_site02" {
  value = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, 1)
  description = "gslb_site02"
}

output "gslb_service" {
  value = avi_gslbservice.terraform_gslb-01.groups
  description = "gslb_service"
}

output "site01_vs01" {
  value = avi_virtualservice.gslb_site01_vs01
  description = "site01_vs01"
}

output "site02_vs01" {
  value = avi_virtualservice.gslb_site02_vs01
  description = "site02_vs01"
}

Let’s apply the plan and then we can take it easy and enjoy the day.

zhangda@zhangda-a01 automation % terraform apply --auto-approve
data.avi_virtualservice.site01_vs01: Refreshing state...
data.avi_tenant.site02_default_tenant: Refreshing state...
data.avi_gslb.gslb_demo: Refreshing state...
data.avi_virtualservice.site02_vs01: Refreshing state...
data.avi_cloud.site02_default_cloud: Refreshing state...
data.avi_tenant.default_tenant: Refreshing state...
data.avi_cloud.default_cloud: Refreshing state...
data.avi_applicationprofile.site02_system_https_profile: Refreshing state...
data.avi_applicationprofile.site01_system_https_profile: Refreshing state...
data.avi_serviceenginegroup.se_group: Refreshing state...
avi_applicationpersistenceprofile.site02_applicationpersistenceprofile: Creating...
avi_healthmonitor.site02_hm_1: Creating...
avi_sslkeyandcertificate.site02_cert1000: Creating...
avi_vsvip.site02_vs01_vip: Creating...
avi_sslprofile.site02_sslprofile: Creating...
avi_applicationpersistenceprofile.terraform_gslbsite_pesistence: Creating...
avi_healthmonitor.site01_hm_1: Creating...
avi_healthmonitor.terraform_gslbsite_hm01: Creating...
avi_vsvip.site01_vs01_vip: Creating...
avi_pkiprofile.terraform_gslb_pki: Creating...
avi_healthmonitor.site02_hm_1: Creation complete after 1s [id=https://10.1.1.170/api/healthmonitor/healthmonitor-f05a117d-93fe-4a35-b442-391bc815ff8d]
avi_sslprofile.site01_sslprofile: Creating...
avi_applicationpersistenceprofile.site02_applicationpersistenceprofile: Creation complete after 1s [id=https://10.1.1.170/api/applicationpersistenceprofile/applicationpersistenceprofile-2cd82839-0b86-4a25-a212-694c3b8b41b9]
avi_applicationpersistenceprofile.site01_applicationpersistenceprofile: Creating...
avi_sslprofile.site02_sslprofile: Creation complete after 2s [id=https://10.1.1.170/api/sslprofile/sslprofile-fa44f77c-dfe0-494a-902b-e724980d139e]
avi_sslkeyandcertificate.site01_cert1000: Creating...
avi_vsvip.site02_vs01_vip: Creation complete after 2s [id=https://10.1.1.170/api/vsvip/vsvip-2391e848-1b49-4383-ab7a-b2829c6c5406]
avi_pool.site02_pool-1: Creating...
avi_sslkeyandcertificate.site02_cert1000: Creation complete after 2s [id=https://10.1.1.170/api/sslkeyandcertificate/sslkeyandcertificate-90baec49-afa0-4ef3-974d-7357fef77e0d]
avi_pool.site02_pool-2: Creating...
avi_applicationpersistenceprofile.site01_applicationpersistenceprofile: Creation complete after 1s [id=https://10.1.1.250/api/applicationpersistenceprofile/applicationpersistenceprofile-f45f0852-2515-4528-ae65-c48a670ca7ac]
avi_pool.site01_pool-2: Creating...
avi_pool.site02_pool-1: Creation complete after 0s [id=https://10.1.1.170/api/pool/pool-859248df-8ea6-4a00-a8ea-976cc31175a9]
avi_server.site02_server_web21: Creating...
avi_applicationpersistenceprofile.terraform_gslbsite_pesistence: Creation complete after 3s [id=https://10.1.1.250/api/applicationpersistenceprofile/applicationpersistenceprofile-cf887192-0d57-4b91-a7cb-37d787f9aeb2]
avi_server.site02_server_web22: Creating...
avi_sslprofile.site01_sslprofile: Creation complete after 2s [id=https://10.1.1.250/api/sslprofile/sslprofile-1464ded3-7a10-4e76-bfc3-0cdb186ff248]
avi_server.site02_server_web22: Creation complete after 0s [id=pool-859248df-8ea6-4a00-a8ea-976cc31175a9:192.168.202.20:80]
avi_healthmonitor.terraform_gslbsite_hm01: Creation complete after 4s [id=https://10.1.1.250/api/healthmonitor/healthmonitor-003f5015-2a2a-4e65-aff3-1071365a8428]
avi_healthmonitor.site01_hm_1: Creation complete after 4s [id=https://10.1.1.250/api/healthmonitor/healthmonitor-dacd7a40-dc90-4e67-932f-34e94a550fb8]
avi_pool.site01_pool-1: Creating...
avi_vsvip.site01_vs01_vip: Creation complete after 4s [id=https://10.1.1.250/api/vsvip/vsvip-16b0ba87-2703-4fb2-abab-9a8b0bf34ae0]
avi_pool.site02_pool-2: Creation complete after 2s [id=https://10.1.1.170/api/pool/pool-9ca21978-59d5-455f-ba78-01fb9c747b43]
avi_pool.site01_pool-2: Creation complete after 2s [id=https://10.1.1.250/api/pool/pool-47d64222-46b7-4402-ae38-afd47f3f5272]
avi_server.site02_server_web24: Creating...
avi_server.site02_server_web25: Creating...
avi_server.site02_server_web23: Creating...
avi_pool.site01_pool-1: Creation complete after 0s [id=https://10.1.1.250/api/pool/pool-e3c37b13-0950-4320-a643-afa5d3177624]
avi_poolgroup.site02_pg-1: Creating...
avi_server.site01_server_web14: Creating...
avi_server.site01_server_web15: Creating...
avi_server.site01_server_web13: Creating...
avi_poolgroup.site02_pg-1: Creation complete after 1s [id=https://10.1.1.170/api/poolgroup/poolgroup-4197b0b4-d486-455e-8583-bff1fc173fb8]
avi_server.site02_server_web23: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.30:80]
avi_poolgroup.site01_pg-1: Creating...
avi_server.site01_server_web11: Creating...
avi_server.site02_server_web21: Creation complete after 3s [id=pool-859248df-8ea6-4a00-a8ea-976cc31175a9:192.168.202.10:80]
avi_server.site01_server_web12: Creating...
avi_server.site02_server_web25: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.50:80]
avi_virtualservice.gslb_site02_vs01: Creating...
avi_server.site01_server_web13: Creation complete after 1s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.30:80]
avi_server.site02_server_web24: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.40:80]
avi_server.site01_server_web14: Creation complete after 1s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.40:80]
avi_sslkeyandcertificate.site01_cert1000: Creation complete after 3s [id=https://10.1.1.250/api/sslkeyandcertificate/sslkeyandcertificate-1963b9c2-7402-4d32-88f7-b8b57d7bf1e5]
avi_virtualservice.gslb_site02_vs01: Creation complete after 0s [id=https://10.1.1.170/api/virtualservice/virtualservice-310ba2ed-f48f-4a0d-a29e-71a2b9dd2567]
avi_poolgroup.site01_pg-1: Creation complete after 0s [id=https://10.1.1.250/api/poolgroup/poolgroup-21284b51-1f7d-41e3-83c3-078800fdea1d]
avi_virtualservice.gslb_site01_vs01: Creating...
avi_server.site01_server_web15: Creation complete after 2s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.50:80]
avi_server.site01_server_web11: Creation complete after 1s [id=pool-e3c37b13-0950-4320-a643-afa5d3177624:192.168.101.10:80]
avi_server.site01_server_web12: Creation complete after 1s [id=pool-e3c37b13-0950-4320-a643-afa5d3177624:192.168.101.20:80]
avi_virtualservice.gslb_site01_vs01: Creation complete after 1s [id=https://10.1.1.250/api/virtualservice/virtualservice-fbecfed3-2397-4df8-9b76-659f50fcc5f8]
avi_pkiprofile.terraform_gslb_pki: Still creating... [10s elapsed]
avi_pkiprofile.terraform_gslb_pki: Creation complete after 11s [id=https://10.1.1.250/api/pkiprofile/pkiprofile-4333ded8-6ec5-43d0-a677-d68a632bc523]
avi_gslbservice.terraform_gslb-01: Creating...
avi_gslbservice.terraform_gslb-01: Creation complete after 2s [id=https://10.1.1.250/api/gslbservice/gslbservice-38f887ef-87ed-446d-a66f-83d42da39289]

Apply complete! Resources: 32 added, 0 changed, 0 destroyed.

This is the end of this blog. Thank you for reading!😀

Setting Up L2VPN in VMC on AWS

In a VMC on AWS SDDC, you can extend your on-premises network to the SDDC via HCX or L2VPN.

In this blog, I will show you how to set up L2VPN in VMC on AWS to extend network VLAN 100 to SDDC.

This blog is for a VMC SDDC running version 1.9, which is backed by NSX-T 2.5. The SDDC end works as the L2VPN server and your on-premises NSX Autonomous Edge works as the L2VPN client.

Prerequisite

  • UDP 500/4500 and ESP (IP protocol 50) must be allowed from the on-premises L2VPN client to the VMC SDDC L2VPN server

Let’s start the setup from the VMC SDDC end.

Section 1: Set up L2VPN at VMC SDDC End

Step 1: Log in to your VMC Console, go to Networking & Security > Network > VPN > Layer 2 and click "ADD VPN TUNNEL".

Select Public IP from the Local IP Address drop-down and input the public IP of the L2VPN's remote end. Because the on-premises NSX Edge is behind a NAT device, the remote private IP is also required; in my case, the remote private IP is 10.1.1.240.

Step 2: Create an extended network.

Go to Network > Segments and add a new segment as below.

  • Segment Name: l2vpn;
  • Connectivity: Extended;
  • VPN Tunnel ID: 100 (please note that the tunnel ID needs to match the on-prem tunnel ID)

After the network segment is created, you will see the below in layer 2 VPN.

Now we can download the Autonomous Edge appliance from the hyperlink highlighted above.

While the file is downloading, we can download the peer code, which is used for authentication between the on-premises L2VPN client and the SDDC L2VPN server.

The downloaded config is similar to below:

[{"transport_tunnel_path":"/infra/tier-0s/vmc/locale-services/default/ipsec-vpn-services/default/sessions/7998a0c0-52b7-11ea-b949-d95049696f90","peer_code":"MCxiNmY2NTg1LHsic2l0ZU5hbWUiOiJMMlZQTiIsInNyY1RhcElwIjoiMTY5LjI1NC4yMC4yIiwiZHxxxxxxxxxxxxxxxxxGgxNCIsImVuY3J5cHRBbmREaWdlc3QiOiJhZXMtZ2NtL3NoYS0yNTYiLCJwc2siOiJOb25lIiwidHVubmVscyI6W3sibG9jYWxJZCI6IjEwLjEuMS4yNDAiLCJwZWVySWQiOiI1Mi4zMy4xMjAuMTk4IiwibG9jYWxWdGlJcCI6IjE2OS4yNTQuMzEuMjU0LzMwIn1dfQ=="}]
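The peer_code value in this downloaded config is base64-encoded JSON describing the tunnel (site name, tap IPs, cipher settings, local/peer IDs). You can inspect it with `base64 -d`; here is a toy example with a made-up payload (never share a real peer code):

```shell
# Decode a made-up peer code; a real one decodes to a larger JSON
# document with tap IPs, encryption settings and local/peer IDs.
echo 'eyJzaXRlTmFtZSI6IkwyVlBOIn0=' | base64 -d
# -> {"siteName":"L2VPN"}
```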

Section 2: Deploy and Setup On-premise NSX autonomous edge

Step 1: Prepare Port Groups.

Create four port groups for the NSX Autonomous Edge:

  • pg-uplink (no vlan tagging)
  • pg-mgmt
  • pg-trunk01 (trunk)
  • pg-ha

We need to change the security settings of the trunk port group pg-trunk01 to accept promiscuous mode and forged transmits; this is required for L2VPN.

Step 2: Deploy NSX Autonomous Edge

We follow the standard process to deploy an OVF template from vCenter. In the "Select Network" step of the "Deploy OVF Template" wizard, map the right port group to each network. Please note that Network 0 is always the management network port for the NSX Autonomous Edge. To keep things simple, I deployed only a single edge here.

The table below shows the interface/network/adapter mapping across the different systems/UIs in my setup.

| Edge CLI | Edge VM vNIC      | OVF Template | Edge GUI   | Purpose    |
|----------|-------------------|--------------|------------|------------|
| eth0     | Network Adapter 1 | Network 0    | Management | Management |
| fp-eth0  | Network Adapter 2 | Network 1    | eth1       | Uplink     |
| fp-eth1  | Network Adapter 3 | Network 2    | eth2       | Trunk      |
| fp-eth2  | Network Adapter 4 | Network 3    | eth3       | HA         |

In the "Customize template" section, provide the passwords for the root, admin and auditor accounts.

Input the hostname (l2vpnclient), management IP (10.1.1.241), gateway (10.1.1.1) and network mask (255.255.255.0).

Input the DNS and NTP settings:

Provide the input for external port:

  • Port: 0,eth1,10.1.1.240,24.
    • VLAN 0 means no VLAN tagging for this port.
    • eth1 means the external port is attached to eth1, which is Network 1/the pg-uplink port group.
    • IP address: 10.1.1.240
    • Prefix length: 24

There is no need to set up an internal port for this autonomous edge deployment, so I left it blank.

Step 3: Autonomous Edge Setup

After the edge is deployed and powered on, you can log in to the edge UI at https://10.1.1.241.

Go to L2VPN and add an L2VPN session: input the Local IP (10.1.1.240), the Remote IP (the SDDC public IP) and the peer code obtained from the downloaded config in Section 1.

Go to Port and add port:

  • Port Name: vlan100
  • Subnet: leave blank
  • VLAN: 100
  • Exit Interface: eth2 (Note: eth2 is connected to the port-group pg-trunk01).

Then go back to L2VPN and attach the newly created port vlan100 to the L2VPN session as below. Please note that the Tunnel ID is 100, the same tunnel ID as on the SDDC end.

After the port is attached successfully, we will see something similar to below.

This is the end of this blog. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Azure AD

The Federated Identity feature of VMware Cloud on AWS can also be integrated with Microsoft Azure AD. In this integration model, the customer's dedicated vIDM tenant works as the SAML Service Provider (SP) and Azure AD works as the Identity Provider (IdP).

Disclaimer:

The Azure AD settings in this blog are for demonstrating the integration with vIDM; they may not be best practice for your environment or meet your business and security requirements.

Note: please complete the vIDM connector installation and the vIDM tenant basic setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before continuing.

The AD domain name for this blog is davidwzhang.com.

Prerequisite

  • The on-prem domain has been added as a custom domain under your default Azure AD.
  • Azure AD PREMIUM (P1 or P2) feature is enabled.
  • On-prem AD users/groups are synced with Azure AD.

Step 1: Add Azure AD as an IdP

First, log in to your Azure Portal at https://portal.azure.com and select Azure Active Directory.

Find “Enterprise Applications” in the list under my default Azure Active Directory (vmconaws) and then create a “New Application”. In the “Add your own app” window, select “Non-gallery application” and input the application name “csp-vidm” and click the “Add” button as below.

In the "Enterprise Application" window that pops up, select "Set up single sign-on" under "Getting Started".

In the pop-up “Select a single sign-on method” window, select SAML.

Next, click the Download hyperlink to download the Azure AD federation metadata XML file or copy the App Federation Metadata URL.

Now we have to pause here. You may have noticed that we still have two pending SAML configuration items: Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL). We will come back to complete these items once we get the required SAML Service Provider metadata from the vIDM tenant.
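For automation-minded readers, the two values an IdP metadata file carries (the entity ID and the single sign-on URL) can also be extracted programmatically instead of by eyeballing the XML. A minimal sketch using Python's standard library against a hypothetical, heavily trimmed metadata document (a real Azure AD metadata file is much larger and digitally signed):

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed IdP federation metadata for illustration only.
METADATA = """\
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/00000000-0000-0000-0000-000000000000/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

NS = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
root = ET.fromstring(METADATA)

# The IdP's entity ID is an attribute of the EntityDescriptor root element.
entity_id = root.attrib["entityID"]

# The HTTP-Redirect single sign-on endpoint lives under IDPSSODescriptor.
sso_url = root.find("md:IDPSSODescriptor/md:SingleSignOnService", NS).attrib["Location"]
```

The same parsing approach works for any SAML 2.0 metadata document, since the namespace and element names are defined by the SAML metadata specification.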

Step 2: Create an Identity Provider in vIDM Tenant

Go to the vIDM tenant administrator console and click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Input the parameters as below:

  • Identity Provider Name: AzureAD
  • SAML AuthN Request Binding: HTTP Redirect

In the SAML Metadata section, copy the content of the downloaded Azure AD federation metadata XML file into the Identity Provider Metadata (URL or XML) text box, then click the “Process IdP Metadata” button.

Next, add two Name ID format mappings as below:

  • urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress = emails
  • urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified = userName

Then from the dropdown of the “Name ID policy in SAML request (Optional)”, select “urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress”
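This dropdown controls the NameIDPolicy element that vIDM includes in the SAML AuthnRequest it sends to the IdP. As an illustration, a hypothetical, minimal fragment (not the full request vIDM generates) can be built like this:

```python
import xml.etree.ElementTree as ET

SAMLP = "urn:oasis:names:tc:SAML:2.0:protocol"
ET.register_namespace("samlp", SAMLP)

# Hypothetical, minimal AuthnRequest showing only the NameIDPolicy that
# the "Name ID policy in SAML request" dropdown controls.
req = ET.Element(f"{{{SAMLP}}}AuthnRequest", {"ID": "_example", "Version": "2.0"})
ET.SubElement(req, f"{{{SAMLP}}}NameIDPolicy", {
    "Format": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
    "AllowCreate": "false",
})
xml_out = ET.tostring(req, encoding="unicode")
```

The Format value in the request tells the IdP which of the mapped Name ID formats it should use when building the subject of its SAML response.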

The rest of the settings are as below:

  • Just-in-Time User Provisioning: Disabled
  • Users: davidwzhang.com
  • Network: ALL Ranges
  • Authentication Methods
    • Authentication Methods: AzureADPassword
    • SAML Context: urn:oasis:names:tc:SAML:2.0:ac:classes:Password 
  • Single Sign-Out Configuration: Disabled

Then click the “Service Provider (SP) Metadata” hyperlink to open the SAML Service Provider metadata in a browser. There you can find the Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) that are required to complete the Azure AD IdP SAML setup.

Now, click the “Save” button to save the IdP setup.

Step 3: Add SAML Information for Azure AD

Go back to Azure Portal and continue the Single Sign-on SAML setup.

Edit the basic SAML configuration.

Copy the Identifier (Entity ID) and Reply URL (Assertion Consumer Service URL) which we got in the last step, to the corresponding text box and save the configuration.

Next, add user groups which are granted access to the VMware Cloud service console to this newly created application. Here two groups (sddc-admins and sddc-operators) are added.

Now we have finished the configuration of Azure AD IdP.

Step 4: Update Authentication Policy

Update the vIDM tenant’s default access policy to include AzureADPassword as the first authentication method.

Now, when you try to log into the VMware Cloud service console with your AD account, your login session will be redirected to the Azure AD as below, which authenticates your session.

Setting Up Federated Identity Management for VMC on AWS – Authentication with ADFS

The Federated Identity feature of VMware Cloud on AWS can be integrated with Microsoft Active Directory Federation Services (ADFS). In this integration model, the customer dedicated vIDM tenant will work as the SAML Service Provider and the ADFS will work as the IdP.

Disclaimer:

The ADFS settings in this blog demo the integration with vIDM; they may not be best practice for your environment or meet your business and security requirements.

Note: please complete the vIDM connector installation and the vIDM tenant basic setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before continuing.

The AD domain name for this blog is davidwzhang.com.

The ADFS is installed on a Windows 2016 standard server. The DNS name for ADFS service is adfs1.davidwzhang.com.

Step 1: Get the ADFS Metadata Information.

The URL to this ADFS metadata is https://adfs1.davidwzhang.com/FederationMetadata/2007-06/FederationMetadata.xml. Click the URL to download the metadata XML file.
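The federation metadata path is a well-known ADFS convention, so the URL can be derived from the ADFS service FQDN. A small sketch (the actual download is left as a comment because it needs network access to the ADFS server):

```python
def adfs_metadata_url(service_fqdn: str) -> str:
    """Build the well-known ADFS federation metadata URL for a service FQDN."""
    return (f"https://{service_fqdn}"
            "/FederationMetadata/2007-06/FederationMetadata.xml")

url = adfs_metadata_url("adfs1.davidwzhang.com")

# To download it (requires connectivity to the ADFS server):
#   import urllib.request
#   xml_bytes = urllib.request.urlopen(url).read()
```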

Step 2: Create an Identity Provider in vIDM Tenant

Go to the vIDM tenant administrator console and click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Input the parameters as below:

  • Identity Provider Name: ADFS01
  • SAML AuthN Request Binding: HTTP Redirect

In the SAML Metadata section, copy the content of the FederationMetadata.xml file into the Identity Provider Metadata (URL or XML) text box, then click the “Process IdP Metadata” button. Three entries will appear in the Name ID Format section.

Now, click the green plus icon to add a new mapping as below:

urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified – userName.

Then from the dropdown of the “Name ID policy in SAML request (Optional)”, select “urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified”

The rest of the settings are as below:

  • Just-in-Time User Provisioning: Disabled
  • Users: davidwzhang.com
  • Network: ALL Ranges
  • Authentication Methods
    • Authentication Methods: ADFS Password
    • SAML Context: urn:oasis:names:tc:SAML:2.0:ac:classes:Password 
  • Single Sign-Out Configuration: Disabled

Then click the “Service Provider (SP) Metadata” hyperlink to download the Service Provider metadata and upload it to the ADFS server.

Step 3: Update Authentication Policy

Update the vIDM tenant’s default access policy to include ADFS as the first authentication method.

Step 4: Configure ADFS Relying Party

Now it is time to set up ADFS. First, start the ADFS Management tool from Server Manager and add a Relying Party Trust as below.

In the welcome screen of “Add Relying Party Trust” wizard, select “Claim aware” and click the Start button.

Select the sp.xml file uploaded in Step 2 to import data about the relying party, then click the Next button.

Input “vmccsa-demo” as the relying party display name, then click the Next button.

Choose “Permit everyone” as the Access Control Policy and click the Next button. Note: choose an access control policy appropriate for your use case.

Click the Next button.

Now, click the Close button to finish the wizard.

Step 5: Create ADFS Relying Party Claim Rules

Select the newly created relying party, right click and select “Edit Claim Issue Policy…”.

Click the “Add Rule…” button to add a new Transform rule.

In the “Choose Rule Type” step of the “Add Transform Claim Rule” wizard, select “Select LDAP Attributes as Claims” and click the Next button.

Input the following parameters:

  • Claim rule name: Get Attribute Email Address
  • Attribute store: Active Directory
  • Mapping of LDAP attributes to outgoing claim types:
    • LDAP Attribute: E-Mail-Addresses
    • Outgoing Claim Type: E-Mail Addresses

Click the Finish button. Then add a second rule, using “Send Claims Using a Custom Rule” as its rule template.

Next, input the name and custom rule:

The content of the “Custom rule” is as below. Please replace “xxxxx.vidmreview.com” with your own vIDM tenant URL.

c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified", Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/spnamequalifier"] = "xxxxx.vidmreview.com");
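Conceptually, this rule takes the incoming e-mail claim issued by the first rule and re-issues it as a SAML NameID with the unspecified format and the vIDM tenant as the SP name qualifier. A rough Python illustration of the transform (not ADFS claim-rule syntax; the claim type URIs are the real ones from the rule above):

```python
EMAIL = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
NAMEID = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"
FORMAT = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified"

def issue_nameid(claims, sp_name_qualifier):
    """Illustrative transform mirroring the ADFS custom rule: for each
    incoming e-mail claim, issue a NameID claim carrying the same value."""
    issued = []
    for c in claims:
        if c["type"] == EMAIL:
            issued.append({
                "type": NAMEID,
                "value": c["value"],
                "properties": {
                    "format": FORMAT,
                    "spnamequalifier": sp_name_qualifier,
                },
            })
    return issued

# Hypothetical input claim, as the first rule would have issued it.
out = issue_nameid(
    [{"type": EMAIL, "value": "jsmith@davidwzhang.com"}],
    "xxxxx.vidmreview.com",
)
```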

Now, when you try to log into the VMware Cloud service console with your AD account, your login session will be redirected to the ADFS as below, which authenticates your session.

Setting Up Federated Identity Management for VMC on AWS – Authentication with Okta IdP

The Federated Identity feature of VMware Cloud on AWS can be integrated with any 3rd party IdP that supports SAML version 2.0. In this integration model, the customer-dedicated vIDM tenant works as the SAML Service Provider. If the 3rd party IdP is set up to perform multi-factor authentication (MFA), the customer will be prompted for MFA when accessing VMware Cloud services. In this blog, the integration with Okta, one of the most popular IdPs, will be demoed.

Disclaimer:

The Okta IdP settings in this blog demo the integration with vIDM; they may not be best practice for your environment or meet your business and security requirements.

Note: please complete the vIDM connector installation and the vIDM tenant basic setup as per my first blog of this series (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/) before continuing.

To add the same users and user groups in the Okta IdP as in the configured vIDM tenant, we need to integrate Okta with the corporate Active Directory (AD). The integration is via Okta’s lightweight agent.

Click the “Directory Integration” in Okta UI.

Click “Add Active Directory”.

The Active Directory integration setup wizard will start and click “Set Up Active Directory”.

Download the agent as required in the below window.

This agent can be installed on Windows Server 2008 R2 or later. The installation of the Okta agent is quite straightforward. Once the agent installation completes, set up the AD integration: in the basic settings window, select the Organizational Units (OUs) that you’d like to sync users or groups from and make sure that “Okta username format” is set to use the User Principal Name (UPN).

In the “Build User Profile” window, select any custom schema which needs to be included in the Okta user profile and click Next.

Click Done to finish the integration setup.

The Okta directory setting window will pop up.

Enable the Just-In-Time provisioning and set the Schedule Import to perform user import every hour. Review and save the setting.

Now go to the Import tab and click “Import Now” to import the users from corporate AD.

As this is the first import of users from the corporate AD, select “Full Import” and click Import.

When the scan is finished, Okta will report the result. Click OK.

Select the users to be imported and confirm the user assignment. Note: the user jsmith@lab.local, who will be used for the final integration testing, is imported here.

Now it is time to set up the SAML IdP in Okta.

Go to Okta Classic UI application tab and click “Add Application”

Click “Create New App”;

Select Web as the Platform and “SAML 2.0” for Sign on method and click Create;

Type in App name, “csp-vidm” is used as an example as the app name and click Next;

Two configuration items in the “Create SAML Integration” window that pops up are mandatory. This information can be copied from the Identity Provider settings within the vIDM tenant.

Go to vIDM tenant administrator console and click “Add Identity Provider” and select “Create Third Party IDP” within the “Identity & Access Management” tab.

Type in the “Identity Provider Name”; the example name here is “Okta01”.

Go to the bottom of this IdP creation window and click “Service Provider (SP) Metadata”.

A new window will pop up as the below:

The entity ID and HTTP-POST location are required for the Okta IdP SAML settings. Copy the entity ID URL into “Audience URI (SP Entity ID)” and the HTTP-POST location into “Single sign on URL” in the Okta “Create SAML Integration” window.

Leave all other configuration items as the default and click Next;

In the Feedback window, indicate that the newly created app is an internal app and click Finish.

A “Sign On settings” window will pop up as below, click “Identity Provider metadata” link.

The Identity Provider metadata appears as an XML file. Select all the content of this XML file and copy it.

Paste the Okta IdP metadata into SAML Metadata and click “Process IdP Metadata” in the vIDM 3rd party identity provider creation window.

The “SAML AuthN Request Binding” and “Name ID format mapping from SAML Response” will be updated as below:

Select the “lab.local” directory so its users can authenticate with this new 3rd party IdP, and leave the Network as the default “ALL RANGES”. Then create a new authentication method called “Okta Auth” with SAML Context “urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtected”. Please note that the name of this newly created authentication method must be different from any existing authentication method.

Then leave all other configuration checkboxes unchecked and click Add.

The 3rd party IdP has been successfully added now.

The last step of the vIDM setup for this Okta integration is updating the default access policy to use the newly defined authentication method “Okta Auth”. Please follow the steps in my previous blog (https://wordpress.com/block-editor/post/davidwzhang.com/308) to perform the required update. The updated default access policy should look similar to below.

Before testing the setup, go to the Okta UI and assign users to the newly defined SAML 2.0 web application “csp-vidm”. Click Assignment.

Click Assign and select “Assign to People”.

In the “Assign csp-vidm to People” window, assign the user John Smith (jsmith@lab.local), which means the user John Smith is allowed to use this SAML 2.0 application.

After the assignment is completed, user John Smith appears under the assignments of the SAML 2.0 application “csp-vidm”.

Instead of assigning individual users, AD groups can be assigned to the SAML application as well.

Finally, everything is ready to test the integration.

Open a new Incognito window in a Chrome browser, type in the vIDM tenant URL and press Enter.

In the login window, type the username jsmith@lab.local and click Next.

The authentication session is redirected to Okta.

Type in Username & Password and click “Sign In”.

Then John Smith (jsmith@lab.local) successfully logs in to the vIDM tenant.

This is the end of this demo. Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Authentication with Active Directory

This blog is the second blog of this Federated Identity Management for VMC on AWS series. Please complete the vIDM connector installation and setup as per my first blog of this series before moving forward. (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-install-and-setup-vidm-connector/)

VMware Cloud on AWS Federated Identity management supports different kinds of authentication methods. This blog will demo the basic method: authentication with the customer corporate Active Directory (AD).

When VMC on AWS customers use AD for authentication, outbound-only connection mode is highly recommended. This mode does not require any inbound firewall ports to be opened: only outbound connectivity from the vIDM connector to the VMware SaaS vIDM tenant on port 443 is required. All user and group sync from your enterprise directory, and all user authentication, are handled by the vIDM connector.
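The only connectivity the connector needs, then, is an outbound TCP session to the tenant on port 443. A quick way to pre-check this from the connector host is a plain socket test; the sketch below demonstrates it against a local stand-in listener (in practice, substitute your own vIDM tenant FQDN and port 443):

```python
import socket

def port_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener, standing in for the vIDM tenant on 443.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
demo_port = srv.getsockname()[1]

reachable = port_reachable("127.0.0.1", demo_port)
srv.close()
```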

To enable outbound-only mode, update the settings of the Built-in Identity Provider. In the user section of the Built-in Identity Provider settings, select the newly created directory “lab.local” and add the newly created connector “vidmcon01.lab.local”.

After the connector is added successfully, select Password (cloud deployment) in the “Connector Authentication Methods” and click Save.

Now it is time to update the access policy to use corporate Active Directory to authenticate VMC users.

Go to Identity & Access Management.

Click “Edit DEFAULT POLICY”; the “Edit Policy” window pops up. Click Next.

Click “ADD POLICY RULE”.

Then the “Add Policy Rule” window will pop up. At this stage, leave the first two configuration items at their defaults: “ALL RANGES” and “ALL Device Types”. In the “and user belong to group(s)” config item, search for and add all 3 synced groups (sddc-admins, sddc-operators and sddc-readonly) to allow the users in these groups to log in.

Add Password (cloud deployment) as the authentication method.

Use Password (Local Directory) as the fallback authentication method and click Save.

There are 3 rules defined in the default access policy. Drag the newly defined rule to the top of the rules table to make sure the new rule is evaluated first when a user tries to log in.

Now the rules table shows as below. Click Next.

Click Save to keep the changes of the default access policy.

You are now ready to test your authentication setup. Open a new Incognito window in your Chrome browser and connect to the vIDM URL. Type in the username (jsmith@lab.local) and click Next.

Type in the Active Directory password for user jsmith@lab.local and click “Sign in”.

Then you can see that jsmith@lab.local has successfully logged in to vIDM!

Thank you very much for reading!

Setting Up Federated Identity Management for VMC on AWS – Install and Setup vIDM Connector

As an enterprise using VMware Cloud Services, you can set up federation with your corporate domain. Federating your corporate domain allows you to use your organization’s single sign-on and identity source to sign in to VMware Cloud Services. You can also set up multi-factor authentication as part of federation access policy settings.

Federated identity management allows you to control authentication to your organization and its services by assigning organization and service roles to your enterprise groups.

Set up federated identity with the VMware Identity Manager service and the VMware Identity Manager connector, which VMware provides at no additional charge. The following are the required high-level steps.

  1. Download the VMware Identity Manager (vIDM) connector and configure it for user attributes and group sync from your corporate identity store. Note that only the VMware Identity Manager Connector for Windows is supported.
  2. Configure your corporate identity provider instance using the VMware Identity Manager service.
  3. Register your corporate domain.

This series of blogs will demonstrate how to complete customer end setup of the Federated Identity Management for VMC on AWS.

  1. Install and set up the vIDM connector, which is required for all 4 use cases;
  2. Use Case 1: authenticate users with on-prem Active Directory (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-authentication-with-active-directory)
  3. Use Case 2: authenticate users with the third party IdP Okta (https://davidwzhang.com/2019/07/31/setting-up-federated-identity-management-for-vmc-on-aws-authentication-with-okta-idp/)
  4. Use Case 3: authenticate users with Active Directory Federation Services (https://davidwzhang.com/2019/12/04/setting-up-federated-identity-management-for-vmc-on-aws-authentication-with-adfs/)
  5. Use Case 4: authenticate users with Azure AD (https://davidwzhang.com/2019/12/11/setting-up-federated-identity-management-for-vmc-on-aws-authentication-with-azure-ad/)

In this first blog of the series, I will show you how to install the vIDM connector (version 19.03) on a Windows 2012 R2 server and how to achieve high availability (HA) for the vIDM connector.

Prerequisite

  • A vIDM SaaS tenant. If you don’t have one, please contact your VMware customer success representative.
  • A Windows server (Windows 2008 R2, Windows 2012, Windows 2012 R2 or Windows 2016).
  • Firewall rules allowing communication from the Windows server to the domain controllers and to the vIDM tenant on port 443.
  • The vIDM connector for Windows installation package. The latest version of the vIDM connector is shown below.

Installation

Log in to the Windows 2012 R2 server and start the installation:

Click Yes in the “User Account Control” window.

Note: the installation package will install the latest major JRE version on the connector Windows server if the JRE has not been installed yet.

The installation process loads the Installation Wizard.

Click Next in the Installation Wizard window.

Accept the License Agreement as below:

Accept the default installation destination folder and click Next.

Click Next and leave the “Are you migrating your Connector” box unchecked.

Accept the suggested hostname and default port for this connector.

For the purposes of VMware Cloud federated identity management, do not run the Connector service as a domain user account. Leave the “Would you like to run the Connector service as a domain user account?” option box unchecked and click Next.

Click Yes in the pop-up window to confirm from the previous step.

Click Install to begin the installation.

After a few minutes, the installation completes successfully.

Click Finish. A new window will pop up, which shows the Connector appliance management URL as below.

Click Yes. The browser opens and redirects to https://vidmconn01.lab.local:8443. Accept the security certificate alert and continue to the website.

In the VMware Identity Manager Appliance Setup wizard, click Continue.

Note: Don’t use Internet Explorer when running the wizard. There is a known bug with IE.

Set passwords for appliance application admin account and click Continue.

Now go to the vIDM tenant, in the tab of Identity & Access Management, click Add Connector.

Type in Connector ID Name and Click “Generate Activation Code”.

Copy the generated activation code and go back to the Connector setup wizard.

Copy the activation code into the Activate Connector Window and click Continue.

Wait for a few minutes then the connector will be activated.

Note: sometimes a 404 error will pop up like the below. In my experience, it is a false alert on Windows 2012 R2; don’t worry about it.

In VMware Identity Manager tenant, the newly installed connector will show up as below:

Setup

Now it is time to set up our connector for user sync.

Step 1: Add Directory

Click Add Directory and select “Add Active Directory over LDAP/IWA”.

Type in the “Directory Name”, select “Active Directory over LDAP” and use this directory for user sync and authentication. For the “Directory Search Attribute”, I prefer UserPrincipalName over sAMAccountName, as the UserPrincipalName option works for all Federated Identity Management use cases, e.g. integration with Active Directory Federation Services and 3rd party IdPs.

Then provide the required Bind User Details and click “Save & Next”

After a few minutes, the domain will pop up. Click Next.

In the Map User Attributes window, accept the setup and click Next

Type in the group DNs and click “Find Groups”.

Click the “0 of 23” under the column “Groups to sync”.

Select the 3 user groups that need to be synced and click Save.

Click Next.

Accept the default setting in the “Select the Users you would like to sync” window and click Next.

In the Review window, click “Sync Directory”

Now it is time to verify the synced users and groups in the vIDM tenant. Go to the “Users & Groups” tab. You can see 10 users and 3 groups synced from the lab.local directory.

You can find the sync log within the configured directory.

Now the basic set up of vIDM connector has been completed.

Connector HA

A single VMware Identity Manager connector is a single point of failure in an enterprise environment. To achieve high availability, install one or more additional connectors; the installation of an extra connector is exactly the same as installing the first one. Here, the second connector is installed on another Windows 2012 R2 server, vidmcon02.lab.local. After the installation is completed, the activation procedure is also the same.

Now 2 connectors will show up in the vIDM tenant.

Go to the Built-in identity provider and add the second connector.

Type in the Bind User Password and click “Add Connector”

Then the second connector is added successfully.

Now there are 2 connectors associated with the Built-in Identity Provider.

Please note that in version 19.03, connector HA applies only to user authentication. Directory and user sync can be enabled on only one connector at a time. In the event of a connector instance failure, authentication is handled automatically by another connector instance; for directory sync, however, you must modify the directory settings in the VMware Identity Manager service to use another connector instance, as below.

Thank you very much for reading!