Zero Code NSX Advanced LB Automation with Terraform

VMware NSX Advanced Load Balancer (Avi Networks) provides multi-cloud load balancing, web application firewall, application analytics and container ingress services across on-premises data centers and any cloud.

Terraform is a widely adopted Infrastructure as Code tool that allows you to define your infrastructure in a simple, declarative language and to deploy and manage it across public cloud providers including AWS, Azure, and Google Cloud. NSX Advanced Load Balancer (a.k.a. Avi) is fully supported by Terraform, and each Avi REST resource is exposed as a resource in Terraform. By using the Terraform Avi provider, we can achieve Infrastructure as Code for your load balancing service.
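The mapping is one-to-one: an Avi REST object such as a health monitor is managed through a Terraform resource of the same name. A minimal sketch (the resource label and values here are illustrative only):

resource "avi_healthmonitor" "example" {
  name = "example-monitor"
  type = "HEALTH_MONITOR_PING"
}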

In this blog, I will show you how easy it is to build an LBaaS service (local load balancing plus global load balancing across two DCs) for a critical (99.99%+ SLA) web application on NSX Advanced Load Balancer via Terraform in minutes.

My testing environment is set up as below:

  • Two DCs: site01 and site02
  • A controller cluster in each site
  • Two GSLB sites configured; site01 is the leader site
  • Terraform v0.12
  • NSX Advanced Load Balancer v18.2.9

The Terraform plan will create the following resources:

  • Five web servers as pool members in each DC
  • Two local load balancing pools in each DC: the first two web servers are members of pool1 and the remaining three are members of pool2
  • A pool group in each DC, which includes the above two pools: pool1 is In Service and pool2 is Out of Service
  • A virtual service in each DC to provide local load balancing
  • An SSL profile in each DC to define how an SSL session is terminated on the NSX Advanced Load Balancer
  • An HTTP cookie-based persistence profile in each DC to offer web session persistence for local load balancing
  • A certificate and key for the web application HTTPS service
  • An HTTP health monitor in each DC to check the health of local load balancing pool members
  • A global load balancing PKI profile
  • A global load balancing health monitor
  • A global load balancing persistence profile
  • A global load balancing service

Also, a few outputs are defined to surface the results of the Terraform plan.

You can access main.tf and variables.tf on GitHub here.
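The plan expects a handful of input variables: controller addresses, VIPs, pool member IPs, the GSLB FQDN, and site names. A minimal sketch of the matching variables.tf, using the variable names referenced in main.tf (the types shown are assumptions; the real definitions live in the GitHub repo):

variable "site1controller" {
  type = string
}

variable "site2controller" {
  type = string
}

variable "gslb_site01_vs01_vip" {
  type = string
}

variable "gslb_site02_vs01_vip" {
  type = string
}

variable "gslb_dns" {
  type = string
}

variable "site01_name" {
  type = string
}

variable "site02_name" {
  type = string
}

# ...plus one string variable per pool member,
# e.g. avi_site01_server_web11..web15 and avi_site02_server_web21..web25.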

# Provider for site01 (the default, un-aliased provider). You can also pin the
# provider version here, e.g. version = "~> 0.1".
provider "avi" {
  avi_username = "admin"
  avi_tenant = "admin"
  avi_password = "password"
  avi_controller = var.site1controller
}

provider "avi" {
  avi_username = "admin"
  avi_tenant = "admin"
  alias = "site02"
  avi_password = "password"
  avi_controller= var.site2controller
}

data "avi_tenant" "default_tenant" {
  name = "admin"
}

data "avi_cloud" "default_cloud" {
  name = "Default-Cloud"
}

data "avi_tenant" "site02_default_tenant" {
  provider = avi.site02
  name = "admin"
}

data "avi_cloud" "site02_default_cloud" {
  provider = avi.site02
  name = "Default-Cloud"
}

data "avi_serviceenginegroup" "se_group" {
  name      = "Default-Group"
  cloud_ref = data.avi_cloud.default_cloud.id
}

data "avi_gslb" "gslb_demo" {
  name = "Default"
}

data "avi_virtualservice" "site01_vs01" {
  name = "gslb_site01_vs01"
}

data "avi_virtualservice" "site02_vs01" {
  name = "gslb_site02_vs01"
}

data "avi_applicationprofile" "site01_system_https_profile" {
  name = "System-Secure-HTTP"
}

data "avi_applicationprofile" "site02_system_https_profile" {
  provider = avi.site02
  name = "System-Secure-HTTP"
}

### Start of Site01 setup ###
resource "avi_sslprofile" "site01_sslprofile" {
    name = "site01_sslprofile"
    ssl_session_timeout = 86400
    tenant_ref = data.avi_tenant.default_tenant.id
    accepted_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
    prefer_client_cipher_ordering = false
    enable_ssl_session_reuse = true
    accepted_versions {
      type = "SSL_VERSION_TLS1_1"
    }
    accepted_versions {
      type = "SSL_VERSION_TLS1_2"
    }
    cipher_enums = [
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
    send_close_notify = true
    type = "SSL_PROFILE_TYPE_APPLICATION"
    enable_early_data = false
    ssl_rating {
      compatibility_rating = "SSL_SCORE_EXCELLENT"
      security_score = 100.0
      performance_rating = "SSL_SCORE_EXCELLENT"
    }
  }

resource "avi_applicationpersistenceprofile" "site01_applicationpersistenceprofile" {
  name  = "site01_app-pers-profile"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = false
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_vsvip" "site01_vs01_vip" {
  name = "site01_vs01_vip"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = var.gslb_site01_vs01_vip
    }
  }
}

resource "avi_sslkeyandcertificate" "site01_cert1000" {
    name = "site01_cert1000"
    tenant_ref = data.avi_tenant.default_tenant.id
    certificate {
        certificate = file("${path.module}/www.sddc.vmconaws.link.crt")
        }
    key = file("${path.module}/www.sddc.vmconaws.link.key")
    type= "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
}

resource "avi_virtualservice" "gslb_site01_vs01" {
  name = "gslb_site01_vs01"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  pool_group_ref = avi_poolgroup.site01_pg-1.id
  vsvip_ref  = avi_vsvip.site01_vs01_vip.id
  application_profile_ref = data.avi_applicationprofile.site01_system_https_profile.id
  services {
        port = 443
        enable_ssl = true
        port_range_end = 443
        }
  cloud_type                   = "CLOUD_VCENTER"
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.site01_cert1000.id]
  ssl_profile_ref = avi_sslprofile.site01_sslprofile.id
}

resource "avi_healthmonitor" "site01_hm_1" {
  name = "site01_monitor"
  type = "HEALTH_MONITOR_HTTP"
  tenant_ref = data.avi_tenant.default_tenant.id
  receive_timeout = "4"
  is_federated = false
  failed_checks = "3"
  send_interval = "10"
  http_monitor {
        exact_http_request = false
        http_request = "HEAD / HTTP/1.0"
        http_response_code = ["HTTP_2XX","HTTP_3XX","HTTP_4XX"]
        }
  successful_checks = "3"
}

resource "avi_pool" "site01_pool-1" {
  name = "site01_pool-1"
  health_monitor_refs = [avi_healthmonitor.site01_hm_1.id]
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref  = data.avi_cloud.default_cloud.id
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site01_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  lb_algorithm = "LB_ALGORITHM_LEAST_CONNECTIONS"
}

resource "avi_pool" "site01_pool-2" {
  name = "site01_pool-2"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref = data.avi_cloud.default_cloud.id
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site01_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  ignore_servers = true
}

resource "avi_poolgroup" "site01_pg-1" {
  name = "site01_pg-1"
  tenant_ref = data.avi_tenant.default_tenant.id
  cloud_ref = data.avi_cloud.default_cloud.id
  members {
    pool_ref = avi_pool.site01_pool-1.id
    ratio = 100
    deployment_state = "IN_SERVICE"
  }
  members {
    pool_ref = avi_pool.site01_pool-2.id
    ratio = 0
    deployment_state = "OUT_OF_SERVICE"
  }
}

resource "avi_server" "site01_server_web11" {
  ip       = var.avi_site01_server_web11
  port     = "80"
  pool_ref = avi_pool.site01_pool-1.id
  hostname = "server_web11"
}

resource "avi_server" "site01_server_web12" {
  ip       = var.avi_site01_server_web12
  port     = "80"
  pool_ref = avi_pool.site01_pool-1.id
  hostname = "server_web12"
}

resource "avi_server" "site01_server_web13" {
  ip       = var.avi_site01_server_web13
  port     = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web13"
}

resource "avi_server" "site01_server_web14" {
  ip       = var.avi_site01_server_web14
  port     = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web14"
}

resource "avi_server" "site01_server_web15" {
  ip = var.avi_site01_server_web15
  port = "80"
  pool_ref = avi_pool.site01_pool-2.id
  hostname = "server_web15"
}

### End of Site01 setup ###
### Start of Site02 setup ###
resource "avi_sslprofile" "site02_sslprofile" {
    provider = avi.site02
    name = "site02_sslprofile"
    ssl_session_timeout = 86400
    tenant_ref = data.avi_tenant.site02_default_tenant.id
    accepted_ciphers = "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA384:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA"
    prefer_client_cipher_ordering = false
    enable_ssl_session_reuse = true
    accepted_versions {
      type = "SSL_VERSION_TLS1_1"
    }
    accepted_versions {
      type = "SSL_VERSION_TLS1_2"
    }
    cipher_enums = [
      "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
      "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
      "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384"]
    send_close_notify = true
    type = "SSL_PROFILE_TYPE_APPLICATION"
    enable_early_data = false
    ssl_rating {
      compatibility_rating = "SSL_SCORE_EXCELLENT"
      security_score = 100.0
      performance_rating = "SSL_SCORE_EXCELLENT"
    }
  }


resource "avi_applicationpersistenceprofile" "site02_applicationpersistenceprofile" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name  = "site02_app-pers-profile"
  is_federated = false
  persistence_type = "PERSISTENCE_TYPE_HTTP_COOKIE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_vsvip" "site02_vs01_vip" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_vs01_vip"
  vip {
    vip_id = "0"
    ip_address {
      type = "V4"
      addr = var.gslb_site02_vs01_vip
    }
  }
}

resource "avi_sslkeyandcertificate" "site02_cert1000" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_cert1000"
  certificate {
      certificate = file("${path.module}/www.sddc.vmconaws.link.crt")
      }
  key = file("${path.module}/www.sddc.vmconaws.link.key")
  type= "SSL_CERTIFICATE_TYPE_VIRTUALSERVICE"
}

resource "avi_virtualservice" "gslb_site02_vs01" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "gslb_site02_vs01"
  pool_group_ref = avi_poolgroup.site02_pg-1.id
  vsvip_ref  = avi_vsvip.site02_vs01_vip.id
  application_profile_ref = data.avi_applicationprofile.site02_system_https_profile.id
  services {
        port = 443
        enable_ssl = true
        port_range_end = 443
        }
  cloud_type = "CLOUD_VCENTER"
  ssl_key_and_certificate_refs = [avi_sslkeyandcertificate.site02_cert1000.id]
  ssl_profile_ref = avi_sslprofile.site02_sslprofile.id
}

resource "avi_healthmonitor" "site02_hm_1" {
  provider = avi.site02
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_monitor"
  type  = "HEALTH_MONITOR_HTTP"
  receive_timeout = "4"
  is_federated = false
  failed_checks = "3"
  send_interval = "10"
  http_monitor {
        exact_http_request = false
        http_request = "HEAD / HTTP/1.0"
        http_response_code = ["HTTP_2XX","HTTP_3XX","HTTP_4XX"]
        }
  successful_checks = "3"
}

resource "avi_pool" "site02_pool-1" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pool-1"
  health_monitor_refs = [avi_healthmonitor.site02_hm_1.id]
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site02_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  lb_algorithm = "LB_ALGORITHM_LEAST_CONNECTIONS"
}

resource "avi_pool" "site02_pool-2" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pool-2"
  application_persistence_profile_ref = avi_applicationpersistenceprofile.site02_applicationpersistenceprofile.id
  fail_action {
    type = "FAIL_ACTION_CLOSE_CONN"
  }
  ignore_servers = true
}

resource "avi_poolgroup" "site02_pg-1" {
  provider = avi.site02
  cloud_ref = data.avi_cloud.site02_default_cloud.id
  tenant_ref = data.avi_tenant.site02_default_tenant.id
  name = "site02_pg-1"
  members {
    pool_ref = avi_pool.site02_pool-1.id
    ratio = 100
    deployment_state = "IN_SERVICE"
  }
  members {
    pool_ref = avi_pool.site02_pool-2.id
    ratio = 0
    deployment_state = "OUT_OF_SERVICE"
  }
}

resource "avi_server" "site02_server_web21" {
  provider = avi.site02
  ip = var.avi_site02_server_web21
  port = "80"
  pool_ref = avi_pool.site02_pool-1.id
  hostname = "server_web21"
}

resource "avi_server" "site02_server_web22" {
  provider = avi.site02
  ip = var.avi_site02_server_web22
  port = "80"
  pool_ref = avi_pool.site02_pool-1.id
  hostname = "server_web22"
}


resource "avi_server" "site02_server_web23" {
  provider = avi.site02
  ip = var.avi_site02_server_web23
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web23"
}

resource "avi_server" "site02_server_web24" {
  provider = avi.site02
  ip = var.avi_site02_server_web24
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web24"
}

resource "avi_server" "site02_server_web25" {
  provider = avi.site02
  ip = var.avi_site02_server_web25
  port = "80"
  pool_ref = avi_pool.site02_pool-2.id
  hostname = "server_web25"
}

### End of Site02 setup ###

### Start of GSLB setup ###

# Only one federated PKI profile is required; federated objects are replicated across GSLB sites
resource "avi_pkiprofile" "terraform_gslb_pki" {
    name = "terraform_gslb_pki"
    tenant_ref = data.avi_tenant.default_tenant.id
    crl_check = false
    is_federated = true
    ignore_peer_chain = false
    validate_only_leaf_crl = true
    ca_certs {
      certificate = file("${path.module}/ca-bundle.crt")
    }
}

resource "avi_applicationpersistenceprofile" "terraform_gslbsite_pesistence" {
  name = "terraform_gslbsite_pesistence"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = true
  persistence_type = "PERSISTENCE_TYPE_GSLB_SITE"
  http_cookie_persistence_profile {
    cookie_name = "sddc01-vs01-cookie01"
    always_send_cookie = false
    timeout = 15
  }
}

resource "avi_healthmonitor" "terraform_gslbsite_hm01" {
  name = "terraform_gslbsite_hm01"
  type = "HEALTH_MONITOR_PING"
  tenant_ref = data.avi_tenant.default_tenant.id
  is_federated = true
  failed_checks = "3"
  send_interval = "10"
  successful_checks = "3"
}

resource "avi_gslbservice" "terraform_gslb-01" {
  name = "terraform_gslb-01"
  tenant_ref = data.avi_tenant.default_tenant.id
  domain_names = [var.gslb_dns]
  depends_on = [
    avi_pkiprofile.terraform_gslb_pki
  ]
  wildcard_match = false
  application_persistence_profile_ref = avi_applicationpersistenceprofile.terraform_gslbsite_pesistence.id
  health_monitor_refs = [avi_healthmonitor.terraform_gslbsite_hm01.id]
  site_persistence_enabled = true
  is_federated = false
  use_edns_client_subnet= true
  enabled = true
  groups { 
      priority = 10
      consistent_hash_mask=31
      consistent_hash_mask6=31
      members {
        ip {
           type = "V4"
           addr = var.gslb_site01_vs01_vip
        }
        vs_uuid = avi_virtualservice.gslb_site01_vs01.uuid
        cluster_uuid = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, index(data.avi_gslb.gslb_demo.sites.*.name,var.site01_name))
        ratio = 1
        enabled = true
      }
     members {
        ip {
           type = "V4"
           addr = var.gslb_site02_vs01_vip
        }
        vs_uuid = avi_virtualservice.gslb_site02_vs01.uuid
        cluster_uuid = element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid, index(data.avi_gslb.gslb_demo.sites.*.name,var.site02_name))
        ratio = 1
        enabled = true
      }
      name = "${var.gslb_dns}-pool"
      algorithm = "GSLB_ALGORITHM_ROUND_ROBIN"      
    }
}
### Output ###
output "gslb-site01_site_number" {
  value = "${index(data.avi_gslb.gslb_demo.sites.*.name,var.site01_name)}"
  description = "gslb-site01_site_number"
}

output "gslb-site02_site_number" {
  value = "${index(data.avi_gslb.gslb_demo.sites.*.name,var.site02_name)}"
  description = "gslb-site02_site_number"
}

output "gslb_site01" {
  value = "${element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid,0)}"
  description = "gslb_site01"
}

output "gslb_site02" {
  value = "${element(data.avi_gslb.gslb_demo.sites.*.cluster_uuid,1)}"
  description = "gslb_site02"
}

output "gslb_service" {
  value = avi_gslbservice.terraform_gslb-01.groups
  description = "gslb_service"
}

output "site01_vs01" {
  value = avi_virtualservice.gslb_site01_vs01
  description = "site01_vs01"
}

output "site02_vs01" {
  value = avi_virtualservice.gslb_site02_vs01
  description = "site02_vs01"
}

Let’s apply the plan and then we can take it easy and enjoy the day.
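If this is the first run in the working directory, initialize it and review the plan before applying (standard Terraform workflow; plan output omitted here):

terraform init
terraform plan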

zhangda@zhangda-a01 automation % terraform apply --auto-approve
data.avi_virtualservice.site01_vs01: Refreshing state...
data.avi_tenant.site02_default_tenant: Refreshing state...
data.avi_gslb.gslb_demo: Refreshing state...
data.avi_virtualservice.site02_vs01: Refreshing state...
data.avi_cloud.site02_default_cloud: Refreshing state...
data.avi_tenant.default_tenant: Refreshing state...
data.avi_cloud.default_cloud: Refreshing state...
data.avi_applicationprofile.site02_system_https_profile: Refreshing state...
data.avi_applicationprofile.site01_system_https_profile: Refreshing state...
data.avi_serviceenginegroup.se_group: Refreshing state...
avi_applicationpersistenceprofile.site02_applicationpersistenceprofile: Creating...
avi_healthmonitor.site02_hm_1: Creating...
avi_sslkeyandcertificate.site02_cert1000: Creating...
avi_vsvip.site02_vs01_vip: Creating...
avi_sslprofile.site02_sslprofile: Creating...
avi_applicationpersistenceprofile.terraform_gslbsite_pesistence: Creating...
avi_healthmonitor.site01_hm_1: Creating...
avi_healthmonitor.terraform_gslbsite_hm01: Creating...
avi_vsvip.site01_vs01_vip: Creating...
avi_pkiprofile.terraform_gslb_pki: Creating...
avi_healthmonitor.site02_hm_1: Creation complete after 1s [id=https://10.1.1.170/api/healthmonitor/healthmonitor-f05a117d-93fe-4a35-b442-391bc815ff8d]
avi_sslprofile.site01_sslprofile: Creating...
avi_applicationpersistenceprofile.site02_applicationpersistenceprofile: Creation complete after 1s [id=https://10.1.1.170/api/applicationpersistenceprofile/applicationpersistenceprofile-2cd82839-0b86-4a25-a212-694c3b8b41b9]
avi_applicationpersistenceprofile.site01_applicationpersistenceprofile: Creating...
avi_sslprofile.site02_sslprofile: Creation complete after 2s [id=https://10.1.1.170/api/sslprofile/sslprofile-fa44f77c-dfe0-494a-902b-e724980d139e]
avi_sslkeyandcertificate.site01_cert1000: Creating...
avi_vsvip.site02_vs01_vip: Creation complete after 2s [id=https://10.1.1.170/api/vsvip/vsvip-2391e848-1b49-4383-ab7a-b2829c6c5406]
avi_pool.site02_pool-1: Creating...
avi_sslkeyandcertificate.site02_cert1000: Creation complete after 2s [id=https://10.1.1.170/api/sslkeyandcertificate/sslkeyandcertificate-90baec49-afa0-4ef3-974d-7357fef77e0d]
avi_pool.site02_pool-2: Creating...
avi_applicationpersistenceprofile.site01_applicationpersistenceprofile: Creation complete after 1s [id=https://10.1.1.250/api/applicationpersistenceprofile/applicationpersistenceprofile-f45f0852-2515-4528-ae65-c48a670ca7ac]
avi_pool.site01_pool-2: Creating...
avi_pool.site02_pool-1: Creation complete after 0s [id=https://10.1.1.170/api/pool/pool-859248df-8ea6-4a00-a8ea-976cc31175a9]
avi_server.site02_server_web21: Creating...
avi_applicationpersistenceprofile.terraform_gslbsite_pesistence: Creation complete after 3s [id=https://10.1.1.250/api/applicationpersistenceprofile/applicationpersistenceprofile-cf887192-0d57-4b91-a7cb-37d787f9aeb2]
avi_server.site02_server_web22: Creating...
avi_sslprofile.site01_sslprofile: Creation complete after 2s [id=https://10.1.1.250/api/sslprofile/sslprofile-1464ded3-7a10-4e76-bfc3-0cdb186ff248]
avi_server.site02_server_web22: Creation complete after 0s [id=pool-859248df-8ea6-4a00-a8ea-976cc31175a9:192.168.202.20:80]
avi_healthmonitor.terraform_gslbsite_hm01: Creation complete after 4s [id=https://10.1.1.250/api/healthmonitor/healthmonitor-003f5015-2a2a-4e65-aff3-1071365a8428]
avi_healthmonitor.site01_hm_1: Creation complete after 4s [id=https://10.1.1.250/api/healthmonitor/healthmonitor-dacd7a40-dc90-4e67-932f-34e94a550fb8]
avi_pool.site01_pool-1: Creating...
avi_vsvip.site01_vs01_vip: Creation complete after 4s [id=https://10.1.1.250/api/vsvip/vsvip-16b0ba87-2703-4fb2-abab-9a8b0bf34ae0]
avi_pool.site02_pool-2: Creation complete after 2s [id=https://10.1.1.170/api/pool/pool-9ca21978-59d5-455f-ba78-01fb9c747b43]
avi_pool.site01_pool-2: Creation complete after 2s [id=https://10.1.1.250/api/pool/pool-47d64222-46b7-4402-ae38-afd47f3f5272]
avi_server.site02_server_web24: Creating...
avi_server.site02_server_web25: Creating...
avi_server.site02_server_web23: Creating...
avi_pool.site01_pool-1: Creation complete after 0s [id=https://10.1.1.250/api/pool/pool-e3c37b13-0950-4320-a643-afa5d3177624]
avi_poolgroup.site02_pg-1: Creating...
avi_server.site01_server_web14: Creating...
avi_server.site01_server_web15: Creating...
avi_server.site01_server_web13: Creating...
avi_poolgroup.site02_pg-1: Creation complete after 1s [id=https://10.1.1.170/api/poolgroup/poolgroup-4197b0b4-d486-455e-8583-bff1fc173fb8]
avi_server.site02_server_web23: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.30:80]
avi_poolgroup.site01_pg-1: Creating...
avi_server.site01_server_web11: Creating...
avi_server.site02_server_web21: Creation complete after 3s [id=pool-859248df-8ea6-4a00-a8ea-976cc31175a9:192.168.202.10:80]
avi_server.site01_server_web12: Creating...
avi_server.site02_server_web25: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.50:80]
avi_virtualservice.gslb_site02_vs01: Creating...
avi_server.site01_server_web13: Creation complete after 1s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.30:80]
avi_server.site02_server_web24: Creation complete after 1s [id=pool-9ca21978-59d5-455f-ba78-01fb9c747b43:192.168.202.40:80]
avi_server.site01_server_web14: Creation complete after 1s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.40:80]
avi_sslkeyandcertificate.site01_cert1000: Creation complete after 3s [id=https://10.1.1.250/api/sslkeyandcertificate/sslkeyandcertificate-1963b9c2-7402-4d32-88f7-b8b57d7bf1e5]
avi_virtualservice.gslb_site02_vs01: Creation complete after 0s [id=https://10.1.1.170/api/virtualservice/virtualservice-310ba2ed-f48f-4a0d-a29e-71a2b9dd2567]
avi_poolgroup.site01_pg-1: Creation complete after 0s [id=https://10.1.1.250/api/poolgroup/poolgroup-21284b51-1f7d-41e3-83c3-078800fdea1d]
avi_virtualservice.gslb_site01_vs01: Creating...
avi_server.site01_server_web15: Creation complete after 2s [id=pool-47d64222-46b7-4402-ae38-afd47f3f5272:192.168.101.50:80]
avi_server.site01_server_web11: Creation complete after 1s [id=pool-e3c37b13-0950-4320-a643-afa5d3177624:192.168.101.10:80]
avi_server.site01_server_web12: Creation complete after 1s [id=pool-e3c37b13-0950-4320-a643-afa5d3177624:192.168.101.20:80]
avi_virtualservice.gslb_site01_vs01: Creation complete after 1s [id=https://10.1.1.250/api/virtualservice/virtualservice-fbecfed3-2397-4df8-9b76-659f50fcc5f8]
avi_pkiprofile.terraform_gslb_pki: Still creating... [10s elapsed]
avi_pkiprofile.terraform_gslb_pki: Creation complete after 11s [id=https://10.1.1.250/api/pkiprofile/pkiprofile-4333ded8-6ec5-43d0-a677-d68a632bc523]
avi_gslbservice.terraform_gslb-01: Creating...
avi_gslbservice.terraform_gslb-01: Creation complete after 2s [id=https://10.1.1.250/api/gslbservice/gslbservice-38f887ef-87ed-446d-a66f-83d42da39289]

Apply complete! Resources: 32 added, 0 changed, 0 destroyed.

This is the end of this blog. Thank you for reading!😀

Automate NSX-T Build with Terraform

Terraform is a widely adopted Infrastructure as Code tool that allows you to define your infrastructure in a simple, declarative language and to deploy and manage it across public cloud providers including AWS, Azure, Google Cloud, and IBM Cloud, as well as other infrastructure providers like VMware NSX-T and F5 BIG-IP.

In this blog, I will show you how to leverage the Terraform NSX-T provider to define an NSX-T tenant environment in minutes.

To build the new NSX-T environment, I am going to:

  1. Create a new Tier1 router named tier1_router;
  2. Create three logical switches under the newly created Tier1 router for the web/app/db security zones;
  3. Connect the newly created Tier1 router to the existing Tier0 router;
  4. Create a new network service group including SSH and HTTPS;
  5. Create a new firewall section and add a firewall rule to allow outbound SSH/HTTPS traffic from any workload in the web logical switch to any workload in the app logical switch.

Firstly, I define a Terraform module as below. Note: a Terraform module is normally used to define reusable components; for example, the module defined here can be reused to complete both non-prod and prod environment builds when you provide different inputs.
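The module consumes a few input variables for names and gateway addresses. A minimal sketch of the module's variables.tf, using the variable names referenced in the module body (no defaults; values are supplied by the caller):

variable "nsxt_logical_tier1_router_name" {}
variable "logicalswitch1_name" {}
variable "logicalswitch2_name" {}
variable "logicalswitch3_name" {}
variable "logicalswitch1_gw" {}
variable "logicalswitch2_gw" {}
variable "logicalswitch3_gw" {}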

/*
provider "nsxt" {
  allow_unverified_ssl = true
  max_retries = 10
  retry_min_delay = 500
  retry_max_delay = 5000
  retry_on_status_codes = [429]
}
*/

data "nsxt_transport_zone" "overlay_transport_zone" {
  display_name = "tz-overlay"
}

data "nsxt_logical_tier0_router" "tier0_router" {
  display_name = "t0"
}

data "nsxt_edge_cluster" "edge_cluster" {
  display_name = "edge-cluster"
}

resource "nsxt_logical_router_link_port_on_tier0" "tier0_port_to_tier1" {
  description = "TIER0_PORT1 provisioned by Terraform"
  display_name = "tier0_port_to_tier1"
  logical_router_id = "${data.nsxt_logical_tier0_router.tier0_router.id}"
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_tier1_router" "tier1_router" {
  description = "RTR1 provisioned by Terraform"
  display_name = "${var.nsxt_logical_tier1_router_name}"
  #failover_mode = "PREEMPTIVE"
  edge_cluster_id = "${data.nsxt_edge_cluster.edge_cluster.id}"
  enable_router_advertisement = true
  advertise_connected_routes = false
  advertise_static_routes = true
  advertise_nat_routes = true
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_link_port_on_tier1" "tier1_port_to_tier0" {
  description  = "TIER1_PORT1 provisioned by Terraform"
  display_name = "tier1_port_to_tier0"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_router_port_id = "${nsxt_logical_router_link_port_on_tier0.tier0_port_to_tier1.id}"
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_switch" "LS-terraform-web" {
  admin_state = "UP"
  description = "LogicalSwitch provisioned by Terraform"
  display_name = "${var.logicalswitch1_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_transport_zone.id}"
  replication_mode  = "MTEP"
  tag {
    scope = "ibm"
    tag = "blue"
  }
}

resource "nsxt_logical_switch" "LS-terraform-app" {
  admin_state = "UP"
  description = "LogicalSwitch provisioned by Terraform"
  display_name = "${var.logicalswitch2_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_transport_zone.id}"
  replication_mode  = "MTEP"
  tag {
    scope = "ibm"
    tag = "blue"
  }
}


resource "nsxt_logical_switch" "LS-terraform-db" {
  admin_state = "UP"
  description = "LogicalSwitch provisioned by Terraform"
  display_name = "${var.logicalswitch3_name}"
  transport_zone_id = "${data.nsxt_transport_zone.overlay_transport_zone.id}"
  replication_mode  = "MTEP"
  tag {
    scope = "ibm"
    tag = "blue"
  }
}

resource "nsxt_logical_port" "lp-terraform-web" {
  admin_state = "UP"
  description = "lp provisioned by Terraform"
  display_name = "lp-terraform-web"
  logical_switch_id = "${nsxt_logical_switch.LS-terraform-web.id}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_port" "lp-terraform-app" {
  admin_state = "UP"
  description = "lp provisioned by Terraform"
  display_name = "lp-terraform-app"
  logical_switch_id = "${nsxt_logical_switch.LS-terraform-app.id}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_port" "lp-terraform-db" {
  admin_state = "UP"
  description = "lp provisioned by Terraform"
  display_name = "lp-terraform-db"
  logical_switch_id = "${nsxt_logical_switch.LS-terraform-db.id}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_downlink_port" "lif-terraform-web" {
  description = "lif provisioned by Terraform"
  display_name = "lif-terraform-web"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.lp-terraform-web.id}"
  ip_address = "${var.logicalswitch1_gw}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_downlink_port" "lif-terraform-app" {
  description = "lif provisioned by Terraform"
  display_name = "lif-terraform-app"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.lp-terraform-app.id}"
  ip_address = "${var.logicalswitch2_gw}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_logical_router_downlink_port" "lif-terraform-db" {
  description = "lif provisioned by Terraform"
  display_name = "lif-terraform-db"
  logical_router_id = "${nsxt_logical_tier1_router.tier1_router.id}"
  linked_logical_switch_port_id = "${nsxt_logical_port.lp-terraform-db.id}"
  ip_address = "${var.logicalswitch3_gw}"

  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_l4_port_set_ns_service" "ns_service_tcp_443_22_l4" {
  description = "Service provisioned by Terraform"
  display_name = "web_to_app"
  protocol = "TCP"
  destination_ports = ["443", "22"]
  tag {
    scope = "ibm"
    tag   = "blue"
  }
}

resource "nsxt_firewall_section" "terraform" {
  description = "FS provisioned by Terraform"
  display_name = "Web-App"
  tag {
    scope = "ibm"
    tag = "blue"
  }
  
  applied_to {
    target_type = "LogicalSwitch"
    target_id = "${nsxt_logical_switch.LS-terraform-web.id}"
  }

  section_type = "LAYER3"
  stateful = true

  rule {
    display_name = "out_rule"
    description  = "Out going rule"
    action = "ALLOW"
    logged = true
    ip_protocol = "IPV4"
    direction = "OUT"

    source {
      target_type = "LogicalSwitch"
      target_id = "${nsxt_logical_switch.LS-terraform-web.id}"
    }

    destination {
      target_type = "LogicalSwitch"
      target_id = "${nsxt_logical_switch.LS-terraform-app.id}"
    }
    service {
      target_type = "NSService"
      target_id = "${nsxt_l4_port_set_ns_service.ns_service_tcp_443_22_l4.id}"
    }
    applied_to {
      target_type = "LogicalSwitch"
      target_id = "${nsxt_logical_switch.LS-terraform-web.id}"
    }
  }
}  

output "edge-cluster-id" {
  value = "${data.nsxt_edge_cluster.edge_cluster.id}"
}

output "edge-cluster-deployment_type" {
  value = "${data.nsxt_edge_cluster.edge_cluster.deployment_type}"
}

output "tier0-router-port-id" {
  value = "${nsxt_logical_router_link_port_on_tier0.tier0_port_to_tier1.id}"
}

Then I use the following to call this newly created module:

provider "nsxt" {
  allow_unverified_ssl = true
  max_retries = 10
  retry_min_delay = 500
  retry_max_delay = 5000
  retry_on_status_codes = [429]
}

module "nsxtbuild" {
  source = "/root/terraform/modules/nsxtbuild"
  nsxt_logical_tier1_router_name = "tier1-npr-vr"
  logicalswitch1_name = "npr-web"
  logicalswitch2_name = "npr-app"
  logicalswitch3_name = "npr-db"
  logicalswitch1_gw = "192.168.80.1/24"
  logicalswitch2_gw = "192.168.81.1/24"
  logicalswitch3_gw = "192.168.82.1/24"
}
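Note that outputs declared inside a module are not shown automatically when you apply the root configuration; to surface them you can re-export them at the root level (a sketch, assuming the module name above):

output "edge-cluster-id" {
  value = "${module.nsxtbuild.edge-cluster-id}"
}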

After running "terraform apply", you can see in NSX Manager that the required environment has been built successfully:

  • Logical Switches
  • T1 vRouter
  • Service
  • DFW Rules

Use Terraform to Set Up AWS Auto-Scaling Group with ELB

An AWS auto-scaling group helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. By using auto-scaling policies, an Auto Scaling group can launch or terminate instances as demand on your application increases or decreases.

Today, I will show you how to use a Terraform template to set up an AWS auto-scaling group with an ELB. My Terraform version is terraform_0.8.8.

My Terraform template includes:

  1. Create an aws_launch_configuration (webcluster) which defines how each EC2 instance in the auto-scaling group will be built;
  2. Create an AWS auto-scaling group (scalegroup);
  3. Create the first AWS auto-scaling policy (autopolicy) for auto-scaling group scale-out;
  4. Create the second AWS auto-scaling policy (autopolicy-down) for auto-scaling group scale-in;
  5. Create the first AWS CloudWatch alarm (cpualarm) to trigger the auto-scaling group to scale out;
  6. Create the second AWS CloudWatch alarm (cpualarm-down) to trigger the auto-scaling group to scale in;
  7. Create a security group (websg) to allow HTTP and management SSH connectivity;
  8. Create an Elastic Load Balancer with cookie session persistence and place it in front of the auto-scaling group (scalegroup). The ELB will health-check all EC2 instances in the auto-scaling group; if any EC2 instance fails the ELB health check, it won't receive any incoming traffic. If the existing EC2 instances are overloaded (in our case, CPU utilisation over 60%), the auto-scaling group will create more EC2 instances to handle the spike; conversely, it will scale in when EC2 instances are idle (CPU utilisation less than 10%);
  9. Create an SSH key pair and use it for the AWS auto-scaling group (scalegroup);
  10. Create an output of the ELB DNS name.

Template

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_launch_configuration" "webcluster" {
  image_id = "ami-4ba3a328"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.websg.id}"]
  key_name = "${aws_key_pair.myawskeypair.key_name}"
  user_data = <<-EOF
#!/bin/bash
echo "hello, I am WebServer" >index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

data "aws_availability_zones" "allzones" {}

resource "aws_autoscaling_group" "scalegroup" {
  launch_configuration = "${aws_launch_configuration.webcluster.name}"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  min_size = 1
  max_size = 4
  enabled_metrics = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupTotalInstances"]
  metrics_granularity = "1Minute"
  load_balancers = ["${aws_elb.elb1.id}"]
  health_check_type = "ELB"
  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_policy" "autopolicy" {
  name = "terraform-autoplicy"
  scaling_adjustment = 1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm" {
  alarm_name = "terraform-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "60"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitor EC2 instance cpu utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy.arn}"]
}

resource "aws_autoscaling_policy" "autopolicy-down" {
  name = "terraform-autoplicy-down"
  scaling_adjustment = -1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm-down" {
  alarm_name = "terraform-alarm-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "10"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitor EC2 instance cpu utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy-down.arn}"]
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]
  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }
  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

resource "aws_lb_cookie_stickiness_policy" "cookie_stickness" {
  name = "cookiestickness"
  load_balancer = "${aws_elb.elb1.id}"
  lb_port = 80
  cookie_expiration_period = 600
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}

Output:

  • Launch configuration (Auto_ScalingGroup_lauchconfiguration, Auto_ScalingGroup_lauchconfiguration_UserData)
  • CloudWatch alarms (CloudWatchAlarm, CloudWatchAlarm_ScaleUpDown)
  • Auto-scaling policy (Auto_ScalingGroup_Policy_2)
  • Scale out (CloudWatchAlarm_ScaleUpDown_4, Auto_ScalingGroup_ActivityHistory_ScaleUpDown_1)
  • Scale in (CloudWatchAlarm_ScaleUpDown_5, Auto_ScalingGroup_ActivityHistory_ScaleUpDown_2)
  • Auto-scaling group (Auto_ScalingGroup_1)
  • ELB (Auto_ScalingGroup_ELB)
  • EC2 instances (Auto_ScalingGroup_ELB_instances)

AWS S3 Bucket for ELB Access Log with Terraform

To store your AWS ELB access logs in AWS S3, we use the Terraform template below to:

  1. Create a new S3 bucket called "elb-log.davidwzhang.com";
  2. Define a bucket policy which grants Elastic Load Balancing access to the newly created S3 bucket "elb-log.davidwzhang.com". Each AWS region has its own account ID for Elastic Load Balancing; these account IDs can be found at http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#d0e10520. As my template targets the ap-southeast-2 region, the account ID used is 783225319266 (this could also be parameterized as a variable such as aws_elb_account_id; see the sketch after the template).

Terraform Template:

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_s3_bucket" "elb" {
  bucket = "elb-log.davidwzhang.com"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::783225319266:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::elb-log.davidwzhang.com/*"
    }
  ]
}
EOF
}

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.elb.arn}"
}
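As mentioned above, the ELB account ID can be parameterized instead of hard-coded. A sketch, using the aws_elb_account_id variable name from the notes (the default shown is the ap-southeast-2 account ID):

variable "aws_elb_account_id" {
  default = "783225319266"
}

You would then reference "arn:aws:iam::${var.aws_elb_account_id}:root" as the principal inside the bucket policy heredoc.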

To enable access logging for the ELB, we need to update our ELB resource as below:

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]
  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }
  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  instances = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

Please note I changed the access_logs interval to 5 minutes in the ELB resource definition so that we can verify the ELB access log output quickly. In a production environment, you would likely set this interval longer, e.g. 120 minutes.

Output:

  • ELB access_log configuration in the AWS Console (elb_accesslog)
  • S3 bucket for the ELB access log (elb_accesslog_s3)
  • S3 bucket prefix (elb_accesslog_s3_2)
  • AWS region (elb_accesslog_s3_3)
  • ELB access-log file in the AWS Console (elb_accesslog_s3_6)
  • ELB access-log content (elb_accesslog_s3_7)

AWS ELB with Terraform

Today, I will show you how to build an AWS ELB with Terraform.

My Terraform template includes:

  1. Create two EC2 instances as the back-end member servers. We will run a basic web service (HTTP on TCP 80) on these two EC2 instances;
  2. Create an AWS Elastic Load Balancer that listens on TCP 80 and performs health checks to verify the status of the back-end web servers;
  3. Create a security group for the ELB, which allows incoming HTTP sessions to the AWS ELB and health checks to the back-end web servers;
  4. Create a security group for the back-end web servers, which allows management SSH connections (TCP 22) and ELB health checks.

My Terraform template is:

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_instance" "web1" {
  ami = "ami-4ba3a328"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  user_data = <<-EOF
#!/bin/bash
echo "hello, I am web1" >index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-web1"
  }
}

resource "aws_instance" "web2" {
  ami = "ami-4ba3a328"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  key_name = "${aws_key_pair.myawskeypair.key_name}"
  user_data = <<-EOF
#!/bin/bash
echo "hello, I am Web2" >index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-web2"
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

data "aws_availability_zones" "allzones" {}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }
  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  instances = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}

The output is as below:

  • ELB (elb_1): note the DNS name of the ELB; we will use this DNS name to reach it
  • EC2 instances (elb_instance_2, elb_instance)
  • Health check (elb_healthcheck)
  • Listener (elb_listener)
  • Security group (elb_sg_1); inbound rules (elb_sg_2); outbound rule (elb_sg_3)
  • ELB tag (elb_tag)

Load Balancing Function:

To verify the load balancing function, I add a CNAME for this ELB DNS name:

elb_cname

Now I use w3.davidwzhang.com to verify the load balancing works as expected.
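For example, repeated requests from a shell should alternate between the two back ends once the CNAME has propagated (the responses come from the user_data pages defined in the template):

curl http://w3.davidwzhang.com/   # returns: hello, I am web1
curl http://w3.davidwzhang.com/   # returns: hello, I am Web2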

  • Access to Web Server 1 (LB_output_1)
  • Access to Web Server 2 (LB_output_2)

Create AWS VPC with Terraform

Today, I will show you how to use Terraform to create a customized VPC in AWS.

Using this Terraform template, I will create a VPC:

  • Name: terraform-vpc
  • IP block for this VPC: 10.0.0.0/16
  • Public subnet: 10.0.1.0/24 (note: VM instances in this subnet will have Internet access)
  • Private Subnet: 10.0.100.0/24

To verify that the newly created VPC works as expected, my template will create a test EC2 instance in the public subnet (10.0.1.0/24) and upload a public key so that I can SSH to this new EC2 instance with the corresponding private key. To verify the new EC2 instance's Internet connectivity, I include the following in the template as well:

  1. Enable a simple web service on the EC2 instance;
  2. Create a security group which allows HTTP (TCP 80) and associate it with this EC2 instance.

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_vpc" "terraform-vpc" {
  cidr_block = "10.0.0.0/16"
  instance_tenancy = "default"
  enable_dns_support = "true"
  enable_dns_hostnames = "true"
  enable_classiclink = "false"
  tags {
    Name = "terraform"
  }
}

resource "aws_subnet" "public-1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  cidr_block = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone = "ap-southeast-2b"
  tags {
    Name = "public"
  }
}

resource "aws_subnet" "private-1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  cidr_block = "10.0.100.0/24"
  map_public_ip_on_launch = "false"
  availability_zone = "ap-southeast-2b"
  tags {
    Name = "private"
  }
}

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  tags {
    Name = "internet-gateway"
  }
}

resource "aws_route_table" "rt1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }
  tags {
    Name = "Default"
  }
}

#resource "aws_main_route_table_association" "association-subnet" {
#  vpc_id = "${aws_vpc.terraform-vpc.id}"
#  route_table_id = "${aws_route_table.rt1.id}"
#}

resource "aws_route_table_association" "association-subnet" {
  subnet_id = "${aws_subnet.public-1.id}"
  route_table_id = "${aws_route_table.rt1.id}"
}

resource "aws_instance" "terraform_linux" {
  ami = "ami-4ba3a328"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  subnet_id = "${aws_subnet.public-1.id}"
  key_name = "${aws_key_pair.myawskeypair.key_name}"
  user_data = <<-EOF
#!/bin/bash
echo "hello, world" >index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-example"
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

output "vpc-id" {
  value = "${aws_vpc.terraform-vpc.id}"
}

output "vpc-publicsubnet" {
  value = "${aws_subnet.public-1.cidr_block}"
}

output "vpc-publicsubnet-id" {
  value = "${aws_subnet.public-1.id}"
}

output "vpc-privatesubnet" {
  value = "${aws_subnet.private-1.cidr_block}"
}

output "vpc-privatesubnet-id" {
  value = "${aws_subnet.private-1.id}"
}

output "public_ip" {
  value = "${aws_instance.terraform_linux.public_ip}"
}

Below are the outputs of the Terraform template.

Outputs:

public_ip = 13.54.172.172
vpc-id = vpc-c3a418a7
vpc-privatesubnet = 10.0.100.0/24
vpc-privatesubnet-id = subnet-89dbb9ff
vpc-publicsubnet = 10.0.1.0/24
vpc-publicsubnet-id = subnet-b7d8bac1

We can verify the setting of newly created VPC in AWS Console:

  • VPC (VPC_1)
  • Subnets (VPC_subnet)
  • Routing table (VPC_routetable)
  • EC2 instance (VPC_EC2)

Browse the web page on the test EC2 instance to verify our security group configuration:

Webpage

SSH via private key

[dzhang@localhost vpc]$ ssh 13.54.172.172 -l ubuntu -i awskey
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-110-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Sat Mar 25 09:56:52 UTC 2017

System load: 0.16 Memory usage: 5% Processes: 82
Usage of /: 10.1% of 7.74GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:
https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@ip-10-0-1-15:~$ exit
logout

Terraform Remote State File on AWS S3

Every time you apply your Terraform template, Terraform records the current infrastructure status in a Terraform state file. By default, the state files are stored locally. Terraform keeps two state files for each Terraform template: one for the current state (terraform.tfstate) and the other for the second-latest version of the state (terraform.tfstate.backup).

In an enterprise environment, the common practice for managing Terraform state files is to:

  1. Store the state files in a shared location;
  2. Store all versions of the Terraform state file, which enables you to roll back to any older version instead of only the second-latest one;
  3. Encrypt the state files.

Terraform offers built-in support for remote state storage. Currently, Terraform supports a number of remote backends including Amazon S3, Azure, HashiCorp Consul, and Atlas.

Amazon S3 meets almost all of our requirements:

  1. Amazon S3 supports encryption (AES-256);
  2. With versioning enabled, Amazon S3 stores every version of the state file;
  3. When Terraform talks to AWS S3, TLS (Transport Layer Security) is used.

So here I will show you how to use Amazon S3 as the Terraform remote state backend.

Step 1: create an S3 bucket:

resource "aws_s3_bucket" "my-terraform-state" {
  bucket = "my-terraform-state.davidwzhang.com"
  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.my-terraform-state.arn}"
}

AmazonS3

Step 2: configure your Terraform template to use the S3 bucket:

terraform remote config -backend=s3 -backend-config="bucket=my-terraform-state.davidwzhang.com" -backend-config="key=terraform/vpc.tfstate" -backend-config="region=ap-southeast-2" -backend-config="encrypt=true"
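As a side note, Terraform 0.9 and later replace "terraform remote config" with a backend block inside the template itself; the equivalent configuration would look like the sketch below (for newer versions only; the examples in this post use 0.8.x):

terraform {
  backend "s3" {
    bucket  = "my-terraform-state.davidwzhang.com"
    key     = "terraform/vpc.tfstate"
    region  = "ap-southeast-2"
    encrypt = true
  }
}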

AmazonS3-2

Now you can log in to your AWS console and check the Terraform state file in AWS S3.

AmazonS3-3

Please note Terraform will still store the current and the second latest state file locally as normal. These state files are stored in the newly created sub-folder .terraform under the Terraform template folder.

[dzhang@localhost vpc]$ ls -al
total 20
-rw-rw-r--. 1 dzhang dzhang 1547 Mar 19 17:15 ~
drwxrwxr-x. 3 dzhang dzhang 74 Mar 20 22:00 .
drwxrwxr-x. 10 dzhang dzhang 4096 Mar 20 21:41 ..
drwxr-xr-x. 2 dzhang dzhang 61 Mar 19 17:10 .terraform
-rw-r--r--. 1 dzhang dzhang 3064 Mar 20 22:00 vpc.tf

[dzhang@localhost .terraform]$ ls -al

total 20
drwxr-xr-x. 2 dzhang dzhang 61 Mar 19 17:10 .
drwxrwxr-x. 3 dzhang dzhang 74 Mar 20 22:00 ..
-rw-rw-r--. 1 dzhang dzhang 750 Mar 24 21:06 terraform.tfstate
-rw-rw-r--. 1 dzhang dzhang 14213 Mar 24 21:05 terraform.tfstate.backup

Automate OpenStack Security Group with Terraform

Heat is the main project in the OpenStack Orchestration program. We can use Heat to automate security group implementation. If you have the NSXv plugin integrated with your OpenStack environment, you can use a Heat template to automate your NSX DFW rule implementation as well. Here I will show you how to use Terraform to do the same magic: automate security group deployment.

Below is my Terraform template, which creates a security group and five rules within it.

provider "openstack" {
  user_name = "${var.openstack_user_name}"
  password = "${var.openstack_password}"
  tenant_name = "tenant1"
  auth_url = "http://keystone.ops.com.au:5000/v3"
  domain_name = "domain1"
}

resource "openstack_networking_secgroup_v2" "secgroup_2" {
  name = "secgroup_2"
  description = "Terraform security group"
  tenant_id = "2b8d09cb778346a4ae70c16ee65a5c69"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" {
  direction = "egress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 22
  port_range_max = 22
  remote_ip_prefix = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = ["openstack_networking_secgroup_v2.secgroup_2"]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_2" {
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 443
  port_range_max = 443
  remote_ip_prefix = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_1",
  ]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_3" {
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 443
  port_range_max = 443
  remote_ip_prefix = "10.41.129.11/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_2",
  ]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_4" {
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 8080
  port_range_max = 8080
  remote_ip_prefix = "10.41.129.11/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_3",
  ]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_5" {
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 22
  port_range_max = 22
  remote_ip_prefix = "10.41.129.11/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_4",
  ]
}

Please make sure that you add the resource dependency for each firewall rule via "depends_on".

If not, you will see errors like the below when you run "terraform apply", and only one rule will be added per "terraform apply" run.

2017/03/06 19:47:46 [TRACE] Preserving existing state lineage "607d13a8-c268-498a-bbb4-07f98f0dd6b4"
Error applying plan:

1 error(s) occurred:

2017/03/06 19:47:46 [DEBUG] plugin: waiting for all plugin processes to complete...
* openstack_networking_secgroup_rule_v2.secgroup2_rule_2: Internal Server Error

Terraform does not automatically rollback in the face of errors.

The above issue is a known Terraform issue (issue ID 7519); refer to https://github.com/hashicorp/terraform/issues/7519.

Unfortunately, the issue still exists in version 0.8.7. The current workaround is to specify explicit dependencies when creating firewall rules.

Automate OpenStack with Terraform

Terraform can be used with OpenStack for auto-provisioning.

Today, I will show a working Terraform example in OpenStack.

Firstly, define an OpenStack provider for Terraform.

Provider:

provider "openstack" {
  user_name = "${var.openstack_user_name}"
  password = "${var.openstack_password}"
  tenant_name = "project1"
  auth_url = "http://keystone.openstack.com.au:5000/v3"
  domain_name = "DOMAINNAME"
}

Terraform currently supports the following OpenStack resource types: Compute, Network, Load Balancer, Firewall, Block Storage, and Object Storage.

Here, we create a few basic resources including Compute and Network: a network (VXLAN here, but it can be VLAN or any other kind of network), a subnet, and a security group.

Network:

Create a network named "tf-net2":

resource "openstack_networking_network_v2" "tf-net2" {
  region = "region1"
  name = "tf-net2"
  admin_state_up = "true"
}

Create a subnet "tf_net_sub2" and associate it with network tf-net2:

resource "openstack_networking_subnet_v2" "tf_net_sub2" {
  name = "tf_net_sub2"
  region = "region1"
  network_id = "${openstack_networking_network_v2.tf-net2.id}"
  cidr = "172.16.50.0/24"
  ip_version = 4
  enable_dhcp = "false"
}

Security Group:

Create a security group "secgroup_1", then add two rules:

resource "openstack_networking_secgroup_v2" "secgroup_1" {
  name = "secgroup_1"
  description = "Terraform security group"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" {
  direction = "egress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 22
  port_range_max = 22
  remote_ip_prefix = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_2" {
  direction = "ingress"
  ethertype = "IPv4"
  protocol = "tcp"
  port_range_min = 22
  port_range_max = 22
  remote_ip_prefix = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}"
}

Compute:

Create one virtual instance using the network tf-net2 and the security group secgroup_1 just created:

resource "openstack_compute_instance_v2" "vm_terraform" {
  region = "region1"
  availability_zone = "az1"
  name = "nsx_terraform"
  image_id = "b5d00e5c-ab30-4fb4-9ed0-1d99c7ff864b"
  flavor_id = "10"
  security_groups = ["${openstack_networking_secgroup_v2.secgroup_1.id}"]

  metadata {
    this = "that"
  }

  network {
    name = "tf-net2"
  }

  stop_before_destroy = "true"
}

Result:

  • OpenStack network (openstack-network)
  • Security group (securitygroup)
  • VM (vm)