Create an XML File in vRealize Orchestrator for NSX Automation

The NSX API uses XML for API communication. To automate NSX in VMware vRealize Orchestrator (vRO), you always need to build the XML payload in JavaScript, because vRO workflows support JavaScript only. Below is an example of how to do it.

The goal here is to create a security group and then add a simple firewall rule that is applied to this newly created security group.

Note: this vRO workflow has two inputs, securityGroupName and description, and two attributes, nsxManagerRestHost and realtime (realtime is set to the sgID obtained in Step 1).

Step 1: create a security group

var xmlbody = new XML('<securitygroup />');
xmlbody.objectId = " ";
xmlbody.type.typeName = " ";
xmlbody.description = description;
xmlbody.name = securityGroupName;
xmlbody.revision = 0;
xmlbody.objectTypeName = " ";
System.log(xmlbody);
var request = nsxManagerRestHost.createRequest("POST", "/2.0/services/securitygroup/bulk/globalroot-0", xmlbody.toString());
request.contentType = "application/xml";
System.log("Creating a SecurityGroup " + securityGroupName);
System.log("POST Request URL: " + request.fullUrl);
var response = request.execute();
if (response.statusCode == 201) {
	System.debug("Successfully created Security Group " + securityGroupName);
}
else {
	throw "Failed to create Security Group " + securityGroupName;
}
sgID = response.getAllHeaders().get("Location").split('/').pop();
realtime = sgID;
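NSX Manager returns the identifier of the newly created object in the Location response header, which is why the last two lines split that header on "/" and keep the final element. As an illustration only (the exact path below is an assumed example):

// Assumed example of the Location header returned by the POST above:
//   /api/2.0/services/securitygroup/securitygroup-947
// split('/').pop() then yields "securitygroup-947", which is stored in the realtime attribute.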

Step 2: create a DFW section and add a firewall rule

//create XML object for DFW source;
var rulesources = new XML('<sources excluded="false" />');
rulesources.source.name = " ";
rulesources.source.value = "10.47.161.23";
rulesources.source.type = "Ipv4Address";
rulesources.source.isValid = 'true';
System.log("Source: "+rulesources);

//create XML object for DFW destination;
var ruledestinations = new XML('<destinations excluded="false" />');
ruledestinations.destination.name = " ";
ruledestinations.destination.value = "10.47.161.24";
ruledestinations.destination.type = "Ipv4Address";
ruledestinations.destination.isValid = 'true';
System.log("Destination: " + ruledestinations);

//create XML object for DFW service
var ruleservices = new XML('<services />');
ruleservices.service.destinationPort = "80";
ruleservices.service.protocol = "6";
ruleservices.service.subProtocol = "6";
ruleservices.service.isValid = 'true';
System.log("Service: "+ruleservices);

//create XML object for the whole rule
var xmlbodyrule = new XML('<rule disabled="false" logged="true" />');
xmlbodyrule.name = "vro created rule";
xmlbodyrule.action = "allow";
xmlbodyrule.notes = " ";
xmlbodyrule.appliedToList.appliedTo.name = securityGroupName;
xmlbodyrule.appliedToList.appliedTo.value = realtime;
xmlbodyrule.appliedToList.appliedTo.type = 'SecurityGroup';
xmlbodyrule.appliedToList.appliedTo.isValid = 'true';
xmlbodyrule.sectionId = " ";
xmlbodyrule.sources = rulesources;
xmlbodyrule.destinations = ruledestinations;
xmlbodyrule.services = ruleservices;

//create XML object for DFW section
var xmlbody = new XML(<section name={securityGroupName} />);
xmlbody.rule = xmlbodyrule;
System.log("XML file for new rules: "+xmlbody);

var request = nsxManagerRestHost.createRequest("POST", "/4.0/firewall/globalroot-0/config/layer3sections", xmlbody.toString());
request.contentType = "application/xml";
var response = request.execute();
if (response.statusCode == 201) {
	System.debug("Successfully created Security Group section " + securityGroupName);
}
else {
	throw "Failed to create Security Group section " + securityGroupName;
}

Below is the resulting XML payload for creating the security group:

<securitygroup>
  <objectId></objectId>
  <type>
    <typeName></typeName>
  </type>
  <description>nsx1001test</description>
  <name>nsx1001test</name>
  <revision>0</revision>
  <objectTypeName></objectTypeName>
</securitygroup>

And the XML payload for creating the NSX DFW section and adding the new firewall rule:

<section name="nsx1001test">
  <rule disabled="false" logged="true">
    <name>vro created rule</name>
    <action>allow</action>
    <notes></notes>
    <appliedToList>
      <appliedTo>
        <name>nsx1001test</name>
        <value>securitygroup-947</value>
        <type>SecurityGroup</type>
        <isValid>true</isValid>
      </appliedTo>
    </appliedToList>
    <sectionId></sectionId>
    <sources excluded="false">
      <source>
        <name></name>
        <value>10.47.161.23</value>
        <type>Ipv4Address</type>
        <isValid>true</isValid>
      </source>
    </sources>
    <destinations excluded="false">
      <destination>
        <name></name>
        <value>10.47.161.24</value>
        <type>Ipv4Address</type>
        <isValid>true</isValid>
      </destination>
    </destinations>
    <services>
      <service>
        <destinationPort>80</destinationPort>
        <protocol>6</protocol>
        <subProtocol>6</subProtocol>
        <isValid>true</isValid>
      </service>
    </services>
  </rule>
</section>

New Ansible F5 HTTPS Health Monitor Module

I finally got some time this weekend to test the newly released development version of the Ansible F5 HTTPS health monitor module (bigip_monitor_https). The test results look good: most common use cases are covered properly.

Below is the first playbook I used for testing:

# This version is to create a new https health monitor
---
- name: f5 config
  hosts:  lb.davidwzhang.com
  connection: local
  vars:
    ports:
      - 443
  tasks:
    - name: create https healthmonitor
      bigip_monitor_https:
        state:  "present"
        #state: "absent"
        name: "ansible-httpshealthmonitor"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        send: "Get /cgi-bin/env.sh HTTP/1.1\r\nHost:192.168.72.28\r\nConnection: Close\r\n"
        receive: "web"
        interval: "3"
        timeout: "10"
      delegate_to:  localhost

After running the playbook, I logged in to my F5 BIG-IP VE and saw that the HTTPS health monitor had been created successfully.
f5 https healthmonitor

I then tried to create another HTTPS health monitor, this time including basic authentication (admin/password) and a customized alias address and alias service port (8443).
Playbook:

# This version is to create a new HTTPS health monitor
---
- name: f5 config
  hosts:  lb.davidwzhang.com
  connection: local
  vars:
    ports:
      - 443
  tasks:
    - name: create https healthmonitor
      bigip_monitor_https:
        state:  "present"
        #state: "absent"
        name: "ansible-httpshealthmonitor02"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        ip: "192.168.72.128"
        port: "8443"
        send: "Get /cgi-bin/env.sh\r\n"
        receive: "200"
        interval: "3"
        timeout: "10"
        target_username: "admin"
        target_password: "password"
      delegate_to:  localhost

In F5, you can see the result below:
f5 https healthmonitor02

In addition, you may have noticed that I commented out a line in the above two playbooks:

#state: "absent"

You can set this to remove the health monitor instead of creating it.
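For example, a minimal sketch of a removal task (same connection parameters and monitor name as the first playbook above, only the state changed):

    - name: remove https healthmonitor
      bigip_monitor_https:
        state: "absent"
        name: "ansible-httpshealthmonitor"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
      delegate_to: localhost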

vRA7.3 and NSX Integration: Network Security Data Collection Failure

We are building vRA 7.3. We added vCenter and NSX Manager as endpoints in vRA and associated the NSX Manager with vCenter. All of the compute resource data collection works well, but NSX (network and security) data collection does not:

So in the vRA reservation we can only see the vSphere cluster and vDS port-groups/logical switches, but not transport zones, security groups or security tags.

When checking the logs, we see the following:

Workflow 'vSphereVCNSInventory' failed with the following exception:

One or more errors occurred.

Inner Exception: An error occurred while sending the request.

at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)

at DynamicOps.VCNSModel.Interface.NSXClient.GetDatacenters()

at DynamicOps.VCNSModel.Activities.CollectDatacenters.Execute(CodeActivityContext context)

at System.Activities.CodeActivity.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)

at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)

Inner Exception:

VCNS Workflow failure

I tried to delete the NSX endpoint and recreate it from vRA, but no luck. I raised the issue in the VMware community but could not get any really valuable feedback.

After a few hours of investigation, I finally found a fix:

run the "Create a NSX endpoint" workflow in vRO, as shown below

2017-07-26_184701

Then I restarted the network & security data collection in vRA. Everything works, and I can see all of the defined NSX transport zones, security groups and DLRs in the vRA network reservations.

I hope this fix helps others who hit the same issue.

Simple Python Script Creating a Dynamic Membership Security Group

In this blog, I develop a very simple Python script to create an NSX security group whose membership is based on a security tag. Please note this script only shows the basics and is not ready for a production environment.

Two Python functions are included in this script:

  1. create_tag is used to create an NSX security tag;
  2. create_sg is used to create a security group and define a criterion that adds all virtual machines tagged with the specified security tag into this newly created security group.

import requests
from base64 import b64encode
import getpass
username=raw_input('Enter Your NSXManager Username: ')
yourpass = getpass.getpass('Enter Your NSXManager Password: ')
sg_name=raw_input('Enter Security Group Name: ')
vm_tag=raw_input('Enter Tag Name: ')
userandpass=username+":"+yourpass
userpass = b64encode(userandpass).decode("ascii")
auth ="Basic " + userpass
payload_tag="<securityTag>\r\n<objectTypeName>SecurityTag</objectTypeName>\r\n<type>\r\n<typeName>SecurityTag</typeName>\r\n</type>\r\n<name>"+vm_tag+"</name>\r\n<isUniversal>false</isUniversal>\r\n<description>This tag is created by API</description>\r\n<extendedAttributes></extendedAttributes>\r\n</securityTag>"
payload_sg= "<securitygroup>\r\n <objectId></objectId>\r\n <objectTypeName>SecurityGroup</objectTypeName>\r\n <type>\r\n <typeName>SecurityGroup</typeName>\r\n </type>\r\n <description></description>\r\n <name>"+sg_name+"</name>\r\n <revision>0</revision>\r\n<dynamicMemberDefinition>\r\n <dynamicSet>\r\n <operator>OR</operator>\r\n <dynamicCriteria>\r\n <operator>OR</operator>\r\n <key>VM.SECURITY_TAG</key>\r\n <criteria>contains</criteria>\r\n <value>"+vm_tag+"</value>\r\n </dynamicCriteria>\r\n </dynamicSet>\r\n</dynamicMemberDefinition>\r\n</securitygroup>"

def create_tag():
        try:
                response = requests.post(
                url="https://NSX-Manager-IP/api/2.0/services/securitytags/tag",
                verify=False,
                headers={
                        "Authorization": auth,
                        "Content-Type": "application/xml",
                    },
                data=payload_tag
                    )
                print('Response HTTP Status Code: {status_code}'.format(status_code=response.status_code))
                #print('Response HTTP Response Body: {content}'.format(content=response.content))
                if response.status_code == 403:
                        print "***********************************************************************"
                        print "WARNING: your username or password is wrong, please retry again!"
                        print "***********************************************************************"
                if  response.status_code == 201:
                        print "***********************************************************************"
                        print('Response HTTP Response Body: {content}'.format(content=response.content))
                api_response=response.text
                print api_response
        except requests.exceptions.RequestException:
                print('HTTP Request failed')

def create_sg():
        try:
                response = requests.post(
                url="https://NSX-Manager-IP/api/2.0/services/securitygroup/bulk/globalroot-0",
                verify=False,
                headers={
                        "Authorization": auth,
                        "Content-Type": "application/xml",
                    },
                data=payload_sg
                    )
                print('Response HTTP Status Code: {status_code}'.format(status_code=response.status_code))
                #print('Response HTTP Response Body: {content}'.format(content=response.content))
                if response.status_code == 403:
                        print "***********************************************************************"
                        print "WARNING: your username or password is wrong, please retry again!"
                        print "***********************************************************************"
                if  response.status_code == 201:
                        print "***********************************************************************"
                        print('Response HTTP Response Body: {content}'.format(content=response.content))
                api_response=response.text
                print api_response
        except requests.exceptions.RequestException:
                print('HTTP Request failed')
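Note that the listing above only defines the two functions; the run output below shows both executing, so the script presumably ends by calling them, e.g.:

create_tag()
create_sg()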

Running this script in our O-Dev environment:

[root]$ python create_sg_dynamic_member_20170429.py

Enter Your NSXManager UserName: admin

Enter Your NSXManager Password:

Enter Security Group Name: sg_app1_web

Enter Tag Name: tag_app1_web

Response HTTP Status Code: 201

***********************************************************************

Response HTTP Response Body: securitytag-14

securitytag-14

Response HTTP Status Code: 201

***********************************************************************

Response HTTP Response Body: securitygroup-485

securitygroup-485

In NSX Manager, we can see the security group sg_app1_web has been created as below:

2017-04-30_140657

And its dynamic membership criterion is:

2017-04-30_140729

Automate F5 GSLB with Ansible

F5 BIG-IP Global Traffic Manager (GTM) provides tiered global server load balancing (GSLB). BIG-IP GTM distributes DNS name resolution requests, first to the best available pool in a wide IP, and then to the best available virtual server within that pool. GTM selects the best available resource using either a static or a dynamic load balancing method. Using a static load balancing method, BIG-IP GTM selects a resource based on a pre-defined pattern. Using a dynamic load balancing method, BIG-IP GTM selects a resource based on current performance metrics collected by the big3d agents running in each data center.

So the F5 GSLB configuration logic for a DNS record is as below:

  • Define a Data Center, e.g. “SL-SYD-Site1”;
  • Define a server which can be F5 LTM or any other kind of local load balancer or host;

GTM Server Type

  • Create virtual servers if you don't use F5 BIG-IP LTM or you don't use the "Virtual Server Discovery" feature of your F5 BIG-IP LTM;
  • Create GTM pool/pools using virtual server as member of this newly created pool;
  • Create a wide IP which points to the GTM pool(s) defined in the previous step. Note: the F5 module in Ansible 2.3 still doesn't support associating a GTM pool with a wide IP.

Unlike with F5 BIG-IP LTM, the Ansible F5 modules don't support F5 BIG-IP GTM very well. The known limitations of automating F5 GSLB configuration with Ansible 2.3 include:

  1. Doesn't support setting up a server (luckily, if you are using F5 BIG-IP LTM, this is a one-off task: you only need to perform it once for each LTM);
  2. Doesn't support adding pool members when you create a GTM pool;
  3. Doesn't support adding a pool when you create a wide IP;
  4. Doesn't support health monitors when you create a GTM virtual server or GTM pool.

To accommodate the above limitations, I pre-defined an F5 LTM server called "myLTM".

2017-04-25_164141

After running my Ansible playbook, I manually added the pool member to the newly created GTM pool and added the GTM pool to the wide IP as well (see the tmsh sketch after the screenshots below).

GTMPool_Adding_Member

GTM_WideIP_Pool
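If you prefer the CLI for these two manual steps, they would look roughly like the tmsh commands below. This is only a sketch under assumptions: the exact syntax (in particular the "a" record-type keyword) varies between TMOS versions, and the server, virtual server, pool and wide IP names are the ones used in the playbook.

tmsh modify gtm pool a mypool members add { myLTM:myVIP }
tmsh modify gtm wideip a w3.davidwzhang.com pools add { mypool }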

My playbook YAML file:

- name: f5 config
  hosts: lb.davidwzhang.com
  connection: local
  tasks:
    - name: create a GTM DC SL-SYD-Site1
      bigip_gtm_datacenter:
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        name: "SL-SYD-Site1"
        validate_certs: "no"
      delegate_to: localhost

    - name: create a virtual server myVIP
      bigip_gtm_virtual_server:
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        virtual_server_name: "myVIP"
        virtual_server_server: "myLTM"
        validate_certs: "no"
        port: "80"
        address: "192.168.72.199"
        state: "present"
      delegate_to: localhost

    - name: create GTM pool mypool
      bigip_gtm_pool:
        server: "10.1.1.122"
        user: "admin"
        password: "password"
        name: "mypool"
        state: "present"
        type: "a"
        validate_certs: "no"
      delegate_to: localhost

    - name: create a wideip w3.davidwzhang.com
      bigip_gtm_wide_ip:
        server: "10.1.1.122"
        user: "admin"
        password: "password"
        lb_method: "round_robin"
        name: "w3.davidwzhang.com"
        type: "a"
        state: "present"
        validate_certs: "no"
      delegate_to: localhost

Ansible Playbook Output:

GTM DC

GTM_DC

GTM Virtual Server

GTM_Server_VS

GTM Pool

GTM_Pool

GTM WideIP

GTM_WideIP

nslookup for the wide IP w3.davidwzhang.com

GTM_Nslookup

Automate F5 LTM with Ansible

Ansible includes F5 as an extras network module, which can help provide LBaaS using an Infrastructure-as-Code approach.

Like other Ansible modules, the Ansible F5 modules are installed in the /usr/lib/python2.7/site-packages/ansible/modules/extras/network directory.

[dzhang@localhost network]$ pwd
/usr/lib/python2.7/site-packages/ansible/modules/extras/network
[dzhang@localhost network]$ ls -al
total 512
drwxr-xr-x. 9 root root 4096 Jan 30 03:17 .
drwxr-xr-x. 20 root root 4096 Jan 30 03:17 ..
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 a10
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 asa
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 citrix
-rw-r--r--. 1 root root 24673 Jan 16 08:48 cloudflare_dns.py
-rw-r--r--. 2 root root 18915 Jan 16 14:36 cloudflare_dns.pyc
-rw-r--r--. 2 root root 18915 Jan 16 14:36 cloudflare_dns.pyo
-rw-r--r--. 1 root root 11833 Jan 16 08:48 dnsimple.py
-rw-r--r--. 2 root root 8642 Jan 16 14:36 dnsimple.pyc
-rw-r--r--. 2 root root 8642 Jan 16 14:36 dnsimple.pyo
-rw-r--r--. 1 root root 13723 Jan 16 08:48 dnsmadeeasy.py
-rw-r--r--. 2 root root 12410 Jan 16 14:36 dnsmadeeasy.pyc
-rw-r--r--. 2 root root 12410 Jan 16 14:36 dnsmadeeasy.pyo
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 exoscale
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 f5
-rw-r--r--. 1 root root 13404 Jan 16 08:48 haproxy.py
-rw-r--r--. 2 root root 13504 Jan 16 14:36 haproxy.pyc
-rw-r--r--. 2 root root 13504 Jan 16 14:36 haproxy.pyo

The Ansible playbook below is to:

  • Create two web servers as F5 LTM nodes;
  • Create an F5 LTM pool;
  • Add the two web servers as members of the F5 LTM pool created in step 2;
  • Create an F5 LTM VIP and associate the F5 LTM pool created in step 2 with this newly created VIP.

[dzhang@localhost f5]$ cat f5-v0.4.yml

- name: f5 config
  hosts: lb.davidwzhang.com
  connection: local

  tasks:
    - name: create a pool
      bigip_pool:
        lb_method: "ratio_member"
        name: "web"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        monitors:
          - /Common/http
      delegate_to: localhost

    - name: create a node1
      bigip_node:
        host: "192.168.72.168"
        name: "node-1"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        monitors:
          - /Common/icmp
        validate_certs: "no"
      delegate_to: localhost

    - name: create a node2
      bigip_node:
        host: "192.168.72.128"
        name: "node-2"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        monitors:
          - /Common/icmp
        validate_certs: "no"
      delegate_to: localhost

    - name: add a node to pool
      bigip_pool_member:
        description: "webservers"
        host: "{{item.host}}"
        name: "{{item.name}}"
        password: "password"
        pool: "web"
        port: "80"
        connection_limit: "0"
        monitor_state: "enabled"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
      delegate_to: localhost
      with_items:
        - host: "192.168.72.168"
          name: "node-1"
        - host: "192.168.72.128"
          name: "node-2"

    - name: create a VIP
      bigip_virtual_server:
        description: "ansible-vip"
        destination: "192.168.72.199"
        name: "ansible-vip"
        pool: "web"
        port: "80"
        snat: "Automap"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        all_profiles:
          - "http"
        validate_certs: "no"
      delegate_to: localhost

Run the playbook:

[dzhang@localhost f5]$ ansible-playbook f5-v0.4.yml
/usr/lib64/python2.7/site-packages/cffi/model.py:525: UserWarning: 'point_conversion_form_t' has no values explicitly defined; guessing that it is equivalent to 'unsigned int'
% self._get_c_name())

PLAY [f5 config] ***************************************************************

TASK [setup] *******************************************************************
ok: [lb.davidwzhang.com]

TASK [create a pool] ***********************************************************
ok: [lb.davidwzhang.com -> localhost]

TASK [create a node1] **********************************************************
ok: [lb.davidwzhang.com -> localhost]

TASK [create a node2] **********************************************************
ok: [lb.davidwzhang.com -> localhost]

TASK [add a node to pool] ******************************************************
ok: [lb.davidwzhang.com -> localhost] => (item={u'host': u'192.168.72.168', u'name': u'node-1'})
ok: [lb.davidwzhang.com -> localhost] => (item={u'host': u'192.168.72.128', u'name': u'node-2'})

TASK [create a VIP] ************************************************************
ok: [lb.davidwzhang.com -> localhost]

PLAY RECAP *********************************************************************
lb.davidwzhang.com : ok=6 changed=0 unreachable=0 failed=0
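
You can also verify the result from the BIG-IP CLI. A quick check with tmsh (using the object names created by the playbook) might look like this:

tmsh list ltm node node-1
tmsh list ltm node node-2
tmsh list ltm pool web
tmsh list ltm virtual ansible-vip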

In the F5 LTM GUI, you can verify the configuration as well:

LTM Node:

NodeList

LTM Pool

Pool1

Pool2

LTM vIP

vIP

Use Terraform to Set Up AWS Auto-Scaling Group with ELB

An AWS Auto Scaling group helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. By using auto-scaling policies, the Auto Scaling group can launch or terminate instances as demand on your application increases or decreases.

Today, I will show you how to use a Terraform template to set up an AWS Auto Scaling group with an ELB. My Terraform version is terraform_0.8.8.

My Terraform template includes:

  1. Create an aws_launch_configuration (webcluster), which defines how each EC2 instance in the auto-scaling group will be built;
  2. Create an AWS auto-scaling group (scalegroup);
  3. Create the first AWS auto-scaling policy (autopolicy) for auto-scaling group scale-out;
  4. Create the second AWS auto-scaling policy (autopolicy-down) for auto-scaling group scale-in;
  5. Create the first AWS CloudWatch alarm (cpualarm) to trigger the auto-scaling group to scale out;
  6. Create the second AWS CloudWatch alarm (cpualarm-down) to trigger the auto-scaling group to scale in;
  7. Create a security group (websg) to allow HTTP and management SSH connectivity;
  8. Create an Elastic Load Balancer with cookie session persistence and place it in front of the auto-scaling group (scalegroup). The ELB health checks all EC2 instances in the auto-scaling group; if any EC2 instance fails the ELB health check, it won't receive any incoming traffic. If the existing EC2 instances are overloaded (in our case, CPU utilisation over 60%), the auto-scaling group will create more EC2 instances to handle the spike. Conversely, the auto-scaling group will scale in when the EC2 instances are idle (CPU utilisation less than 10%);
  9. Create an SSH key pair and use it for the AWS auto-scaling group (scalegroup);
  10. Create an output for the ELB DNS name.

Template

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_launch_configuration" "webcluster" {
  image_id = "ami-4ba3a328"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.websg.id}"]
  key_name = "${aws_key_pair.myawskeypair.key_name}"
  user_data = <<-EOF
              #!/bin/bash
              echo "hello, I am WebServer" >index.html
              nohup busybox httpd -f -p 80 &
              EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

data "aws_availability_zones" "allzones" {}

resource "aws_autoscaling_group" "scalegroup" {
  launch_configuration = "${aws_launch_configuration.webcluster.name}"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  min_size = 1
  max_size = 4
  enabled_metrics = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupTotalInstances"]
  metrics_granularity = "1Minute"
  load_balancers = ["${aws_elb.elb1.id}"]
  health_check_type = "ELB"

  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_policy" "autopolicy" {
  name = "terraform-autoplicy"
  scaling_adjustment = 1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm" {
  alarm_name = "terraform-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "60"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitor EC2 instance cpu utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy.arn}"]
}

#
resource "aws_autoscaling_policy" "autopolicy-down" {
  name = "terraform-autoplicy-down"
  scaling_adjustment = -1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm-down" {
  alarm_name = "terraform-alarm-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "10"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitor EC2 instance cpu utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy-down.arn}"]
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

resource "aws_lb_cookie_stickiness_policy" "cookie_stickness" {
  name = "cookiestickness"
  load_balancer = "${aws_elb.elb1.id}"
  lb_port = 80
  cookie_expiration_period = 600
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}
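
With the template saved to a working directory (for example as main.tf, a hypothetical file name), a typical Terraform 0.8 workflow to preview and apply it would be:

terraform plan     # preview the resources Terraform will create
terraform apply    # create the launch configuration, ASG, policies, alarms, security groups and ELB
terraform output elb-dns    # print the ELB DNS name exposed by the output block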

Output:

LauchConfiguration

Auto_ScalingGroup_lauchconfiguration

Auto_ScalingGroup_lauchconfiguration_UserData

CloudWatchAlarm

CloudWatchAlarm_ScaleUpDown

Auto-scaling Policy

Auto_ScalingGroup_Policy_2

Scale Out

CloudWatchAlarm_ScaleUpDown_4

Auto_ScalingGroup_ActivityHistory_ScaleUpDown_1

Scale In

CloudWatchAlarm_ScaleUpDown_5

Auto_ScalingGroup_ActivityHistory_ScaleUpDown_2

Auto-scaling group

Auto_ScalingGroup_1

ELB

Auto_ScalingGroup_ELB

EC2 Instance

Auto_ScalingGroup_ELB_instances