Simple Python Script Creating a Dynamic Membership Security Group

In this blog, I share a very simple Python script that creates an NSX security group whose membership is based on a security tag. Please note this script only demonstrates the basics and is not ready for a production environment.

Two Python functions are included in this script:

  1. create_tag creates an NSX security tag;
  2. create_sg creates a security group and defines a dynamic membership criterion that adds all virtual machines tagged with the specified security tag to the newly created security group.
import requests
from base64 import b64encode
import getpass

# Collect NSX Manager credentials and the names of the security group and tag
username=raw_input('Enter Your NSXManager Username: ')
yourpass = getpass.getpass('Enter Your NSXManager Password: ')
sg_name=raw_input('Enter Security Group Name: ')
vm_tag=raw_input('Enter Tag Name: ')

# Build the HTTP basic authentication header
userandpass=username+":"+yourpass
userpass = b64encode(userandpass).decode("ascii")
auth ="Basic " + userpass

# XML payload for the security tag
payload_tag="<securityTag>\r\n<objectTypeName>SecurityTag</objectTypeName>\r\n<type>\r\n<typeName>SecurityTag</typeName>\r\n</type>\r\n<name>"+vm_tag+"</name>\r\n<isUniversal>false</isUniversal>\r\n<description>This tag is created by API</description>\r\n<extendedAttributes></extendedAttributes>\r\n</securityTag>"
# XML payload for the security group with a dynamic membership criterion based on the security tag
payload_sg= "<securitygroup>\r\n <objectId></objectId>\r\n <objectTypeName>SecurityGroup</objectTypeName>\r\n <type>\r\n <typeName>SecurityGroup</typeName>\r\n </type>\r\n <description></description>\r\n <name>"+sg_name+"</name>\r\n <revision>0</revision>\r\n<dynamicMemberDefinition>\r\n <dynamicSet>\r\n <operator>OR</operator>\r\n <dynamicCriteria>\r\n <operator>OR</operator>\r\n <key>VM.SECURITY_TAG</key>\r\n <criteria>contains</criteria>\r\n <value>"+vm_tag+"</value>\r\n </dynamicCriteria>\r\n </dynamicSet>\r\n</dynamicMemberDefinition>\r\n</securitygroup>"

def create_tag():
        try:
                response = requests.post(
                url="https://NSX-Manager-IP/api/2.0/services/securitytags/tag",
                verify=False,
                headers={
                        "Authorization": auth,
                        "Content-Type": "application/xml",
                    },
                data=payload_tag
                    )
                print('Response HTTP Status Code: {status_code}'.format(status_code=response.status_code))
                #print('Response HTTP Response Body: {content}'.format(content=response.content))
                if response.status_code == 403:
                        print "***********************************************************************"
                        print "WARNING: your username or password is incorrect, please try again!"
                        print "***********************************************************************"
                if  response.status_code == 201:
                        print "***********************************************************************"
                        print('Response HTTP Response Body: {content}'.format(content=response.content))
                api_response=response.text
                print api_response
        except requests.exceptions.RequestException:
                print('HTTP Request failed')

def create_sg():
        try:
                response = requests.post(
                url="https://NSX-Manager-IP/api/2.0/services/securitygroup/bulk/globalroot-0",
                verify=False,
                headers={
                        "Authorization": auth,
                        "Content-Type": "application/xml",
                    },
                data=payload_sg
                    )
                print('Response HTTP Status Code: {status_code}'.format(status_code=response.status_code))
                #print('Response HTTP Response Body: {content}'.format(content=response.content))
                if response.status_code == 403:
                        print "***********************************************************************"
                        print "WARNING: your username or password is incorrect, please try again!"
                        print "***********************************************************************"
                if  response.status_code == 201:
                        print "***********************************************************************"
                        print('Response HTTP Response Body: {content}'.format(content=response.content))
                api_response=response.text
                print api_response
        except requests.exceptions.RequestException:
                print('HTTP Request failed')
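
The script as posted only defines the two functions; to reproduce the run below, call them at the end of the file. This is a minimal sketch: the tag is created first because the security group's criterion references it.

if __name__ == "__main__":
        create_tag()   # create the NSX security tag first
        create_sg()    # then create the security group whose membership matches the tag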

Run this script in our O-Dev environment:

[root]$ python create_sg_dynamic_member_20170429.py

Enter Your NSXManager Username: admin

Enter Your NSXManager Password:

Enter Security Group Name: sg_app1_web

Enter Tag Name: tag_app1_web

Response HTTP Status Code: 201

***********************************************************************

Response HTTP Response Body: securitytag-14

securitytag-14

Response HTTP Status Code: 201

***********************************************************************

Response HTTP Response Body: securitygroup-485

securitygroup-485

In NSX Manager, we can see that the security group sg_app1_web has been created, as shown below:

2017-04-30_140657

And its dynamic membership criterion is:

2017-04-30_140729

Automate F5 GSLB with Ansible

F5 BIG-IP Global Traffic Manager (GTM) provides tiered global server load balancing (GSLB). BIG-IP GTM distributes DNS name resolution requests, first to the best available pool in a wide IP, and then to the best available virtual server within that pool. GTM selects the best available resource using either a static or a dynamic load balancing method. Using a static load balancing method, BIG-IP GTM selects a resource based on a pre-defined pattern. Using a dynamic load balancing method, BIG-IP GTM selects a resource based on current performance metrics collected by the big3d agents running in each data center.

So the F5 GSLB configuration logic for a DNS record is as below:

  • Define a Data Center, e.g. “SL-SYD-Site1”;
  • Define a server which can be F5 LTM or any other kind of local load balancer or host;

GTM Server Type

  • Create virtual servers if you don't use F5 BIG-IP LTM, or if you don't use the "Virtual Server Discovery" feature for your F5 BIG-IP LTM;
  • Create GTM pool(s), using the virtual servers as members of the newly created pool(s);
  • Create a wide IP which points to the GTM pool(s) defined in the previous step. Note: the F5 module in Ansible 2.3 still doesn't support associating a GTM pool with a wide IP.

Unlike F5 BIG-IP LTM, the Ansible F5 modules don't support F5 BIG-IP GTM very well. The known limitations of automating F5 GSLB configuration with Ansible 2.3 include:

  1. No support for setting up a server (luckily, if you are using F5 BIG-IP LTM, this is a one-off task: you only need to perform it once for each LTM);
  2. No support for adding pool members when you create a GTM pool;
  3. No support for adding a pool when you create a wide IP;
  4. No support for health monitors when you create a GTM virtual server or GTM pool.

To accommodate the above limitations, I pre-defined an F5 LTM server called "myLTM".

2017-04-25_164141

After running my Ansible playbook, I manually added the pool member to the newly created GTM pool and added the GTM pool to the wide IP as well (a tmsh equivalent is sketched below the screenshots).

GTMPool_Adding_Member

GTM_WideIP_Pool
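
If you prefer the CLI over the GUI, the same two manual steps can be done with tmsh. This is only a sketch using the names from the playbook below; the exact syntax may vary between TMOS versions.

# add the LTM virtual server "myVIP" on server "myLTM" as a member of the GTM pool "mypool"
tmsh modify gtm pool a mypool members add { myLTM:myVIP }
# attach the GTM pool "mypool" to the wide IP "w3.davidwzhang.com"
tmsh modify gtm wideip a w3.davidwzhang.com pools add { mypool }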

My playbook YAML file:

- name: f5 config
  hosts: lb.davidwzhang.com
  connection: local
  tasks:
    - name: create a GTM DC SL-SYD-Site1
      bigip_gtm_datacenter:
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        name: "SL-SYD-Site1"
        validate_certs: "no"
      delegate_to: localhost

    - name: create a virtual server myVIP
      bigip_gtm_virtual_server:
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        virtual_server_name: "myVIP"
        virtual_server_server: "myLTM"
        validate_certs: "no"
        port: "80"
        address: "192.168.72.199"
        state: "present"
      delegate_to: localhost

    - name: create GTM pool mypool
      bigip_gtm_pool:
        server: "10.1.1.122"
        user: "admin"
        password: "password"
        name: "mypool"
        state: "present"
        type: "a"
        validate_certs: "no"
      delegate_to: localhost

    - name: create a wideip w3.davidwzhang.com
      bigip_gtm_wide_ip:
        server: "10.1.1.122"
        user: "admin"
        password: "password"
        lb_method: "round_robin"
        name: "w3.davidwzhang.com"
        type: "a"
        state: "present"
        validate_certs: "no"
      delegate_to: localhost
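
To run the playbook (the file name f5_gslb.yml here is just an example; lb.davidwzhang.com needs to be defined in your Ansible inventory):

[dzhang@localhost f5]$ ansible-playbook f5_gslb.yml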

Ansible Playbook Output:

GTM DC

GTM_DC

GTM Virtual Server

GTM_Server_VS

GTM Pool

GTM_Pool

GTM WideIP

GTM_WideIP

NSlookup for wideip: w3.davidwzhang.com

GTM_Nslookup

Automate F5 LTM with Ansible

Ansible includes F5 as an extra network module, which can help provide LBaaS using an Infrastructure-as-Code approach.

Like other Ansible modules, the Ansible F5 modules are installed in the /usr/lib/python2.7/site-packages/ansible/modules/extras/network directory.

[dzhang@localhost network]$ pwd
/usr/lib/python2.7/site-packages/ansible/modules/extras/network
[dzhang@localhost network]$ ls -al
total 512
drwxr-xr-x. 9 root root 4096 Jan 30 03:17 .
drwxr-xr-x. 20 root root 4096 Jan 30 03:17 ..
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 a10
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 asa
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 citrix
-rw-r--r--. 1 root root 24673 Jan 16 08:48 cloudflare_dns.py
-rw-r--r--. 2 root root 18915 Jan 16 14:36 cloudflare_dns.pyc
-rw-r--r--. 2 root root 18915 Jan 16 14:36 cloudflare_dns.pyo
-rw-r--r--. 1 root root 11833 Jan 16 08:48 dnsimple.py
-rw-r--r--. 2 root root 8642 Jan 16 14:36 dnsimple.pyc
-rw-r--r--. 2 root root 8642 Jan 16 14:36 dnsimple.pyo
-rw-r--r--. 1 root root 13723 Jan 16 08:48 dnsmadeeasy.py
-rw-r--r--. 2 root root 12410 Jan 16 14:36 dnsmadeeasy.pyc
-rw-r--r--. 2 root root 12410 Jan 16 14:36 dnsmadeeasy.pyo
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 exoscale
drwxr-xr-x. 2 root root 4096 Jan 30 03:17 f5
-rw-r--r--. 1 root root 13404 Jan 16 08:48 haproxy.py
-rw-r--r--. 2 root root 13504 Jan 16 14:36 haproxy.pyc
-rw-r--r--. 2 root root 13504 Jan 16 14:36 haproxy.pyo

The Ansible playbook below does the following:

  • Creates 2 web servers as F5 LTM nodes;
  • Creates an F5 LTM pool;
  • Adds the 2 web servers as members of the F5 LTM pool created in the previous step;
  • Creates an F5 LTM vIP and associates the newly created F5 LTM pool with this vIP.

[dzhang@localhost f5]$ cat f5-v0.4.yml

- name: f5 config
  hosts: lb.davidwzhang.com
  connection: local

  tasks:
    - name: create a pool
      bigip_pool:
        lb_method: "ratio_member"
        name: "web"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
        monitors:
          - /Common/http
      delegate_to: localhost

    - name: create a node1
      bigip_node:
        host: "192.168.72.168"
        name: "node-1"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        monitors:
          - /Common/icmp
        validate_certs: "no"
      delegate_to: localhost

    - name: create a node2
      bigip_node:
        host: "192.168.72.128"
        name: "node-2"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        monitors:
          - /Common/icmp
        validate_certs: "no"
      delegate_to: localhost

    - name: add a node to pool
      bigip_pool_member:
        description: "webservers"
        host: "{{ item.host }}"
        name: "{{ item.name }}"
        password: "password"
        pool: "web"
        port: "80"
        connection_limit: "0"
        monitor_state: "enabled"
        server: "10.1.1.122"
        user: "admin"
        validate_certs: "no"
      delegate_to: localhost
      with_items:
        - host: "192.168.72.168"
          name: "node-1"
        - host: "192.168.72.128"
          name: "node-2"

    - name: create a VIP
      bigip_virtual_server:
        description: "ansible-vip"
        destination: "192.168.72.199"
        name: "ansible-vip"
        pool: "web"
        port: "80"
        snat: "Automap"
        password: "password"
        server: "10.1.1.122"
        user: "admin"
        all_profiles:
          - "http"
        validate_certs: "no"
      delegate_to: localhost

Run the playbook:

[dzhang@localhost f5]$ ansible-playbook f5-v0.4.yml
/usr/lib64/python2.7/site-packages/cffi/model.py:525: UserWarning: 'point_conversion_form_t' has no values explicitly defined; guessing that it is equivalent to 'unsigned int'
% self._get_c_name())

PLAY [f5 config] ***************************************************************

TASK [setup] *******************************************************************
ok: [lb.davidwzhang.com]

TASK [create a pool] ***********************************************************
ok: [lb.davidwzhang.com -> localhost]

TASK [create a node1] **********************************************************
ok: [lb.davidwzhang.com -> localhost]

TASK [create a node2] **********************************************************
ok: [lb.davidwzhang.com -> localhost]

TASK [add a node to pool] ******************************************************
ok: [lb.davidwzhang.com -> localhost] => (item={u'host': u'192.168.72.168', u'name': u'node-1'})
ok: [lb.davidwzhang.com -> localhost] => (item={u'host': u'192.168.72.128', u'name': u'node-2'})

TASK [create a VIP] ************************************************************
ok: [lb.davidwzhang.com -> localhost]

PLAY RECAP *********************************************************************
lb.davidwzhang.com : ok=6 changed=0 unreachable=0 failed=0

In F5 LTM, you can verify the configuration:

LTM Node:

NodeList

LTM Pool

Pool1

Pool2

LTM vIP

vIP

Install Ansible on CentOS 7

Installing Ansible on CentOS 7 is quite straightforward. First, make sure that you have the EPEL repository installed. If not, run the command below to install it.

yum install epel-release

Another prerequisite for installing and running Ansible is Python. If you don't have Python installed, install it before installing Ansible.
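
For example, on CentOS 7 you can check for the interpreter and, if needed, install the stock Python 2.7 package from the base repository:

python --version
yum install python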

Then you can install Ansible.

[root@localhost /]# yum install ansible
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: centos.mirror.serversaustralia.com.au
* epel: mirror.intergrid.com.au
* extras: mirror.ventraip.net.au
* updates: centos.mirror.serversaustralia.com.au
Resolving Dependencies
--> Running transaction check
---> Package ansible.noarch 0:2.2.1.0-1.el7 will be installed
--> Processing Dependency: sshpass for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-paramiko for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-keyczar for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-jinja2 for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: python-httplib2 for package: ansible-2.2.1.0-1.el7.noarch
--> Processing Dependency: PyYAML for package: ansible-2.2.1.0-1.el7.noarch
--> Running transaction check
---> Package PyYAML.x86_64 0:3.10-11.el7 will be installed
--> Processing Dependency: libyaml-0.so.2()(64bit) for package: PyYAML-3.10-11.el7.x86_64
---> Package python-httplib2.noarch 0:0.7.7-3.el7 will be installed
---> Package python-jinja2.noarch 0:2.7.2-2.el7 will be installed
--> Processing Dependency: python-babel >= 0.8 for package: python-jinja2-2.7.2-2.el7.noarch
--> Processing Dependency: python-markupsafe for package: python-jinja2-2.7.2-2.el7.noarch
---> Package python-keyczar.noarch 0:0.71c-2.el7 will be installed
--> Processing Dependency: python-crypto for package: python-keyczar-0.71c-2.el7.noarch
---> Package python2-paramiko.noarch 0:1.16.1-2.el7 will be installed
--> Processing Dependency: python2-ecdsa for package: python2-paramiko-1.16.1-2.el7.noarch
---> Package sshpass.x86_64 0:1.06-1.el7 will be installed
--> Running transaction check
---> Package libyaml.x86_64 0:0.1.4-11.el7_0 will be installed
---> Package python-babel.noarch 0:0.9.6-8.el7 will be installed
---> Package python-markupsafe.x86_64 0:0.11-10.el7 will be installed
---> Package python2-crypto.x86_64 0:2.6.1-13.el7 will be installed
--> Processing Dependency: libtomcrypt.so.0()(64bit) for package: python2-crypto-2.6.1-13.el7.x86_64
---> Package python2-ecdsa.noarch 0:0.13-4.el7 will be installed
--> Running transaction check
---> Package libtomcrypt.x86_64 0:1.17-23.el7 will be installed
--> Processing Dependency: libtommath >= 0.42.0 for package: libtomcrypt-1.17-23.el7.x86_64
--> Processing Dependency: libtommath.so.0()(64bit) for package: libtomcrypt-1.17-23.el7.x86_64
--> Running transaction check
---> Package libtommath.x86_64 0:0.42.0-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

============================================================================================================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================================================================================================
Installing:
ansible noarch 2.2.1.0-1.el7 epel 4.6 M
Installing for dependencies:
PyYAML x86_64 3.10-11.el7 base 153 k
libtomcrypt x86_64 1.17-23.el7 epel 224 k
libtommath x86_64 0.42.0-4.el7 epel 35 k
libyaml x86_64 0.1.4-11.el7_0 base 55 k
python-babel noarch 0.9.6-8.el7 base 1.4 M
python-httplib2 noarch 0.7.7-3.el7 epel 70 k
python-jinja2 noarch 2.7.2-2.el7 base 515 k
python-keyczar noarch 0.71c-2.el7 epel 218 k
python-markupsafe x86_64 0.11-10.el7 base 25 k
python2-crypto x86_64 2.6.1-13.el7 epel 476 k
python2-ecdsa noarch 0.13-4.el7 epel 83 k
python2-paramiko noarch 1.16.1-2.el7 epel 258 k
sshpass x86_64 1.06-1.el7 epel 21 k

Transaction Summary
============================================================================================================================================================================================================================================
Install 1 Package (+13 Dependent packages)

Total download size: 8.0 M
Installed size: 36 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/epel/packages/libtommath-0.42.0-4.el7.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID 352c64e5: NOKEY ] 455 kB/s | 1.0 MB 00:00:15 ETA
Public key for libtommath-0.42.0-4.el7.x86_64.rpm is not installed
(1/14): libtommath-0.42.0-4.el7.x86_64.rpm | 35 kB 00:00:01
(2/14): libyaml-0.1.4-11.el7_0.x86_64.rpm | 55 kB 00:00:01
(3/14): PyYAML-3.10-11.el7.x86_64.rpm | 153 kB 00:00:01
(4/14): python-httplib2-0.7.7-3.el7.noarch.rpm | 70 kB 00:00:00
(5/14): libtomcrypt-1.17-23.el7.x86_64.rpm | 224 kB 00:00:02
(6/14): python-markupsafe-0.11-10.el7.x86_64.rpm | 25 kB 00:00:00
(7/14): python2-crypto-2.6.1-13.el7.x86_64.rpm | 476 kB 00:00:01
(8/14): python2-ecdsa-0.13-4.el7.noarch.rpm | 83 kB 00:00:00
(9/14): python-keyczar-0.71c-2.el7.noarch.rpm | 218 kB 00:00:02
(10/14): python2-paramiko-1.16.1-2.el7.noarch.rpm | 258 kB 00:00:00
(11/14): sshpass-1.06-1.el7.x86_64.rpm | 21 kB 00:00:00
(12/14): python-jinja2-2.7.2-2.el7.noarch.rpm | 515 kB 00:00:04
(13/14): python-babel-0.9.6-8.el7.noarch.rpm | 1.4 MB 00:00:05
(14/14): ansible-2.2.1.0-1.el7.noarch.rpm | 4.6 MB 00:00:09
------------------------------------------------------------------------------------------------------------------------
Total 880 kB/s | 8.0 MB 00:00:09
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Importing GPG key 0x352C64E5:
Userid : "Fedora EPEL (7) <epel@fedoraproject.org>"
Fingerprint: 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5
Package : epel-release-7-9.noarch (@extras)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : sshpass-1.06-1.el7.x86_64 1/14
Installing : python-babel-0.9.6-8.el7.noarch 2/14
Installing : libtommath-0.42.0-4.el7.x86_64 3/14
Installing : libtomcrypt-1.17-23.el7.x86_64 4/14
Installing : python2-crypto-2.6.1-13.el7.x86_64 5/14
Installing : python-keyczar-0.71c-2.el7.noarch 6/14
Installing : python2-ecdsa-0.13-4.el7.noarch 7/14
Installing : python2-paramiko-1.16.1-2.el7.noarch 8/14
Installing : python-httplib2-0.7.7-3.el7.noarch 9/14
Installing : python-markupsafe-0.11-10.el7.x86_64 10/14
Installing : python-jinja2-2.7.2-2.el7.noarch 11/14
Installing : libyaml-0.1.4-11.el7_0.x86_64 12/14
Installing : PyYAML-3.10-11.el7.x86_64 13/14
Installing : ansible-2.2.1.0-1.el7.noarch 14/14
Verifying : python-keyczar-0.71c-2.el7.noarch 1/14
Verifying : libyaml-0.1.4-11.el7_0.x86_64 2/14
Verifying : python-jinja2-2.7.2-2.el7.noarch 3/14
Verifying : python-markupsafe-0.11-10.el7.x86_64 4/14
Verifying : python-httplib2-0.7.7-3.el7.noarch 5/14
Verifying : python2-ecdsa-0.13-4.el7.noarch 6/14
Verifying : libtomcrypt-1.17-23.el7.x86_64 7/14
Verifying : ansible-2.2.1.0-1.el7.noarch 8/14
Verifying : python2-paramiko-1.16.1-2.el7.noarch 9/14
Verifying : libtommath-0.42.0-4.el7.x86_64 10/14
Verifying : PyYAML-3.10-11.el7.x86_64 11/14
Verifying : python-babel-0.9.6-8.el7.noarch 12/14
Verifying : sshpass-1.06-1.el7.x86_64 13/14
Verifying : python2-crypto-2.6.1-13.el7.x86_64 14/14

Installed:
ansible.noarch 0:2.2.1.0-1.el7

Dependency Installed:
PyYAML.x86_64 0:3.10-11.el7 libtomcrypt.x86_64 0:1.17-23.el7 libtommath.x86_64 0:0.42.0-4.el7 libyaml.x86_64 0:0.1.4-11.el7_0 python-babel.noarch 0:0.9.6-8.el7 python-httplib2.noarch 0:0.7.7-3.el7
python-jinja2.noarch 0:2.7.2-2.el7 python-keyczar.noarch 0:0.71c-2.el7 python-markupsafe.x86_64 0:0.11-10.el7 python2-crypto.x86_64 0:2.6.1-13.el7 python2-ecdsa.noarch 0:0.13-4.el7 python2-paramiko.noarch 0:1.16.1-2.el7
sshpass.x86_64 0:1.06-1.el7

Complete!

Now you can verify your installation with the CLI command below:
[root@localhost /]# ansible --version
ansible 2.2.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
[root@localhost /]#

Use Terraform to Set Up AWS Auto-Scaling Group with ELB

An AWS Auto Scaling group helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. By using auto-scaling policies, an Auto Scaling group can launch or terminate instances as demand on your application increases or decreases.

Today, I will show you how to use a Terraform template to set up an AWS Auto Scaling group with an ELB. My Terraform version is terraform_0.8.8.

My Terraform template will:

  1. Create an aws_launch_configuration (webcluster), which defines how each EC2 instance in the auto-scaling group will be built;
  2. Create an AWS auto-scaling group (scalegroup);
  3. Create the 1st AWS auto-scaling policy (autopolicy) for auto-scaling group scale-out;
  4. Create the 2nd AWS auto-scaling policy (autopolicy-down) for auto-scaling group scale-in;
  5. Create the 1st AWS CloudWatch alarm (cpualarm) to trigger the auto-scaling group to scale out;
  6. Create the 2nd AWS CloudWatch alarm (cpualarm-down) to trigger the auto-scaling group to scale in;
  7. Create a security group (websg) to allow HTTP and management SSH connectivity;
  8. Create an Elastic Load Balancer with cookie session persistence and place it in front of the auto-scaling group (scalegroup). The ELB health-checks all EC2 instances in the auto-scaling group; if any EC2 instance fails the ELB health check, it won't receive any incoming traffic. If the existing EC2 instances are overloaded (in our case, CPU utilisation is over 60%), the auto-scaling group will create more EC2 instances to handle the spike. Conversely, the auto-scaling group will scale in when the EC2 instances are idle (CPU utilisation is less than 10%);
  9. Create an SSH key pair and use it for the AWS auto-scaling group (scalegroup);
  10. Create an output of the ELB DNS name.

Template

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_launch_configuration" "webcluster" {
  image_id = "ami-4ba3a328"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.websg.id}"]
  key_name = "${aws_key_pair.myawskeypair.key_name}"
  user_data = <<-EOF
#!/bin/bash
echo "hello, I am WebServer" >index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

data "aws_availability_zones" "allzones" {}

resource "aws_autoscaling_group" "scalegroup" {
  launch_configuration = "${aws_launch_configuration.webcluster.name}"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  min_size = 1
  max_size = 4
  enabled_metrics = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupTotalInstances"]
  metrics_granularity = "1Minute"
  load_balancers = ["${aws_elb.elb1.id}"]
  health_check_type = "ELB"

  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_policy" "autopolicy" {
  name = "terraform-autoplicy"
  scaling_adjustment = 1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm" {
  alarm_name = "terraform-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "60"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitors EC2 instance cpu utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy.arn}"]
}

resource "aws_autoscaling_policy" "autopolicy-down" {
  name = "terraform-autoplicy-down"
  scaling_adjustment = -1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm-down" {
  alarm_name = "terraform-alarm-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "10"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitors EC2 instance cpu utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy-down.arn}"]
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

resource "aws_lb_cookie_stickiness_policy" "cookie_stickness" {
  name = "cookiestickness"
  load_balancer = "${aws_elb.elb1.id}"
  lb_port = 80
  cookie_expiration_period = 600
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}
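
A typical workflow to deploy this template (the file can have any name ending in .tf) is to preview the plan, apply it, and then read the ELB DNS name from the declared output:

terraform plan                 # preview the resources Terraform will create
terraform apply                # create the launch configuration, ASG, alarms, security groups and ELB
terraform output elb-dns       # print the DNS name of the ELB defined in the output block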

Output:

LauchConfiguration

Auto_ScalingGroup_lauchconfiguration

Auto_ScalingGroup_lauchconfiguration_UserData

CloudWatchAlarm

CloudWatchAlarm_ScaleUpDown

Auto-scaling Policy

Auto_ScalingGroup_Policy_2

Scale Out

CloudWatchAlarm_ScaleUpDown_4

Auto_ScalingGroup_ActivityHistory_ScaleUpDown_1

Scale In

CloudWatchAlarm_ScaleUpDown_5

Auto_ScalingGroup_ActivityHistory_ScaleUpDown_2

Auto-scaling group

Auto_ScalingGroup_1

ELB

Auto_ScalingGroup_ELB

EC2 Instance

Auto_ScalingGroup_ELB_instances

NSX-v DLR OSPF Adjacencies Configuration Maximums

In one of the NSX docs, the following is suggested as the DLR OSPF adjacencies configuration maximum:

OSPF Adjacencies per DLR: 10

This maximum applies to NSX 6.1, 6.2 and 6.3.

OSPF optimizes the LSA flooding process on multi-access networks by using a DR (designated router) and a BDR (backup DR). Routers that are neither DR nor BDR are called DRother routers. DRother routers only form a full adjacency with the DR and BDR; between themselves, DRother routers stay in the 2Way state and form an OSPF neighborship but not a full adjacency.

To clarify whether "adjacencies" in the VMware NSX configuration maximums doc means "full adjacency" or "neighbor/2Way state", I raised an SR with VMware GSS. The response from VMware GSS is:

  • their "Adjacencies" means "neighborship", not "full adjacency";
  • neighbors in the 2Way state are also counted against the configuration limit of 10 OSPF adjacencies per DLR.
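
As a quick worked example: if the DLR's uplink sits on one broadcast segment together with 11 other OSPF routers and the DLR is a DRother, it only reaches the FULL state with the DR and BDR (2 routers), while the other 9 stay in 2Way; all 11 neighbors still count, so such a segment already exceeds the supported maximum of 10.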

AWS S3 Bucket for ELB Access Log with Terraform

To store your AWS ELB access logs in AWS S3, we use the Terraform template below to:

  1. Create a new S3 bucket called "elb-log.davidwzhang.com";
  2. Define a bucket policy which grants Elastic Load Balancing access to the newly created S3 bucket "elb-log.davidwzhang.com". As you know, each AWS region has its own account ID for Elastic Load Balancing; these account IDs can be found at http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#d0e10520. Since my template is for the ap-southeast-2 region, the account ID used is 783225319266.

Terraform Template:

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_s3_bucket" "elb" {
  bucket = "elb-log.davidwzhang.com"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::783225319266:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::elb-log.davidwzhang.com/*"
    }
  ]
}
EOF
}

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.elb.arn}"
}

To enable access logging for the ELB, we need to update our ELB resource as below:

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  instances = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

Please note I changed the access_logs interval to 5 minutes in the ELB resource definition so that we can verify the ELB access log output quickly. In a production environment, you will probably want a longer interval, e.g. 120 minutes.
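
Once log delivery starts, you can quickly confirm that objects are arriving with the AWS CLI; the bucket name and "elb" prefix come from the template above, and ELB writes its logs under its standard AWSLogs/<account-id>/elasticloadbalancing/<region>/... path beneath that prefix:

aws s3 ls s3://elb-log.davidwzhang.com/elb/ --recursive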

Output:

  • ELB configuration of access_log in AWS Console

elb_accesslog

  • S3 bucket for ELB access log

elb_accesslog_s3

  • S3 bucket prefix

elb_accesslog_s3_2

  • AWS Region

elb_accesslog_s3_3

  • ELB access-log file in AWS console

elb_accesslog_s3_6

  • ELB access-log content

elb_accesslog_s3_7