Use Terraform to Set Up AWS Auto-Scaling Group with ELB

An AWS Auto Scaling group helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. Through auto-scaling policies, the Auto Scaling group can launch or terminate instances as demand on your application increases or decreases.

Today, I will show you how to use a Terraform template to set up an AWS auto-scaling group with an ELB. My Terraform version is 0.8.8.

My Terraform template includes:

  1. Create an aws_launch_configuration (webcluster), which defines how each EC2 instance in the auto-scaling group will be built;
  2. Create an AWS auto-scaling group (scalegroup);
  3. Create the 1st AWS auto-scaling policy (autopolicy) for the auto-scaling group to scale out;
  4. Create the 2nd AWS auto-scaling policy (autopolicy-down) for the auto-scaling group to scale in;
  5. Create the 1st AWS CloudWatch alarm (cpualarm) to trigger the auto-scaling group to scale out;
  6. Create the 2nd AWS CloudWatch alarm (cpualarm-down) to trigger the auto-scaling group to scale in;
  7. Create a security group (websg) to allow HTTP and management SSH connectivity;
  8. Create an elastic load balancer with cookie session persistence and put it in front of the auto-scaling group (scalegroup). The ELB health-checks all EC2 instances in the auto-scaling group; if any EC2 instance fails the health check, it won't receive any incoming traffic. If the existing EC2 instances are overloaded (in our case, CPU utilisation over 60%), the auto-scaling group launches more EC2 instances to handle the spike; conversely, the auto-scaling group scales in when the EC2 instances are idle (CPU utilisation below 10%). A quick way to exercise this behaviour is sketched right after the template;
  9. Create an SSH key pair and use it for the AWS auto-scaling group (scalegroup);
  10. Create an output of the ELB DNS name;

Template

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_launch_configuration" "webcluster" {
  image_id = "ami-4ba3a328"
  instance_type = "t2.micro"
  security_groups = ["${aws_security_group.websg.id}"]
  key_name = "${aws_key_pair.myawskeypair.key_name}"

  user_data = <<-EOF
#!/bin/bash
echo "hello, I am WebServer" > index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

data "aws_availability_zones" "allzones" {}

resource "aws_autoscaling_group" "scalegroup" {
  launch_configuration = "${aws_launch_configuration.webcluster.name}"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  min_size = 1
  max_size = 4
  enabled_metrics = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupTotalInstances"]
  metrics_granularity = "1Minute"
  load_balancers = ["${aws_elb.elb1.id}"]
  health_check_type = "ELB"

  tag {
    key = "Name"
    value = "terraform-asg-example"
    propagate_at_launch = true
  }
}
resource "aws_autoscaling_policy" "autopolicy" {
  name = "terraform-autopolicy"
  scaling_adjustment = 1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm" {
  alarm_name = "terraform-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "60"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitors EC2 instance CPU utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy.arn}"]
}

resource "aws_autoscaling_policy" "autopolicy-down" {
  name = "terraform-autopolicy-down"
  scaling_adjustment = -1
  adjustment_type = "ChangeInCapacity"
  cooldown = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm-down" {
  alarm_name = "terraform-alarm-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods = "2"
  metric_name = "CPUUtilization"
  namespace = "AWS/EC2"
  period = "120"
  statistic = "Average"
  threshold = "10"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitors EC2 instance CPU utilization"
  alarm_actions = ["${aws_autoscaling_policy.autopolicy-down.arn}"]
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

resource "aws_lb_cookie_stickiness_policy" "cookie_stickness" {
  name = "cookiestickness"
  load_balancer = "${aws_elb.elb1.id}"
  lb_port = 80
  cookie_expiration_period = 600
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}

Output:

The AWS console screenshots (omitted here) show the launch configuration and its user data, the two CloudWatch alarms, the scale-out and scale-in auto-scaling policies with their activity history, the auto-scaling group, the ELB, and the EC2 instances behind it.

AWS S3 Bucket for ELB Access Log with Terraform

To store your AWS ELB access logs in AWS S3, we use the Terraform template below to:

  1. Create a new S3 bucket called "elb-log.davidwzhang.com";
  2. Define a bucket policy that grants Elastic Load Balancing write access ("s3:PutObject") to the newly created S3 bucket "elb-log.davidwzhang.com". Each AWS region has its own account ID for Elastic Load Balancing; these account IDs are listed at http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#d0e10520. My template targets the ap-southeast-2 region, where the account ID is 783225319266. One way to parameterize this per region is shown after this list.
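
Because the Elastic Load Balancing account ID differs per region, one option is to look it up from a map variable instead of hard-coding it. This is only a sketch on top of the template below: the variable names region and aws_elb_account_id are mine, and only the ap-southeast-2 entry comes from this post (other regions are in the AWS document linked above).

variable "region" {
  default = "ap-southeast-2"
}

variable "aws_elb_account_id" {
  default = {
    "ap-southeast-2" = "783225319266"
  }
}

# inside the bucket policy, the principal then becomes:
# "AWS": "arn:aws:iam::${lookup(var.aws_elb_account_id, var.region)}:root"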

Terraform Template:

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_s3_bucket" "elb" {
  bucket = "elb-log.davidwzhang.com"
  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::783225319266:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::elb-log.davidwzhang.com/*"
    }
  ]
}
EOF
}

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.elb.arn}"
}
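
After the ELB below has been running for at least one logging interval, you can confirm that log objects are being delivered with the AWS CLI (assuming it is installed and configured; Elastic Load Balancing writes under an AWSLogs/<your-account-id>/elasticloadbalancing/<region>/ prefix):

aws s3 ls s3://elb-log.davidwzhang.com/elb/ --recursive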

To enable access logging for the ELB, we need to update our ELB resource as below:

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval = 5
  }

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  instances = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

Please note I set the access_logs interval to 5 minutes in the ELB resource definition so that we can verify the ELB access log output quickly. In a production environment you will probably want the longer interval; for classic ELB access logs the only other supported value is 60 minutes.

Output:

The AWS console screenshots (omitted here) show the access_logs configuration on the ELB, the S3 bucket with its "elb" prefix and AWS region, and an ELB access-log file together with its contents.

AWS ELB with Terraform

Today, I will show you how to build an AWS ELB with Terraform.

My Terraform template includes:

  1. Create 2 EC2 instances as the back-end member servers. We will run a basic web service (HTTP on TCP 80) on these 2 EC2 instances;
  2. Create an AWS elastic load balancer which listens on TCP 80 and performs health checks to verify the status of the back-end web servers;
  3. Create a security group for the ELB, which allows incoming HTTP sessions to the ELB and health checks to the back-end web servers;
  4. Create a security group for the back-end web servers, which allows management SSH connections (TCP 22) and ELB health checks;
My Terraform template is:

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_instance" "web1" {
  ami = "ami-4ba3a328"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]

  user_data = <<-EOF
#!/bin/bash
echo "hello, I am web1" > index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-web1"
  }
}

resource "aws_instance" "web2" {
  ami = "ami-4ba3a328"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  key_name = "${aws_key_pair.myawskeypair.key_name}"

  user_data = <<-EOF
#!/bin/bash
echo "hello, I am Web2" > index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-web2"
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

data "aws_availability_zones" "allzones" {}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 0
    to_port = 0
    protocol = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups = ["${aws_security_group.elbsg.id}"]

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  health_check {
    healthy_threshold = 2
    unhealthy_threshold = 2
    timeout = 3
    target = "HTTP:80/"
    interval = 30
  }

  instances = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing = true
  idle_timeout = 400
  connection_draining = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}

Output is as below:

Please note the DNS name of the ELB; we will use this DNS name to reach the ELB.

The AWS console screenshots (omitted here) show the ELB, the registered EC2 instances, the health check, the listener, the ELB security group with its inbound and outbound rules, and the ELB tag.

Load Balancing Function:

To verify the load balancing function, I added a CNAME record (w3.davidwzhang.com) for the ELB DNS name and browsed to it repeatedly. The responses alternate between Web Server 1 ("hello, I am web1") and Web Server 2 ("hello, I am Web2"), confirming the load balancing works as expected.

Create AWS VPC with Terraform

Today, I will show you how to use Terraform to create a customized VPC in AWS.

Using this Terraform template, I will create a VPC:

  • Name: terraform-vpc
  • IP block for this VPC: 10.0.0.0/16
  • Public subnet: 10.0.1.0/24 (instances in this subnet will have Internet access)
  • Private subnet: 10.0.100.0/24

To verify that the newly created VPC works as expected, my template creates a test EC2 instance in the public subnet (10.0.1.0/24) and uploads a public key so that I can SSH to this new EC2 instance with the matching private key. To verify the new EC2 instance's Internet connectivity, I include the following in the template as well:

  1. Enable a simple web service on the EC2 instance;
  2. Create a security group which allows HTTP (TCP 80) and associate it with this EC2 instance;

provider "aws" {
  region = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_vpc" "terraform-vpc" {
  cidr_block = "10.0.0.0/16"
  instance_tenancy = "default"
  enable_dns_support = "true"
  enable_dns_hostnames = "true"
  enable_classiclink = "false"

  tags {
    Name = "terraform"
  }
}

resource "aws_subnet" "public-1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  cidr_block = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone = "ap-southeast-2b"

  tags {
    Name = "public"
  }
}

resource "aws_subnet" "private-1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
  cidr_block = "10.0.100.0/24"
  map_public_ip_on_launch = "false"
  availability_zone = "ap-southeast-2b"

  tags {
    Name = "private"
  }
}

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  tags {
    Name = "internet-gateway"
  }
}

resource "aws_route_table" "rt1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags {
    Name = "Default"
  }
}

#resource "aws_main_route_table_association" "association-subnet" {
#  vpc_id = "${aws_vpc.terraform-vpc.id}"
#  route_table_id = "${aws_route_table.rt1.id}"
#}

resource "aws_route_table_association" "association-subnet" {
  subnet_id = "${aws_subnet.public-1.id}"
  route_table_id = "${aws_route_table.rt1.id}"
}

resource "aws_instance" "terraform_linux" {
  ami = "ami-4ba3a328"
  instance_type = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  subnet_id = "${aws_subnet.public-1.id}"
  key_name = "${aws_key_pair.myawskeypair.key_name}"

  user_data = <<-EOF
#!/bin/bash
echo "hello, world" > index.html
nohup busybox httpd -f -p 80 &
EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-example"
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type = "ingress"
  from_port = 22
  to_port = 22
  protocol = "tcp"
  cidr_blocks = ["60.242.xxx.xxx/32"]
}

output "vpc-id" {
  value = "${aws_vpc.terraform-vpc.id}"
}

output "vpc-publicsubnet" {
  value = "${aws_subnet.public-1.cidr_block}"
}

output "vpc-publicsubnet-id" {
  value = "${aws_subnet.public-1.id}"
}

output "vpc-privatesubnet" {
  value = "${aws_subnet.private-1.cidr_block}"
}

output "vpc-privatesubnet-id" {
  value = "${aws_subnet.private-1.id}"
}

output "public_ip" {
  value = "${aws_instance.terraform_linux.public_ip}"
}

Below are the outputs of the Terraform template.

Outputs:

public_ip = 13.54.172.172
vpc-id = vpc-c3a418a7
vpc-privatesubnet = 10.0.100.0/24
vpc-privatesubnet-id = subnet-89dbb9ff
vpc-publicsubnet = 10.0.1.0/24
vpc-publicsubnet-id = subnet-b7d8bac1
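
With these outputs, a quick connectivity check from your workstation is possible (using the public_ip value from my run; yours will differ). The request should return "hello, world" from the user_data web service; the SSH session further below verifies key-based login.

curl http://13.54.172.172/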

We can verify the settings of the newly created VPC in the AWS console; the screenshots (omitted here) show the VPC, its subnets, the routing table, and the EC2 instance. Browsing the web page served by the test EC2 instance confirms the security group configuration.

SSH via private key

[dzhang@localhost vpc]$ ssh 13.54.172.172 -l ubuntu -i awskey
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-110-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Sat Mar 25 09:56:52 UTC 2017

System load: 0.16 Memory usage: 5% Processes: 82
Usage of /: 10.1% of 7.74GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:
https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@ip-10-0-1-15:~$ exit
logout

Create real-world-like AWS security groups using Terraform

[dzhang@localhost terraform]$ cat instance.tf
provider "aws" {
  access_key = "my_access_key"
  secret_key = "my_secret_key"
  region = "ap-southeast-2"
}

resource "aws_security_group" "app_server" {
  name = "app_server"
  description = "app server security group"
  vpc_id = "vpc-d808xxxx"

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["6x.24x.5x.16x/32"]
  }

  tags {
    Name = "APP"
  }
}

resource "aws_security_group" "web_server" {
  name = "web_server"
  description = "Web Server security group"
  vpc_id = "vpc-d808xxxx"

  ingress {
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port = 1024
    to_port = 65535
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags {
    Name = "WEB"
  }
}

resource "aws_security_group_rule" "internal-sg" {
  security_group_id = "${aws_security_group.web_server.id}"
  type = "egress"
  from_port = 8301
  to_port = 8301
  protocol = "udp"
  self = true
}

resource "aws_security_group_rule" "to_app" {
  security_group_id = "${aws_security_group.web_server.id}"
  type = "egress"
  from_port = 8301
  to_port = 8301
  protocol = "tcp"
  source_security_group_id = "${aws_security_group.app_server.id}"
}

resource "aws_security_group_rule" "from_web" {
  security_group_id = "${aws_security_group.app_server.id}"
  type = "ingress"
  from_port = 8301
  to_port = 8301
  protocol = "tcp"
  source_security_group_id = "${aws_security_group.web_server.id}"
}
[dzhang@localhost terraform]$

[dzhang@localhost terraform]$ terraform apply
aws_security_group.app_server: Creating...
  description: "" => "app server security group"
  egress.#: "" => "<computed>"
  ingress.#: "" => "1"
  ingress.625464618.cidr_blocks.#: "" => "1"
  ingress.625464618.cidr_blocks.0: "" => "6x.24x.5x.16x/32"
  ingress.625464618.from_port: "" => "22"
  ingress.625464618.protocol: "" => "tcp"
  ingress.625464618.security_groups.#: "" => "0"
  ingress.625464618.self: "" => "false"
  ingress.625464618.to_port: "" => "22"
  name: "" => "app_server"
  owner_id: "" => "<computed>"
  tags.%: "" => "1"
  tags.Name: "" => "APP"
  vpc_id: "" => "vpc-d808xxxx"
aws_security_group.web_server: Creating...
  description: "" => "Web Server security group"
  egress.#: "" => "1"
  egress.1543620397.cidr_blocks.#: "" => "1"
  egress.1543620397.cidr_blocks.0: "" => "0.0.0.0/0"
  egress.1543620397.from_port: "" => "1024"
  egress.1543620397.prefix_list_ids.#: "" => "0"
  egress.1543620397.protocol: "" => "tcp"
  egress.1543620397.security_groups.#: "" => "0"
  egress.1543620397.self: "" => "false"
  egress.1543620397.to_port: "" => "65535"
  ingress.#: "" => "1"
  ingress.2214680975.cidr_blocks.#: "" => "1"
  ingress.2214680975.cidr_blocks.0: "" => "0.0.0.0/0"
  ingress.2214680975.from_port: "" => "80"
  ingress.2214680975.protocol: "" => "tcp"
  ingress.2214680975.security_groups.#: "" => "0"
  ingress.2214680975.self: "" => "false"
  ingress.2214680975.to_port: "" => "80"
  name: "" => "web_server"
  owner_id: "" => "<computed>"
  tags.%: "" => "1"
  tags.Name: "" => "WEB"
  vpc_id: "" => "vpc-d808xxxx"
aws_security_group.app_server: Creation complete
aws_security_group.web_server: Creation complete
aws_security_group_rule.from_web: Creating...
  from_port: "" => "8301"
  protocol: "" => "tcp"
  security_group_id: "" => "sg-ba43ecdd"
  self: "" => "false"
  source_security_group_id: "" => "sg-b943ecde"
  to_port: "" => "8301"
  type: "" => "ingress"
aws_security_group_rule.to_app: Creating...
  from_port: "" => "8301"
  protocol: "" => "tcp"
  security_group_id: "" => "sg-b943ecde"
  self: "" => "false"
  source_security_group_id: "" => "sg-ba43ecdd"
  to_port: "" => "8301"
  type: "" => "egress"
aws_security_group_rule.internal-sg: Creating...
  from_port: "" => "8301"
  protocol: "" => "udp"
  security_group_id: "" => "sg-b943ecde"
  self: "" => "true"
  source_security_group_id: "" => "<computed>"
  to_port: "" => "8301"
  type: "" => "egress"
aws_security_group_rule.from_web: Creation complete
aws_security_group_rule.internal-sg: Creation complete
aws_security_group_rule.to_app: Creation complete

Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
[dzhang@localhost terraform]$ terraform destory
Usage: terraform [--version] [--help] <command> [args]

The available commands for execution are listed below.
The most common, useful commands are shown first, followed by
less common or more advanced commands. If you’re just getting
started with Terraform, stick with the common commands. For the
other commands, please read the help and docs before usage.

Common commands:
apply Builds or changes infrastructure
console Interactive console for Terraform interpolations
destroy Destroy Terraform-managed infrastructure
fmt Rewrites config files to canonical format
get Download and install modules for the configuration
graph Create a visual graph of Terraform resources
import Import existing infrastructure into Terraform
init Initializes Terraform configuration from a module
output Read an output from a state file
plan Generate and show an execution plan
push Upload this Terraform module to Atlas to run
refresh Update local state file against real resources
remote Configure remote state storage
show Inspect Terraform state or plan
taint Manually mark a resource for recreation
untaint Manually unmark a resource as tainted
validate Validates the Terraform files
version Prints the Terraform version

All other commands:
debug Debug output management (experimental)
state Advanced state management
[dzhang@localhost terraform]$ terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.

Enter a value: yes

aws_security_group.app_server: Refreshing state… (ID: sg-ba43ecdd)
aws_security_group.web_server: Refreshing state… (ID: sg-b943ecde)
aws_security_group_rule.internal-sg: Refreshing state… (ID: sgrule-2476559081)
aws_security_group_rule.to_app: Refreshing state… (ID: sgrule-2890481209)
aws_security_group_rule.from_web: Refreshing state… (ID: sgrule-3247970428)
aws_security_group_rule.from_web: Destroying…
aws_security_group_rule.to_app: Destroying…
aws_security_group_rule.internal-sg: Destroying…
aws_security_group_rule.internal-sg: Destruction complete
aws_security_group_rule.from_web: Destruction complete
aws_security_group_rule.to_app: Destruction complete
aws_security_group.app_server: Destroying…
aws_security_group.web_server: Destroying…
aws_security_group.web_server: Destruction complete
aws_security_group.app_server: Destruction complete

Destroy complete! Resources: 5 destroyed.
[dzhang@localhost terraform]$

Create an AWS security group using Terraform

  • Create my Terraform file

[dzhang@localhost terraform]$ cat instance.tf
provider "aws" {
  access_key = "my_access_key"
  secret_key = "my_secret_key"
  region = "ap-southeast-2"
}

resource "aws_security_group" "allow_ssh" {
  name = "allow_all"
  description = "Allow inbound SSH traffic from my IP"
  vpc_id = "VPC-ID"

  ingress {
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["6x.24x.5x.167/32"]
  }

  tags {
    Name = "Allow SSH"
  }
}

  • Terraform Plan

[dzhang@localhost terraform]$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_security_group.allow_ssh
    description: "Allow inbound SSH traffic from my IP"
    egress.#: "<computed>"
    ingress.#: "1"
    ingress.625464618.cidr_blocks.#: "1"
    ingress.625464618.cidr_blocks.0: "6x.24x.5x.167/32"
    ingress.625464618.from_port: "22"
    ingress.625464618.protocol: "tcp"
    ingress.625464618.security_groups.#: "0"
    ingress.625464618.self: "false"
    ingress.625464618.to_port: "22"
    name: "allow_all"
    owner_id: "<computed>"
    tags.%: "1"
    tags.Name: "Allow SSH"
    vpc_id: "vpc-d8089xxx"

Plan: 1 to add, 0 to change, 0 to destroy.

  • Terraform Apply

[dzhang@localhost terraform]$ terraform apply
aws_security_group.allow_ssh: Creating...
  description: "" => "Allow inbound SSH traffic from my IP"
  egress.#: "" => "<computed>"
  ingress.#: "" => "1"
  ingress.625464618.cidr_blocks.#: "" => "1"
  ingress.625464618.cidr_blocks.0: "" => "6x.24x.5x.167/32"
  ingress.625464618.from_port: "" => "22"
  ingress.625464618.protocol: "" => "tcp"
  ingress.625464618.security_groups.#: "" => "0"
  ingress.625464618.self: "" => "false"
  ingress.625464618.to_port: "" => "22"
  name: "" => "allow_all"
  owner_id: "" => "<computed>"
  tags.%: "" => "1"
  tags.Name: "" => "Allow SSH"
  vpc_id: "" => "vpc-d8089xxx"
aws_security_group.allow_ssh: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

  • Verify the security group was created successfully in the AWS console

The console screenshots (omitted here) show the security group, its inbound firewall rule, and its tag.

  • terraform.tfstate

[dzhang@localhost terraform]$ cat terraform.tfstate
{
    "version": 3,
    "terraform_version": "0.8.6",
    "serial": 3,
    "lineage": "7da04b67-0d9d-4337-80a7-9ffe05753f83",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {
                "aws_security_group.allow_ssh": {
                    "type": "aws_security_group",
                    "depends_on": [],
                    "primary": {
                        "id": "sg-496ec32e",
                        "attributes": {
                            "description": "Allow inbound SSH traffic from my IP",
                            "egress.#": "0",
                            "id": "sg-496ec32e",
                            "ingress.#": "1",
                            "ingress.625464618.cidr_blocks.#": "1",
                            "ingress.625464618.cidr_blocks.0": "6x.24x.5x.167/32",
                            "ingress.625464618.from_port": "22",
                            "ingress.625464618.protocol": "tcp",
                            "ingress.625464618.security_groups.#": "0",
                            "ingress.625464618.self": "false",
                            "ingress.625464618.to_port": "22",
                            "name": "allow_all",
                            "owner_id": "639399813107",
                            "tags.%": "1",
                            "tags.Name": "Allow SSH",
                            "vpc_id": "vpc-d8089xxx"
                        },
                        "meta": {},
                        "tainted": false
                    },
                    "deposed": [],
                    "provider": ""
                }
            },
            "depends_on": []
        }
    ]
}
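
Rather than reading terraform.tfstate by hand, you can also inspect the current state with Terraform's own read-only command (listed in the command help earlier in this post); it presents the same data:

terraform show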

Install Python Paramiko on CentOS 7

You need the following packages installed so that the Paramiko module installation can complete successfully:

yum install -y python-devel libffi-devel openssl-devel

[root@localhost python2.7]# pip install paramiko
Collecting paramiko
  Using cached paramiko-2.0.2-py2.py3-none-any.whl
Collecting cryptography>=1.1 (from paramiko)
  Using cached cryptography-1.5.tar.gz
Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.7 in ./site-packages (from paramiko)
Requirement already satisfied (use --upgrade to upgrade): idna>=2.0 in ./site-packages (from cryptography>=1.1->paramiko)
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in ./site-packages (from cryptography>=1.1->paramiko)
Requirement already satisfied (use --upgrade to upgrade): setuptools>=11.3 in ./site-packages (from cryptography>=1.1->paramiko)
Requirement already satisfied (use --upgrade to upgrade): enum34 in ./site-packages (from cryptography>=1.1->paramiko)
Requirement already satisfied (use --upgrade to upgrade): ipaddress in ./site-packages (from cryptography>=1.1->paramiko)
Requirement already satisfied (use --upgrade to upgrade): cffi>=1.4.1 in /usr/lib64/python2.7/site-packages (from cryptography>=1.1->paramiko)
Requirement already satisfied (use --upgrade to upgrade): pycparser in ./site-packages (from cffi>=1.4.1->cryptography>=1.1->paramiko)
Building wheels for collected packages: cryptography
  Running setup.py bdist_wheel for cryptography ... done
  Stored in directory: /root/.cache/pip/wheels/d4/98/43/a428a8aed7285f934d18efd787647455d7ef9a9dda81f22839
Successfully built cryptography
Installing collected packages: cryptography, paramiko
  Found existing installation: cryptography 0.8.2
    Uninstalling cryptography-0.8.2:
      Successfully uninstalled cryptography-0.8.2
Successfully installed cryptography-1.5 paramiko-2.0.2
[root@localhost python2.7]#
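
A quick sanity check that the module is importable (a one-liner sketch; it just prints the installed version):

python -c "import paramiko; print paramiko.__version__"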

Note: Paramiko is a Python (2.6+, 3.3+) implementation of the SSHv2 protocol, providing both client and server functionality.

If you want to know more about the Paramiko module, please visit http://www.paramiko.org.