Use Terraform to Set Up AWS Auto-Scaling Group with ELB

An AWS Auto Scaling group helps you ensure that you have the correct number of Amazon EC2 instances available to handle the load for your application. Driven by auto-scaling policies, the Auto Scaling group launches or terminates instances as demand on your application increases or decreases.

Today, I will show you how to use a Terraform template to set up an AWS auto-scaling group with an ELB. My Terraform version is terraform_0.8.8.

My Terraform template includes:

  1. Create an aws_launch_configuration (webcluster), which defines how each EC2 instance in the auto-scaling group will be built;
  2. Create an AWS auto-scaling group (scalegroup);
  3. Create the 1st AWS auto-scaling policy (autopolicy) for scaling the group out;
  4. Create the 2nd AWS auto-scaling policy (autopolicy-down) for scaling the group in;
  5. Create the 1st AWS CloudWatch alarm (cpualarm) to trigger the auto-scaling group to scale out;
  6. Create the 2nd AWS CloudWatch alarm (cpualarm-down) to trigger the auto-scaling group to scale in;
  7. Create a security group (websg) to allow HTTP and management SSH connectivity;
  8. Create an Elastic Load Balancer with cookie session persistence and put it in front of the auto-scaling group (scalegroup). The ELB health checks all EC2 instances in the auto-scaling group; any EC2 instance that fails the ELB health check receives no incoming traffic. If the existing EC2 instances are overloaded (in our case, CPU utilisation is over 60%), the auto-scaling group launches more EC2 instances to handle the spike. Conversely, the auto-scaling group scales in when the EC2 instances are idle (CPU utilisation is less than 10%);
  9. Create an SSH key pair and use it for the AWS auto-scaling group (scalegroup);
  10. Output the ELB DNS name;

Template

provider "aws" {
  region                  = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_launch_configuration" "webcluster" {
  image_id        = "ami-4ba3a328"
  instance_type   = "t2.micro"
  security_groups = ["${aws_security_group.websg.id}"]
  key_name        = "${aws_key_pair.myawskeypair.key_name}"

  user_data = <<-EOF
              #!/bin/bash
              echo "hello, I am WebServer" >index.html
              nohup busybox httpd -f -p 80 &
              EOF

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name   = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

data "aws_availability_zones" "allzones" {}

resource "aws_autoscaling_group" "scalegroup" {
  launch_configuration = "${aws_launch_configuration.webcluster.name}"
  availability_zones   = ["${data.aws_availability_zones.allzones.names}"]
  min_size             = 1
  max_size             = 4
  enabled_metrics      = ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity", "GroupInServiceInstances", "GroupTotalInstances"]
  metrics_granularity  = "1Minute"
  load_balancers       = ["${aws_elb.elb1.id}"]
  health_check_type    = "ELB"

  tag {
    key                 = "Name"
    value               = "terraform-asg-example"
    propagate_at_launch = true
  }
}

resource "aws_autoscaling_policy" "autopolicy" {
  name                   = "terraform-autopolicy"
  scaling_adjustment     = 1
  adjustment_type        = "ChangeInCapacity"
  cooldown               = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm" {
  alarm_name          = "terraform-alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "60"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitors EC2 instance CPU utilization"
  alarm_actions     = ["${aws_autoscaling_policy.autopolicy.arn}"]
}

resource "aws_autoscaling_policy" "autopolicy-down" {
  name                   = "terraform-autopolicy-down"
  scaling_adjustment     = -1
  adjustment_type        = "ChangeInCapacity"
  cooldown               = 300
  autoscaling_group_name = "${aws_autoscaling_group.scalegroup.name}"
}

resource "aws_cloudwatch_metric_alarm" "cpualarm-down" {
  alarm_name          = "terraform-alarm-down"
  comparison_operator = "LessThanOrEqualToThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "120"
  statistic           = "Average"
  threshold           = "10"

  dimensions {
    AutoScalingGroupName = "${aws_autoscaling_group.scalegroup.name}"
  }

  alarm_description = "This metric monitors EC2 instance CPU utilization"
  alarm_actions     = ["${aws_autoscaling_policy.autopolicy-down.arn}"]
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["60.242.xxx.xxx/32"]
}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name               = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups    = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket        = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval      = 5
  }

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

resource "aws_lb_cookie_stickiness_policy" "cookie_stickness" {
  name                     = "cookiestickness"
  load_balancer            = "${aws_elb.elb1.id}"
  lb_port                  = 80
  cookie_expiration_period = 600
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}

Output:

In the AWS console you can verify the results: the launch configuration (including its user data), the two CloudWatch alarms, the scale-out and scale-in auto-scaling policies and their activity history, the auto-scaling group itself, the ELB, and the EC2 instances registered behind it.

AWS S3 Bucket for ELB Access Log with Terraform

To store your AWS ELB access logs in AWS S3, we use the Terraform template below to:

  1. Create a new S3 bucket called "elb-log.davidwzhang.com";
  2. Define a bucket policy which grants Elastic Load Balancing access to the newly created S3 bucket "elb-log.davidwzhang.com". As you know, each AWS region has its own account ID for Elastic Load Balancing. These account IDs are listed at http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-access-logs.html#d0e10520. As my template targets the ap-southeast-2 region, the account ID used is 783225319266 (see the variable-map sketch below for a way to avoid hard-coding it).
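
One way to avoid hard-coding that account ID is to keep a region-to-account-ID map in a variable and look it up from the region in use. This is only an illustrative sketch I am adding here, not part of the original template; the variable names and the us-east-1 entry are my own assumptions, so verify the IDs against the AWS documentation linked above:

variable "aws_region" {
  default = "ap-southeast-2"
}

# Hypothetical map of Elastic Load Balancing account IDs per region.
variable "aws_elb_account_id" {
  type = "map"
  default = {
    "ap-southeast-2" = "783225319266"
    "us-east-1"      = "127311923021"
  }
}

# Inside the bucket policy, the Principal line could then be written as:
#   "AWS": "arn:aws:iam::${lookup(var.aws_elb_account_id, var.aws_region)}:root"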

Terraform Template:

provider "aws" {
  region                  = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_s3_bucket" "elb" {
  bucket = "elb-log.davidwzhang.com"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::783225319266:root"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::elb-log.davidwzhang.com/*"
    }
  ]
}
EOF
}

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.elb.arn}"
}

To enable access logging for the ELB, we need to update our ELB resource as below:

resource "aws_elb" "elb1" {
  name               = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups    = ["${aws_security_group.elbsg.id}"]

  access_logs {
    bucket        = "elb-log.davidwzhang.com"
    bucket_prefix = "elb"
    interval      = 5
  }

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  instances                   = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

Please note that I changed the access_logs interval to 5 minutes in the ELB resource definition so that we can verify the ELB access log output quickly. In a production environment, you probably want the longer interval, e.g. 60 minutes.

Output:

  • ELB access_log configuration in the AWS console
  • S3 bucket for the ELB access log, the bucket prefix and the AWS region
  • ELB access-log file and its content in the AWS console

AWS ELB with Terraform

Today, I will show you how to build an AWS ELB with Terraform.

My Terraform template includes:

  1. Create 2 EC2 instances as the back-end member servers. We will run a basic web service (HTTP on TCP 80) on these 2 EC2 instances;
  2. Create an AWS Elastic Load Balancer that listens on TCP 80 and performs health checks to verify the status of the back-end web servers;
  3. Create a security group for the ELB, which allows incoming HTTP sessions to the AWS ELB and health checks to the back-end web servers;
  4. Create a security group for the back-end web servers, which allows management SSH connections (TCP 22) and ELB health checks;

My Terraform template is:

provider "aws" {
  region                  = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_instance" "web1" {
  ami                    = "ami-4ba3a328"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]

  user_data = <<-EOF
              #!/bin/bash
              echo "hello, I am web1" >index.html
              nohup busybox httpd -f -p 80 &
              EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-web1"
  }
}

resource "aws_instance" "web2" {
  ami                    = "ami-4ba3a328"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  key_name               = "${aws_key_pair.myawskeypair.key_name}"

  user_data = <<-EOF
              #!/bin/bash
              echo "hello, I am Web2" >index.html
              nohup busybox httpd -f -p 80 &
              EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-web2"
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name   = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

resource "aws_security_group" "websg" {
  name = "security_group_for_web_server"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["60.242.xxx.xxx/32"]
}

data "aws_availability_zones" "allzones" {}

resource "aws_security_group" "elbsg" {
  name = "security_group_for_elb"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elb" "elb1" {
  name               = "terraform-elb"
  availability_zones = ["${data.aws_availability_zones.allzones.names}"]
  security_groups    = ["${aws_security_group.elbsg.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }

  instances                   = ["${aws_instance.web1.id}", "${aws_instance.web2.id}"]
  cross_zone_load_balancing   = true
  idle_timeout                = 400
  connection_draining         = true
  connection_draining_timeout = 400

  tags {
    Name = "terraform-elb"
  }
}

output "availabilityzones" {
  value = ["${data.aws_availability_zones.allzones.names}"]
}

output "elb-dns" {
  value = "${aws_elb.elb1.dns_name}"
}

Output is as below:

  • ELB: note the DNS name of the ELB; we will use this DNS name to reach it
  • EC2 instances registered with the ELB
  • Health check
  • Listener
  • Security group, with its inbound and outbound rules
  • ELB tag

Load Balancing Function:

To verify the load balancing function, I added a CNAME record, w3.davidwzhang.com, for the ELB DNS name in my DNS zone.
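
If the davidwzhang.com zone were hosted in Route 53, the CNAME could be managed by Terraform as well. This is only a hypothetical sketch I am adding for illustration; the zone_id below is a placeholder, not a value from the original post:

resource "aws_route53_record" "w3" {
  zone_id = "ZXXXXXXXXXXXXXX"          # placeholder hosted-zone ID
  name    = "w3.davidwzhang.com"
  type    = "CNAME"
  ttl     = "300"
  records = ["${aws_elb.elb1.dns_name}"]
}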

Now I use w3.davidwzhang.com to verify the load balancing works as expected.

Requests to w3.davidwzhang.com are answered by Web Server1 and Web Server2 in turn, confirming that the ELB balances traffic across both back-end instances.

Create AWS VPC with Terraform

Today, I will show you how to use Terraform to create a customized VPC in AWS.

Using this Terraform template, I will create a VPC:

  • Name: terraform-vpc
  • IP block for this VPC: 10.0.0.0/16
  • Public Subnet: 10.0.1.0/24. (Note: VM instances in this subnet will have Internet access)
  • Private Subnet: 10.0.100.0/24

To verify that the newly created VPC works as expected, my template creates a test EC2 instance in the public subnet (10.0.1.0/24) and uploads a public key so that I can SSH to this new EC2 instance with the matching private key. To verify the new EC2 instance's Internet connectivity, I include the following in the template as well:

  1. Enable a simple web service on the EC2 instance;
  2. Create a security group which allows HTTP (TCP 80) and associate it with this EC2 instance;

 

provider "aws" {
  region                  = "ap-southeast-2"
  shared_credentials_file = "${pathexpand("~/.aws/credentials")}"
  #shared_credentials_file = "/home/dzhang/.aws/credentials"
}

resource "aws_vpc" "terraform-vpc" {
  cidr_block           = "10.0.0.0/16"
  instance_tenancy     = "default"
  enable_dns_support   = "true"
  enable_dns_hostnames = "true"
  enable_classiclink   = "false"

  tags {
    Name = "terraform"
  }
}

resource "aws_subnet" "public-1" {
  vpc_id                  = "${aws_vpc.terraform-vpc.id}"
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = "true"
  availability_zone       = "ap-southeast-2b"

  tags {
    Name = "public"
  }
}

resource "aws_subnet" "private-1" {
  vpc_id                  = "${aws_vpc.terraform-vpc.id}"
  cidr_block              = "10.0.100.0/24"
  map_public_ip_on_launch = "false"
  availability_zone       = "ap-southeast-2b"

  tags {
    Name = "private"
  }
}

resource "aws_internet_gateway" "gw" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  tags {
    Name = "internet-gateway"
  }
}

resource "aws_route_table" "rt1" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = "${aws_internet_gateway.gw.id}"
  }

  tags {
    Name = "Default"
  }
}

#resource "aws_main_route_table_association" "association-subnet" {
#  vpc_id         = "${aws_vpc.terraform-vpc.id}"
#  route_table_id = "${aws_route_table.rt1.id}"
#}

resource "aws_route_table_association" "association-subnet" {
  subnet_id      = "${aws_subnet.public-1.id}"
  route_table_id = "${aws_route_table.rt1.id}"
}

resource "aws_instance" "terraform_linux" {
  ami                    = "ami-4ba3a328"
  instance_type          = "t2.micro"
  vpc_security_group_ids = ["${aws_security_group.websg.id}"]
  subnet_id              = "${aws_subnet.public-1.id}"
  key_name               = "${aws_key_pair.myawskeypair.key_name}"

  user_data = <<-EOF
              #!/bin/bash
              echo "hello, world" >index.html
              nohup busybox httpd -f -p 80 &
              EOF

  lifecycle {
    create_before_destroy = true
  }

  tags {
    Name = "terraform-example"
  }
}

resource "aws_key_pair" "myawskeypair" {
  key_name   = "myawskeypair"
  public_key = "${file("awskey.pub")}"
}

resource "aws_security_group" "websg" {
  name   = "security_group_for_web_server"
  vpc_id = "${aws_vpc.terraform-vpc.id}"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group_rule" "ssh" {
  security_group_id = "${aws_security_group.websg.id}"
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["60.242.xxx.xxx/32"]
}

output "vpc-id" {
  value = "${aws_vpc.terraform-vpc.id}"
}

output "vpc-publicsubnet" {
  value = "${aws_subnet.public-1.cidr_block}"
}

output "vpc-publicsubnet-id" {
  value = "${aws_subnet.public-1.id}"
}

output "vpc-privatesubnet" {
  value = "${aws_subnet.private-1.cidr_block}"
}

output "vpc-privatesubnet-id" {
  value = "${aws_subnet.private-1.id}"
}

output "public_ip" {
  value = "${aws_instance.terraform_linux.public_ip}"
}

Below are the outputs of the Terraform template.

Outputs:

public_ip = 13.54.172.172
vpc-id = vpc-c3a418a7
vpc-privatesubnet = 10.0.100.0/24
vpc-privatesubnet-id = subnet-89dbb9ff
vpc-publicsubnet = 10.0.1.0/24
vpc-publicsubnet-id = subnet-b7d8bac1

We can verify the settings of the newly created VPC in the AWS console: the VPC itself, its subnets, the routing table and the EC2 instance. Browsing the web page on the test EC2 instance confirms the security group configuration.

SSH via private key

[dzhang@localhost vpc]$ ssh 13.54.172.172 -l ubuntu -i awskey
Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-110-generic x86_64)

* Documentation: https://help.ubuntu.com/

System information as of Sat Mar 25 09:56:52 UTC 2017

System load: 0.16 Memory usage: 5% Processes: 82
Usage of /: 10.1% of 7.74GB Swap usage: 0% Users logged in: 0

Graph this data and manage this system at:
https://landscape.canonical.com/

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

ubuntu@ip-10-0-1-15:~$ exit
logout

Terraform Remote State File on AWS S3

Every time you apply your Terraform template, Terraform records the current infrastructure status in a Terraform state file. By default, the state files are stored locally. Terraform keeps 2 state files for each Terraform template: one is the current state (terraform.tfstate) and the other is the previous version of the state (terraform.tfstate.backup).

In an enterprise environment, the common practice for managing Terraform state files is to:

  1. Store the state files in a shared location;
  2. Keep all versions of the Terraform state file, which lets you roll back to any older version instead of only the previous one;
  3. Encrypt the state files;

Terraform offers built-in support for remote state storage. Currently, Terraform supports a handful of remote storage backends, including Amazon S3, Azure, HashiCorp Consul and Atlas.

Amazon S3 meets almost all of our requirements:

  1. Amazon S3 supports encryption (AES-256);
  2. Amazon S3 stores every version of the state file;
  3. When Terraform talks to AWS S3, TLS (Transport Layer Security) is used;

So here I will show you how to use Amazon S3 as Terraform remote state storage.

Step 1: create an S3 bucket;

resource "aws_s3_bucket" "my-terraform-state" {
  bucket = "my-terraform-state.davidwzhang.com"

  versioning {
    enabled = true
  }

  lifecycle {
    prevent_destroy = true
  }
}

output "s3_bucket_arn" {
  value = "${aws_s3_bucket.my-terraform-state.arn}"
}


Step 2: configure your Terraform template to use the S3 bucket

terraform remote config -backend=s3 -backend-config="bucket=my-terraform-state.davidwzhang.com" -backend-config="key=terraform/vpc.tfstate" -backend-config="region=ap-southeast-2" -backend-config="encrypt=true"
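
For reference, Terraform 0.9 and later replace the terraform remote config command with a backend block declared in the configuration itself. A minimal sketch using the same bucket and key as above:

terraform {
  backend "s3" {
    bucket  = "my-terraform-state.davidwzhang.com"
    key     = "terraform/vpc.tfstate"
    region  = "ap-southeast-2"
    encrypt = true
  }
}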


Now you can log in to your AWS console and check the Terraform state file on AWS S3.


Please note that Terraform still stores the current and previous state files locally as normal. These state files are kept in the newly created sub-folder .terraform under the Terraform template folder.

[dzhang@localhost vpc]$ ls -al
total 20
-rw-rw-r--.  1 dzhang dzhang  1547 Mar 19 17:15 ~
drwxrwxr-x.  3 dzhang dzhang    74 Mar 20 22:00 .
drwxrwxr-x. 10 dzhang dzhang  4096 Mar 20 21:41 ..
drwxr-xr-x.  2 dzhang dzhang    61 Mar 19 17:10 .terraform
-rw-r--r--.  1 dzhang dzhang  3064 Mar 20 22:00 vpc.tf

[dzhang@localhost .terraform]$ ls -al
total 20
drwxr-xr-x.  2 dzhang dzhang    61 Mar 19 17:10 .
drwxrwxr-x.  3 dzhang dzhang    74 Mar 20 22:00 ..
-rw-rw-r--.  1 dzhang dzhang   750 Mar 24 21:06 terraform.tfstate
-rw-rw-r--.  1 dzhang dzhang 14213 Mar 24 21:05 terraform.tfstate.backup
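
One benefit of keeping the state in a shared S3 bucket is that other Terraform templates can read its outputs through the terraform_remote_state data source. This is only a sketch I am adding for illustration, written against the 0.8-era syntax and reusing the bucket and key configured above; the referenced attribute is the vpc-publicsubnet-id output defined in the VPC template earlier:

data "terraform_remote_state" "vpc" {
  backend = "s3"
  config {
    bucket = "my-terraform-state.davidwzhang.com"
    key    = "terraform/vpc.tfstate"
    region = "ap-southeast-2"
  }
}

# For example, another template could place an instance in the shared VPC's public subnet:
#   subnet_id = "${data.terraform_remote_state.vpc.vpc-publicsubnet-id}"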

Automate OpenStack Security Group with Terraform

Heat is the main project in the OpenStack orchestration program, and we can use Heat to automate security group implementation. If you have the NSXv plugin integrated with your OpenStack environment, you can use a Heat template to automate your NSX DFW rule implementation as well. Here I will show you how to use Terraform to do the same magic: automate security group deployment.

Below is my Terraform template, which creates a security group and 5 rules within the newly created security group.

provider "openstack" {
  user_name   = "${var.openstack_user_name}"
  password    = "${var.openstack_password}"
  tenant_name = "tenant1"
  auth_url    = "http://keystone.ops.com.au:5000/v3"
  domain_name = "domain1"
}

resource "openstack_networking_secgroup_v2" "secgroup_2" {
  name        = "secgroup_2"
  description = "Terraform security group"
  tenant_id   = "2b8d09cb778346a4ae70c16ee65a5c69"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" {
  direction         = "egress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id         = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on        = ["openstack_networking_secgroup_v2.secgroup_2"]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_2" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id         = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_1",
  ]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_3" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "10.41.129.11/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id         = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_2",
  ]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_4" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 8080
  port_range_max    = 8080
  remote_ip_prefix  = "10.41.129.11/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id         = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_3",
  ]
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_5" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "10.41.129.11/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_2.id}"
  tenant_id         = "2b8d09cb778346a4ae70c16ee65a5c69"
  depends_on = [
    "openstack_networking_secgroup_v2.secgroup_2",
    "openstack_networking_secgroup_rule_v2.secgroup_rule_4",
  ]
}

Please make sure that you add the resource dependency for each firewall rule via "depends_on".

If not, you will see errors like the one below when you run "terraform apply", and each run will only be able to add 1 rule.

2017/03/06 19:47:46 [TRACE] Preserving existing state lineage "607d13a8-c268-498a-bbb4-07f98f0dd6b4"
Error applying plan:

1 error(s) occurred:

2017/03/06 19:47:46 [DEBUG] plugin: waiting for all plugin processes to complete...
* openstack_networking_secgroup_rule_v2.secgroup2_rule_2: Internal Server Error

Terraform does not automatically rollback in the face of errors.

The above issue is a known Terraform issue (issue 7519; see https://github.com/hashicorp/terraform/issues/7519).

Unfortunately, the issue is still present in version 0.8.7. The current workaround is to specify explicit dependencies when creating the firewall rules.

Automate OpenStack with Terraform

Terraform can be used with OpenStack for auto-provisioning.

Today, I will show a working Terraform example in OpenStack.

First, define an OpenStack provider for Terraform.

Provider:

provider "openstack" {
  user_name   = "${var.openstack_user_name}"
  password    = "${var.openstack_password}"
  tenant_name = "project1"
  auth_url    = "http://keystone.openstack.com.au:5000/v3"
  domain_name = "DOMAINNAME"
}
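
The provider block references two input variables, which must be declared somewhere in the template. A minimal sketch of those declarations (values can then be supplied via terraform.tfvars or -var on the command line):

variable "openstack_user_name" {}
variable "openstack_password" {}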

Terraform currently supports the following OpenStack resource types: Compute, Network, Load Balancer, Firewall, Block Storage and Object Storage.

Here, we create a few basic resources, including Compute and Network (a network, VXLAN here but it could be VLAN or any other network type, plus a subnet and a security group).

Network:

Create a network named "tf-net2":

resource "openstack_networking_network_v2" "tf-net2" {
  region         = "region1"
  name           = "tf-net2"
  admin_state_up = "true"
}

Create a subnet "tf_net_sub2" and associate it with the network tf-net2:

resource "openstack_networking_subnet_v2" "tf_net_sub2" {
  name        = "tf_net_sub2"
  region      = "region1"
  network_id  = "${openstack_networking_network_v2.tf-net2.id}"
  cidr        = "172.16.50.0/24"
  ip_version  = 4
  enable_dhcp = "false"
}

Security Group:

Create a security group "secgroup_1", then add 2 rules:

resource "openstack_networking_secgroup_v2" "secgroup_1" {
  name        = "secgroup_1"
  description = "Terraform security group"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_1" {
  direction         = "egress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}"
}

resource "openstack_networking_secgroup_rule_v2" "secgroup_rule_2" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "10.41.129.12/32"
  security_group_id = "${openstack_networking_secgroup_v2.secgroup_1.id}"
}

Compute:

Create 1 virtual instance using the network tf-net2 and the security group secgroup_1 that we just created:

resource "openstack_compute_instance_v2" "vm_terraform" {
  region            = "region1"
  availability_zone = "az1"
  name              = "nsx_terraform"
  image_id          = "b5d00e5c-ab30-4fb4-9ed0-1d99c7ff864b"
  flavor_id         = "10"
  security_groups   = ["${openstack_networking_secgroup_v2.secgroup_1.id}"]

  metadata {
    this = "that"
  }

  network {
    name = "tf-net2"
  }

  stop_before_destroy = "true"
}

Result:

The new network (tf-net2), the security group (secgroup_1) and the VM (nsx_terraform) are all visible in the OpenStack dashboard.