This is a getting-started guide that extends Terraform's Infrastructure as Code (IaC) Build Tutorial toward building a CD (Continuous Delivery) pipeline that updates infrastructure whenever the IaC code changes. I will walk through the same tutorial so you can follow along without jumping back and forth between both guides. I will also be using an AMI (Amazon Machine Image) that was created as an output of the first part of this series.
The following infrastructure will be built based on this guide:
Prerequisites
- Terraform is installed
- AWS CLI is installed
- AWS account and credentials that have access to create AWS resources
I previously wrote about the setup steps of these 3 prerequisites here.
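A quick way to confirm the prerequisites are in place is to run the following from a terminal (all three are standard Terraform/AWS CLI commands; the last one verifies that the configured credentials actually work):
terraform version              # confirms Terraform is installed
aws --version                  # confirms the AWS CLI is installed
aws sts get-caller-identity    # confirms the configured AWS credentials are valid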
Basics of Terraform
Terraform is a tool for managing Infrastructure as Code (IaC). IaC lets you manage infrastructure through configuration files instead of clicking through the AWS console. The benefits of using IaC to manage your infrastructure are the following:
- Consistent infrastructure across environments
- Versioning of changes
- Reusable and shareable Infrastructure modules
A comprehensive getting-started guide for using Terraform with AWS can be found here.
Terraform Commands
When using Terraform, the four basic commands to keep in mind are the following (a typical sequence is sketched right after this list):
- Initialize - installs the plugins that Terraform needs to manage the infrastructure code.
- Plan - previews the changes that Terraform will make to match your configuration.
- Apply - makes the planned changes.
- Destroy - removes all infrastructure defined in the code.
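In day-to-day use, these commands are simply run in that order from the project directory. A minimal sketch of the workflow:
terraform init       # install providers and configure the backend
terraform plan       # preview the changes Terraform would make
terraform apply      # create or update the resources
terraform destroy    # tear everything down when no longer needed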
Building the Terraform Code and Getting Familiar with Terraform Commands
First, create a main.tf file with the following contents:
provider "aws" {
region = var.aws_region
}
terraform {
backend "s3" {
bucket = "bitscollective"
region = "us-east-1"
key = "awsEC2.tfstate"
}
}
In the code above, we define aws as the provider and configure a Terraform state file named awsEC2.tfstate that will be stored in the S3 bucket bitscollective.
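Note that the S3 backend does not create the bucket for you; it must already exist before you run terraform init. If you still need to create your own bucket (bucket names are globally unique, so you will likely need a name other than mine), the AWS CLI can do it in one line:
aws s3 mb s3://bitscollective --region us-east-1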
Then create a vars.tf file with the following contents:
variable "aws_region" {
type = string
default = "us-east-1"
}
This simply declares a variable "aws_region" that can be referenced elsewhere in the configuration as var.aws_region.
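The default keeps the tutorial working out of the box, but the value can be overridden at plan/apply time, for example through a terraform.tfvars file. A minimal illustration (the us-west-2 value is purely an example):
# terraform.tfvars - values here override the defaults declared in vars.tf
aws_region = "us-west-2"
The same override can also be passed on the command line with terraform plan -var="aws_region=us-west-2". Now, let's initialize the code.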
Terraform Init
kayea@JARVIS MINGW64 ~/workspace/aws-terrraform-ec2app (main)
$ terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v5.12.0...
- Installed hashicorp/aws v5.12.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Terraform Plan
From here, you'll notice that a .terraform directory has been created. It contains a terraform.tfstate file that records the backend configuration, plus a providers folder with the downloaded AWS provider plugin.
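For reference, the working directory now looks roughly like this (the exact provider path depends on your platform and provider version):
.
├── .terraform/
│   ├── providers/
│   │   └── registry.terraform.io/hashicorp/aws/5.12.0/...
│   └── terraform.tfstate        # records the backend ("s3") configuration
├── .terraform.lock.hcl          # provider lock file - commit this to version control
├── main.tf
└── vars.tf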
Now, let's add more code to the main.tf file:
resource "aws_instance" "app_server" {
ami = "ami-830c94e3"
instance_type = "t2.micro"
tags = {
Name = "ExampleAppServerInstance"
}
}
Then from the command line, run terraform plan
kayea@JARVIS MINGW64 ~/workspace/aws-terrraform-ec2app (main)
$ terraform plan
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_instance.app_server will be created
+ resource "aws_instance" "app_server" {
+ ami = "ami-830c94e3"
+ arn = (known after apply)
+ associate_public_ip_address = (known after apply)
+ availability_zone = (known after apply)
+ cpu_core_count = (known after apply)
+ cpu_threads_per_core = (known after apply)
+ disable_api_stop = (known after apply)
+ disable_api_termination = (known after apply)
+ ebs_optimized = (known after apply)
+ get_password_data = false
+ host_id = (known after apply)
+ host_resource_group_arn = (known after apply)
+ iam_instance_profile = (known after apply)
+ id = (known after apply)
+ instance_initiated_shutdown_behavior = (known after apply)
+ instance_lifecycle = (known after apply)
+ instance_state = (known after apply)
+ instance_type = "t2.micro"
+ ipv6_address_count = (known after apply)
+ ipv6_addresses = (known after apply)
+ key_name = (known after apply)
+ monitoring = (known after apply)
+ outpost_arn = (known after apply)
+ password_data = (known after apply)
+ placement_group = (known after apply)
+ placement_partition_number = (known after apply)
+ primary_network_interface_id = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ secondary_private_ips = (known after apply)
+ security_groups = (known after apply)
+ source_dest_check = true
+ spot_instance_request_id = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "ExampleAppServerInstance"
}
+ tags_all = {
+ "Name" = "ExampleAppServerInstance"
}
+ tenancy = (known after apply)
+ user_data = (known after apply)
+ user_data_base64 = (known after apply)
+ user_data_replace_on_change = false
+ vpc_security_group_ids = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
───────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.
Terraform Apply
This shows that Terraform plans to add a new resource. Now, run terraform apply.
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Enter a value of yes
aws_instance.app_server: Creating...
aws_instance.app_server: Still creating... [10s elapsed]
aws_instance.app_server: Still creating... [20s elapsed]
aws_instance.app_server: Still creating... [30s elapsed]
aws_instance.app_server: Creation complete after 36s [id=i-0a5d2d786793f8176]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Go to EC2 in the AWS dashboard, and the created instance should now show up. In S3, you will see that the state file has been created. This file keeps track of any changes made through the code. Keeping it in a remote location such as S3 ensures that anyone (or any build server) applying changes works from a single source of truth for the infrastructure's state.
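If several people, or the CI runner planned for the next part of this series, will be applying changes against the same state, it is also common to add state locking. One minimal sketch uses the S3 backend's dynamodb_table argument; the table name below is hypothetical, and the table must be created separately with a string partition key named LockID:
terraform {
  backend "s3" {
    bucket         = "bitscollective"
    region         = "us-east-1"
    key            = "awsEC2.tfstate"
    dynamodb_table = "terraform-locks"   # hypothetical lock table
  }
}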
Terraform Destroy
To destroy all created resources, run terraform destroy. This removes everything that Terraform created from this configuration.
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_instance.app_server: Destroying... [id=i-0a5d2d786793f8176]
aws_instance.app_server: Still destroying... [id=i-0a5d2d786793f8176, 10s elapsed]
aws_instance.app_server: Still destroying... [id=i-0a5d2d786793f8176, 20s elapsed]
aws_instance.app_server: Still destroying... [id=i-0a5d2d786793f8176, 30s elapsed]
aws_instance.app_server: Still destroying... [id=i-0a5d2d786793f8176, 40s elapsed]
aws_instance.app_server: Destruction complete after 43s
Destroy complete! Resources: 1 destroyed.
Note that destroying the infrastructure does not remove the Terraform state file in the S3 bucket.
Terraform Modules
Let's continue building the rest of the infrastructure with an introduction to Terraform modules. Terraform modules are simply a way of grouping infrastructure code into logical, reusable units so it can be reused elsewhere in the codebase.
Download the rest of the project from GitHub.
Here, you'll see that I grouped the infrastructure code under one folder named ha-application. Ultimately, I wanted to be able to re-use this module when deploying the same infrastructure code across multiple environments (dev, qa, staging, prod).
Under this folder, there's a vars.tf file with default values that can be overridden when calling the module, so the values can be adjusted when deploying across environments.
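As a rough sketch, the module's variable declarations look along these lines (the names match the inputs passed into the module below; the types and defaults shown here are illustrative, and the actual file in the repository may differ):
variable "aws_region" {
  type    = string
  default = "us-east-1"
}

variable "imageid" {
  type = string
}

variable "availability_zones" {
  type = list(string)
}

variable "vpc_id" {
  type = string
}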
Going back to the main code, I'm invoking the module and passing the required variables, which are also declared in the main code's vars.tf file.
module "application" {
source = "./modules/ha-application"
aws_region = var.aws_region
imageid = var.imageid
availability_zones = var.availability_zones
vpc_id = var.vpc_id
}
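Because the module's inputs are just variables, another environment can reuse the same module with different values. A hypothetical example for a staging environment (the AMI and VPC IDs below are placeholders):
module "application_staging" {
  source             = "./modules/ha-application"
  aws_region         = var.aws_region
  imageid            = "ami-00000000000000000"    # placeholder staging AMI
  availability_zones = ["us-east-1a", "us-east-1d"]
  vpc_id             = "vpc-00000000"              # placeholder staging VPC
}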
Deploying the Code
Since the code has changed significantly and now includes a module and additional resources, we need to re-run terraform init. After this, we can go directly to terraform apply to check out the resources being created.
kayea@JARVIS MINGW64 ~/Workspace/aws-terrraform-ec2app (main)
$ terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.application.aws_alb.application-lb will be created
+ resource "aws_alb" "application-lb" {
+ arn = (known after apply)
+ arn_suffix = (known after apply)
+ desync_mitigation_mode = "defensive"
+ dns_name = (known after apply)
+ drop_invalid_header_fields = false
+ enable_deletion_protection = false
+ enable_http2 = true
+ enable_tls_version_and_cipher_suite_headers = false
+ enable_waf_fail_open = false
+ enable_xff_client_port = false
+ id = (known after apply)
+ idle_timeout = 60
+ internal = (known after apply)
+ ip_address_type = (known after apply)
+ load_balancer_type = "application"
+ name = "application-lb"
+ preserve_host_header = false
+ security_groups = [
+ "sg-0c9234a6c5f976d95",
+ "sg-3ab9a217",
]
+ subnets = [
+ "subnet-51813e1c",
+ "subnet-d33cd6f2",
]
+ tags_all = (known after apply)
+ vpc_id = (known after apply)
+ xff_header_processing_mode = "append"
+ zone_id = (known after apply)
}
# module.application.aws_alb_listener.http-listener will be created
+ resource "aws_alb_listener" "http-listener" {
+ arn = (known after apply)
+ id = (known after apply)
+ load_balancer_arn = (known after apply)
+ port = 80
+ protocol = "HTTP"
+ ssl_policy = (known after apply)
+ tags_all = (known after apply)
+ default_action {
+ order = (known after apply)
+ target_group_arn = (known after apply)
+ type = "forward"
}
}
# module.application.aws_alb_target_group.http will be created
+ resource "aws_alb_target_group" "http" {
+ arn = (known after apply)
+ arn_suffix = (known after apply)
+ connection_termination = false
+ deregistration_delay = "300"
+ id = (known after apply)
+ ip_address_type = (known after apply)
+ lambda_multi_value_headers_enabled = false
+ load_balancing_algorithm_type = (known after apply)
+ load_balancing_cross_zone_enabled = (known after apply)
+ name = "application-tg"
+ port = 80
+ preserve_client_ip = (known after apply)
+ protocol = "HTTP"
+ protocol_version = (known after apply)
+ proxy_protocol_v2 = false
+ slow_start = 0
+ tags_all = (known after apply)
+ target_type = "instance"
+ vpc_id = "vpc-cf89b1b5"
+ health_check {
+ enabled = true
+ healthy_threshold = 2
+ interval = 15
+ matcher = "200"
+ path = "/"
+ port = "80"
+ protocol = "HTTP"
+ timeout = 5
+ unhealthy_threshold = 3
}
}
# module.application.aws_autoscaling_group.application-asg will be created
+ resource "aws_autoscaling_group" "application-asg" {
+ arn = (known after apply)
+ availability_zones = [
+ "us-east-1a",
+ "us-east-1d",
]
+ default_cooldown = (known after apply)
+ desired_capacity = 2
+ force_delete = false
+ force_delete_warm_pool = false
+ health_check_grace_period = 300
+ health_check_type = (known after apply)
+ id = (known after apply)
+ ignore_failed_scaling_activities = false
+ load_balancers = (known after apply)
+ max_size = 2
+ metrics_granularity = "1Minute"
+ min_size = 0
+ name = "application-asg"
+ name_prefix = (known after apply)
+ predicted_capacity = (known after apply)
+ protect_from_scale_in = false
+ service_linked_role_arn = (known after apply)
+ target_group_arns = (known after apply)
+ vpc_zone_identifier = (known after apply)
+ wait_for_capacity_timeout = "10m"
+ warm_pool_size = (known after apply)
+ launch_template {
+ id = (known after apply)
+ name = (known after apply)
+ version = "$Latest"
}
}
# module.application.aws_launch_template.application-template will be created
+ resource "aws_launch_template" "application-template" {
+ arn = (known after apply)
+ default_version = (known after apply)
+ id = (known after apply)
+ image_id = "ami-070d7322559e500ee"
+ instance_type = "t2.micro"
+ latest_version = (known after apply)
+ name = "application-template"
+ name_prefix = (known after apply)
+ tags_all = (known after apply)
+ update_default_version = true
+ vpc_security_group_ids = [
+ "sg-0c9234a6c5f976d95",
]
}
Plan: 5 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
After this, let's check out the resources that were created.
The Launch Template contains the EC2 configuration and references the AMI we created previously.
The Target Group contains a reference to the EC2 instances where traffic will be forwarded.
The Auto Scaling Group contains the scaling configuration for high availability.
The Load Balancer forwards traffic to the target group defined above.
Also notice that the application load balancer has a DNS endpoint, which we can now load in the browser (application-lb-1968702359.us-east-1.elb.amazonaws.com).
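For a quick check from the command line (the hostname below is from my run; yours will differ):
curl -I http://application-lb-1968702359.us-east-1.elb.amazonaws.com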
A better architecture would serve traffic over HTTPS with an SSL/TLS certificate.
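As a rough sketch of what that could look like inside the module, alongside the existing HTTP listener, assuming a certificate has already been issued in ACM (the domain name below is hypothetical):
data "aws_acm_certificate" "app" {
  domain      = "app.example.com"   # hypothetical domain - replace with your own
  most_recent = true
}

resource "aws_alb_listener" "https-listener" {
  load_balancer_arn = aws_alb.application-lb.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = data.aws_acm_certificate.app.arn

  default_action {
    type             = "forward"
    target_group_arn = aws_alb_target_group.http.arn
  }
}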
As always, destroy the infrastructure after testing to make sure you are not being charged unnecessarily. And that's it! Stay tuned for my next update, where I'll be creating a pipeline in GitHub to run the Terraform commands on a runner instead of my machine!