
Terraform your EKS fleet - PART 2

This is the second part of our EKS with Terraform series. In Part 1, we talked about Infrastructure as Code and its benefits, and introduced the tools we will be using. This time we will actually start using Terraform to create a VPC and an EC2 instance.

  • Part 1: Introduction to Infrastructure as Code
  • Part 2: First Terraform resources
Yann Irbah
Software Engineer

Prerequisites

Before we get started, let’s make sure you’ve got everything you need.

AWS account

First of all, you obviously need an AWS account. For the rest of this article, we will assume that you have one and some familiarity with AWS.

Terraform

Terraform is the IaC tool we will use to provision resources on your AWS account in a declarative way.

You can check the installation instructions for Terraform here: https://learn.hashicorp.com/tutorials/terraform/install-cli

At the time of writing, the current Terraform version is v1.1.7.

Terragrunt

Terraform is a great tool, but it lacks a few features. Terragrunt is a wrapper around Terraform, extending it with some interesting functionality:

  • Helps to keep your code and arguments DRY (https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).
  • Allows you to execute commands on multiple modules at once.
  • Adds before and after hooks.

It will be helpful once we start managing several clusters, to reuse our modules and separate our states. More on this later.
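To give a flavor of what this looks like, here is a sketch of a per-cluster Terragrunt file. The paths, module name, and input values are purely illustrative, not from this series:

```hcl
# terragrunt.hcl (hypothetical layout for one cluster directory)

# Pull in settings (backend, provider config) defined once at the repo root,
# so they are not repeated in every cluster directory.
include {
  path = find_in_parent_folders()
}

# Point at a reusable Terraform module kept elsewhere in the repository.
terraform {
  source = "../../modules/vpc"
}

# Only the values that differ per cluster live here.
inputs = {
  name = "staging-vpc"
  cidr = "10.1.0.0/16"
}
```

Each cluster gets its own small file like this, with its own state, while the actual Terraform code is written once.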

Terraforming our first VPC

To get started with Terraform, we’ll begin by creating a VPC. A VPC (Virtual Private Cloud) is the isolated network in which we provision our AWS resources. You should already have a default one in your account, but we want each cluster to live in its own VPC.

Create a new directory, and create an infrastructure sub-directory in it. We’ll explain later why we won’t work in the root folder.

To interact with cloud resources, Terraform works with providers. A provider is a wrapper around a cloud API. Obviously, there is a provider for AWS available: https://registry.terraform.io/providers/hashicorp/aws/4.8.0.

We could use the provider directly to provision our VPC, but that would require in-depth AWS knowledge and a lot of boilerplate declarations. Fortunately, the community provides modules for almost everything we need.

Basically, a Terraform module is a set of bundled resources that can be reused. Modules can be customized through variables. We’ll create our own modules later, but let’s use a community one straight away.

Your go-to place for providers and modules is the Terraform Registry: https://registry.terraform.io.

It’s where all Terraform providers and modules are hosted, and you can refer to them directly in your code (think of it as the npm or RubyGems of Terraform).

Go to the registry and type AWS VPC in the search field. There will be two parts in the dropdown: providers on the top and modules at the bottom. Click on the first entry in the bottom part: terraform-aws-modules/vpc.

This module is provided by the Terraform community. There are other ones available from different sources but we’ll pick this one. You’re free to explore alternative options.

On the module page, you see a usage example and the reference documentation of the module. Notable parts are:

  • Inputs: The values you can pass to the module to customize the created resources.
  • Outputs: Values exposed once your resources are created, useful for feeding into other modules.
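For example, with a module declared under the label "vpc" (as we will do shortly), its documented outputs are addressed as attributes with the module.<label>.<output> syntax. A minimal sketch, using the vpc_id output listed on the module page:

```hcl
# Assuming a module declared as `module "vpc" { ... }` in the same configuration,
# expose one of its documented outputs from our own code:
output "vpc_id" {
  value = module.vpc.vpc_id # the ID of the VPC the module created
}
```

We’ll use exactly this mechanism later to wire the VPC’s subnets into another module.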

Let’s check if everything is fine by using the example provided. In your infrastructure directory, create a main.tf file and copy / paste the example in it:

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]

  enable_nat_gateway = true
  enable_vpn_gateway = true

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

Don’t worry about the values yet. We’ll explain them later.

Now run:

terraform init

This command downloads the required modules and providers into a .terraform directory and creates a .terraform.lock.hcl file, ensuring you always use the same version of each module unless you update it explicitly. This avoids failures when a new version introduces breaking changes. (We could also have pinned the module version with a version directive.)
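Such a version directive would look like this; the constraint value here is only an example:

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.14" # any 3.x release from 3.14 up, but never 4.0 (illustrative number)

  # ... same inputs as above ...
}
```

The "~>" operator (often called the pessimistic constraint) is the usual way to accept patch and minor updates while ruling out breaking major versions.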

We could try to provision our resources right away, but it’s wise to first see what Terraform is planning to do. Run the following command:

terraform plan

You will get an output describing all the actions Terraform is going to perform. A + indicates the resource will be created, a - that it will be destroyed, and a ~ that it will be modified.

Obviously, since we didn’t create our VPC yet, we only get additions. Notice that with our simple VPC declaration, a lot of resources will be created. That’s the advantage of using a community module.

Now let’s actually apply our changes with:

terraform apply

You will get an output detailing the operations Terraform is performing. Once it’s done you’ll see something like:

Apply complete! Resources: 29 added, 0 changed, 0 destroyed.

Go to your AWS console, make sure you are in the eu-west-1 region, go to the VPCs list and check that the my-vpc VPC is present.

Congratulations! You’ve just provisioned an AWS resource using Terraform.

If you’re wondering how Terraform got access to your AWS account, it just used the credentials of your AWS CLI. There are other ways to provide credentials explicitly that we’ll use later.
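One common way to be explicit is to declare a provider block in your configuration. A sketch, where the profile name is hypothetical:

```hcl
provider "aws" {
  region  = "eu-west-1"    # the region to provision into
  profile = "my-terraform" # a named profile from ~/.aws/credentials (hypothetical)
}
```

Without such a block, the AWS provider falls back to the standard credential chain: environment variables, the shared credentials file, and so on.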

OK, let’s try to rename our VPC. On line 4, change the value of the name input to something like my-renamed-vpc.
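In main.tf, the change is a single line:

```hcl
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "my-renamed-vpc" # was "my-vpc"
  # ... rest unchanged ...
}
```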

Run the plan command again. You will notice a number of changes ready to be applied. But how does Terraform know what changed so quickly?

If you check your infrastructure directory, you’ll notice that a new file appeared: terraform.tfstate. This file is called the Terraform State. It records every resource and its attributes as they were after the last apply. If you were to make changes directly in the AWS console, Terraform would detect them and revert your infrastructure to the expected state. This is very useful, because it guarantees that after an apply, we get exactly what we asked for.

You can check the list of resources currently in your state with the following command:

terraform state list

There are other state commands available, but you should make sure you know what you’re doing before using most of them.

Now you can apply your changes. We get the following output:

Apply complete! Resources: 0 added, 19 changed, 0 destroyed.

We see that only resources related to the name changed. Nothing was added nor destroyed. And if you check your AWS console, you’ll see that the VPC was indeed renamed.

Add an EC2 instance to our VPC

Before going all-in with a whole EKS cluster, let’s see how to add a single EC2 instance to our VPC.

We’ll use another community module for that, with a slightly simplified version of its usage example:

module "ec2_instance" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 3.0"

  name = "my-ec2-instance"

  ami           = "ami-052f10f1c45aa2155"
  instance_type = "t2.micro"
  monitoring    = false
  subnet_id     = "???"

  tags = {
    Terraform   = "true"
    Environment = "dev"
  }
}

Note that this time we added a version constraint: "~> 3.0" means any release of major version 3 at or above 3.0, but never version 4.

The ami value corresponds to an Ubuntu 16.04 image. AMI IDs are only valid in a single region, so make sure the ID you use exists in the region you are deploying to (eu-west-1 in our example); if yours differs, look up an equivalent AMI for your region.

If you pay close attention, you’ll notice that I put question marks for the subnet_id. How do we know the IDs of the subnets of our VPC?

First, let’s decide that we want to put our EC2 instance in a private subnet. We then need to get the ID of one of the private subnets that were created with our VPC.

Add the following bit of code to your main.tf file:

output "private_subnet_ids" {
  value = module.vpc.private_subnets
}

This will let us print the private subnet IDs as an output of our apply command. This output value is documented on the VPC module page.

If you run apply again, you will get an output like this:

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

private_subnet_ids = [
  "subnet-00a964428e1da10a4",
  "subnet-02e3b921dd82d1546",
  "subnet-0ff7b97e87fc93cf2",
]

We could now copy/paste one of those IDs into the subnet_id value of our EC2 module. But that wouldn’t be very smart: it would only work for this particular VPC instance, and would break as soon as our subnets change.

Since we were able to reference those values in our output block, we can also use them directly in our EC2 module. Change the subnet_id value to:

subnet_id = module.vpc.private_subnets[0]

We’re picking the first ID in the list (index 0) but we could have picked any of the other two.
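If we later wanted several instances spread across the subnets, the same list lends itself to indexing with count. A sketch, where the instance names and the count of three are made up for illustration:

```hcl
# Hypothetical: one instance per private subnet.
module "ec2_instances" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 3.0"
  count   = 3

  name          = "my-ec2-instance-${count.index}"
  ami           = "ami-052f10f1c45aa2155"
  instance_type = "t2.micro"

  # element() wraps around the list, so index i maps to subnet i % 3.
  subnet_id = element(module.vpc.private_subnets, count.index)
}
```

We won’t need this for now, but it shows how far a single list output can take you.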

Run terraform init again to download the EC2 module, then terraform apply. After a while you will get the output indicating that your changes were applied:

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Check your list of EC2 instances in your AWS console, you will see my-ec2-instance running.

Congratulations, you now know how to use the outputs of one module as inputs to another.

Clean up after yourself

Before we continue, let’s clean up these resources; we won’t need them anymore. Cleaning up is quite easy with Terraform. Just run the following command:

terraform destroy

Once the destruction is complete you’ll see:

Destroy complete! Resources: 30 destroyed.

If you check the terraform.tfstate file, you’ll notice it doesn’t contain any resources anymore. You can check your AWS console, the EC2 instance and the VPC have disappeared.

Using an IaC tool is a good way to make sure we’re not leaving unneeded resources behind.
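If you ever need to remove only part of a configuration, destroy also accepts a target. Use this sparingly, as targeting bypasses the full dependency-aware plan:

```shell
# Destroy only the EC2 instance module, leaving the VPC in place.
terraform destroy -target=module.ec2_instance
```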

Conclusion

This was quite easy, wasn’t it? Hopefully, you’re starting to see the benefits of Infrastructure as Code. Think about the number of clicks you would have needed to achieve the same thing through the AWS console. Or the headache if you had scripted it using the AWS API.

In the next part, we’ll start creating our own modules and use Terragrunt to help us make our code reusable.

Stay tuned!
