
Ultimate Guide on Creating Terraform Modules

Terraform modules are self-contained packages designed to be reusable across multiple projects. Modules allow you to abstract complex infrastructure configurations into a single unit of reusable code, making it easier to manage and maintain your infrastructure.


A module can contain multiple resources, data sources, variables, and outputs. Modules can be stored in various locations, including local directories, Git repositories, or public and private Terraform Module Registries.

Using modules can help you to:

  • Eliminate duplicate code and configuration
  • Encapsulate complex infrastructure logic
  • Promote code reuse
  • Improve collaboration
  • Increase infrastructure deployment consistency and reliability

Creating the module

Reusable modules are defined with the same configuration language concepts we use in root modules.

The file structure below shows how files are organized for two example modules, vpc and ec2-instance.

Screenshot of the module file structure

To define a module, create a new directory and place one or more .tf files inside, just as you would for a root module. Terraform can load modules either from local relative paths or from remote repositories.
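For example, a minimal vpc module might consist of three small files. This is only a sketch; the resource and variable names are illustrative:

# modules/vpc/variables.tf - module inputs
variable "cidr_block" {
  description = "CIDR range for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# modules/vpc/main.tf - the resources the module manages
resource "aws_vpc" "this" {
  cidr_block = var.cidr_block
}

# modules/vpc/outputs.tf - values exposed to callers
output "vpc_id" {
  description = "ID of the created VPC"
  value       = aws_vpc.this.id
}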


If you wish to reuse modules across different environments, place them in their own version control repository.

We can call modules from other modules using a module block, but keeping the module tree relatively flat is recommended. Consider module composition as an alternative to a deeply nested tree of modules. The diagram below shows a nested module structure.

Nested module structure diagram
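As a sketch of how this looks in code (paths and names are illustrative), a root module calls a child module with a module block, and one module's outputs can feed another's inputs:

# Call the local vpc module from the root module
module "vpc" {
  source     = "./modules/vpc"
  cidr_block = "10.0.0.0/16"
}

# A second module consumes the first module's output
module "ec2_instance" {
  source = "./modules/ec2-instance"
  vpc_id = module.vpc.vpc_id
}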

For more clarity, include the following files when creating a module:

  • variables.tf: Module inputs
  • outputs.tf: Module outputs, which other modules can consume
  • README.md: Module documentation and use cases
  • CHANGELOG.md: Change logs and upgrade guides
  • examples: Use cases and examples of module usage
  • tests: Automated tests, for example written with Terratest
  • submodules: Breakdowns of a complex module into standalone parts
  • TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache": Set this environment variable to reuse the provider plugin cache
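Putting these together, a module repository might be laid out roughly as follows. This is a sketch; exact folder and file names vary by team:

terraform-aws-vpc/
├── main.tf
├── variables.tf
├── outputs.tf
├── README.md
├── CHANGELOG.md
├── examples/
│   └── basic/
│       └── main.tf
├── tests/
│   └── vpc_test.go
└── modules/            # standalone submodules of the complex module
    └── subnets/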

The diagram below shows how modules eliminate code duplication across different environments.

Code duplication diagram

Module Isolation

The diagram below shows how to differentiate modules and create them in isolation.

Diagram of how to differentiate modules and create them in isolation

When considering module isolation:

  1. Only members of the relevant team, who must have permission to create or modify network resources, should be able to use this module.
  2. The resources should not change very often. Grouping them into a module protects them from unnecessary churn and risk.

The network module returns outputs that other modules can use.
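As an illustration (the resource and module names are hypothetical), the network module might expose subnet IDs that an application module then consumes:

# In the network module: outputs other modules can use
# (assumes an aws_subnet.private resource defined with count)
output "private_subnet_ids" {
  description = "IDs of the private subnets"
  value       = aws_subnet.private[*].id
}

# In the root configuration: wire the network output into another module
module "network" {
  source = "./modules/network"
}

module "app" {
  source     = "./modules/app"
  subnet_ids = module.network.private_subnet_ids
}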

Things to consider while creating a module:

  • Use a consistent file structure across your projects.
  • Use modules wherever possible.
  • Use a consistent naming convention.
  • Use a consistent format and style.
  • Use the null value when a field is empty.
  • Pin Terraform and provider versions (see the sketch after this list).
  • Separate modules and tests.
  • Break down complex modules into standalone submodules.
  • Store your state file remotely, not locally (also shown in the sketch after this list).
  • Avoid hardcoded variables.
  • Limit the number of resources in a project: fewer resources are easier and faster to work with, and a smaller blast radius reduces risk.
  • Test your code.
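As a sketch of two of these practices, the terraform block below pins the Terraform and provider versions and stores state remotely in an S3 backend. The bucket, key, and region values are hypothetical:

terraform {
  # Pin Terraform and provider versions to avoid surprise upgrades
  required_version = ">= 1.3.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }

  # Hold the state file remotely instead of on local disk
  backend "s3" {
    bucket = "my-terraform-state"            # hypothetical bucket
    key    = "stage/vpc/terraform.tfstate"   # hypothetical key
    region = "us-east-1"
  }
}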

Module Versioning

When multiple environments (staging and production) point to the same module folder, changing that folder will affect both environments on the next deployment.


This coupling makes it challenging to test a change in one environment without the risk of affecting the other. To eliminate this problem, we can use versioned modules, so that staging can use one version (e.g., v0.0.2) while production uses another (e.g., v0.0.1):

Diagram of versioned modules

We can use a version control system like Git to store modules for versioning. After creating modules and adding them to a Git repository, you can use the source parameter to reference a module from that repository rather than from a local file path.
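For example, the source parameter can pin a Git tag, so each environment tracks its own module version. The repository URL and tags below are hypothetical:

# live/stage/main.tf - staging pins the newer release
module "vpc" {
  source = "git::https://github.com/acme/modules.git//vpc?ref=v0.0.2"
}

# live/prod/main.tf - production stays on the proven release
module "vpc" {
  source = "git::https://github.com/acme/modules.git//vpc?ref=v0.0.1"
}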


This means your Terraform code is distributed across at least two repositories: a live repository and a modules repository.

The repository structure looks like this:

  • Modules: This repository contains reusable modules. Each module is like a blueprint that defines a specific part of your infrastructure.
  • Live: This repository defines the live infrastructure in each environment (stage, prod, etc.). It is like the houses you build from the blueprints in the modules repository.

The updated folder structure of Terraform looks like this:

Diagram of the updated Terraform folder structure

To implement this folder structure, you'll first need to move environment folders like stage, prod, and global into a live folder and configure the live and modules folders as separate Git repositories.
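A sketch of the resulting two-repository layout (folder names are illustrative):

live/                      # repo 1: per-environment configurations
├── stage/
│   └── vpc/
│       └── main.tf        # references the modules repo at ref=v0.0.2
├── prod/
│   └── vpc/
│       └── main.tf        # references the modules repo at ref=v0.0.1
└── global/

modules/                   # repo 2: reusable blueprints
└── vpc/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf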


You can now apply versioning to the modules repository so that different environments can consume different module versions. Use semantic versioning to release the module, as shown below:


MAJOR: when you make incompatible API changes

MINOR: when you add functionality in a backward-compatible manner

PATCH: when you make backward-compatible bug fixes
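To cut a release of the modules repository, you can tag it with standard Git commands (the tag name and message are examples):

git tag -a v0.0.2 -m "Second release of the vpc module"
git push origin v0.0.2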

Automated Testing for Terraform

You can run terraform validate and terraform plan to check your configuration. Still, when you update some Terraform configuration or bump a module version, automated tests catch errors quickly before any changes are applied to the production infrastructure. Let's look at the different testing strategies.


We can run these tests manually or include them in Git pre-commit hooks or CI workflows.

Static analysis

Static analysis is a technique for analyzing code without actually executing it. It is an essential part of the testing process for Terraform code, as it allows you to detect potential issues in your code before you deploy it to your infrastructure.


Some of the methods for implementing static code testing include:

Compiler/parser/interpreter

  • Using terraform validate to check the code.
  • Using VS Code Terraform extensions, which flag errors in the code as you type.
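For example, validation can be run locally without touching any backend (standard Terraform CLI flags):

terraform init -backend=false   # initialize providers without configuring state
terraform validate              # check syntax and internal consistency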

Linter

  • conftest: Using conftest, we can test the Terraform plan JSON against policies.
  • terraform_validate: Validates your Terraform code.
  • TFLint: TFLint finds possible errors (like invalid instance types) for major cloud providers (AWS/Azure/GCP), warns about deprecated syntax and unused declarations, and enforces best practices and naming conventions.

Dry run

  • This approach includes using terraform plan, HashiCorp Sentinel, and terraform-compliance.
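A typical dry-run flow saves the plan, converts it to JSON, and feeds it to a policy tool such as conftest (the policies are assumed to live in conftest's default policy/ directory):

terraform plan -out=tfplan                  # save the execution plan
terraform show -json tfplan > plan.json     # convert it to machine-readable JSON
conftest test plan.json                     # evaluate the plan against your policies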

Unit Tests

Infrastructure code is all about communicating with the outside world, so isolating a single unit is rarely fruitful. We can only truly test infrastructure by deploying it to a real environment. Therefore, the test strategy is:

  • Deploy real infrastructure
  • Validate that it works (e.g., via HTTP requests, API calls, SSH commands, etc.)
  • Undeploy the infrastructure

Some tools that provide this functionality are Terratest, Kitchen-Terraform, InSpec, Serverspec, and Goss.

Using a sandbox environment for this kind of testing is recommended.

Integration tests

Integration testing verifies that multiple units work together. Suppose we have two Terraform templates: one that creates a virtual private cloud (VPC) and another that deploys an EC2 instance.

We could run each as a unit test, but eventually we'll want to verify that the instance deploys successfully against the deployed components of the virtual network.


Unit tests would still pass if each module deployed to a different region, but integration tests would fail.

Another critical difference is that unit testing can be run offline without deploying anything, whereas integration testing requires deploying resources to validate impact and success.

Resource cleanup

We can use cloud-nuke and aws-nuke to clean up; both are open-source tools that delete all resources from your cloud provider account.
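For example, cloud-nuke can sweep a sandbox account after a test run. Use it with extreme care and only against accounts dedicated to testing; the region here is illustrative:

cloud-nuke aws --region us-east-1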

The techniques above differ in their strengths and weaknesses: static analysis is fast and cheap but shallow, while unit and integration tests are slower and costlier but exercise real deployments.


Using pre-commit for Terraform

Running CLI commands by hand to verify your code before committing is tedious, time-consuming, and repetitive. You might even forget to run them, leading to failed pull request checks.

What is pre-commit?

pre-commit is a framework for managing Git hooks: it checks your code automatically whenever you commit. A configuration file specifies which tools will scan your local repository.


If a hook detects mistakes, such as formatting or validation errors, it will prevent the commit from going through. This approach improves security and enforces coding best practices. Follow these steps to configure pre-commit:

First, install pre-commit according to your OS.

The following hooks are standard for Terraform:

terraform_docs – Auto-generates README files containing information on modules, providers, and resources.

terraform_fmt – Formats your config files for clarity and consistency.

terraform_validate – Validates the code to ensure the configuration is syntactically correct HCL.

tflint – Checks for errors and encourages best practices.

tfsec – Reviews the config files for any security concerns.

Install these tools before proceeding.

Make sure you have pre-commit installed, because it operates locally. terraform fmt and terraform validate are part of Terraform itself, so ensure Terraform is installed as well.

Now, create a pre-commit config file as shown in the example below:

repos:
  - repo: https://github.com/terraform-docs/terraform-docs
    rev: "v0.16.0"
    hooks:
      - id: terraform-docs-go
        args: ["markdown", "table", "--output-file", "README.md", "./"]
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: "v1.74.1"
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_validate
      - id: terraform_tfsec

This file is stored locally; you can copy and paste it into any new repos.

The file must be named .pre-commit-config.yaml and placed in the root of the repo.

Enable pre-commit on the repo.

To set it up on a repository, follow these steps:

  1. Check out your remote repo locally
  2. Copy the .pre-commit-config.yaml file to the root folder of your repository
  3. In a terminal, run the following command from the root folder: pre-commit install
  4. Once you run the command, any changes you commit from now on will trigger all hooks.
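You can also trigger the hooks against every file in the repo immediately, rather than waiting for the next commit:

pre-commit run --all-files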

Skipping hooks on commit

When you do not want the hooks to run, you can use:

git commit --no-verify -m "your commit message"

Here, --no-verify will prevent the hooks from triggering.
