Ultimate Guide on Creating Terraform Modules

Posted on Oct 11, 2023


Terraform modules are self-contained packages designed to be reusable across multiple projects. Modules allow you to abstract complex infrastructure configurations into a single unit of reusable code, making it easier to manage and maintain your infrastructure.

A module can contain multiple resources, data sources, variables, and outputs. Modules can be stored in various locations, including local directories, Git repositories, or public and private Terraform Module Registries.

Using modules can help you to:

  • Eliminate duplicate code and configuration
  • Encapsulate complex infrastructure logic
  • Promote code reuse
  • Improve collaboration
  • Increase infrastructure deployment consistency and reliability


Creating the module

Reusable modules are defined using the same configuration language concepts we use for root modules. A typical module includes:

  • Input variables to accept values from the calling module.
  • Output values to return results to the calling module, which it can then use to populate arguments elsewhere.
  • Resources to define one or more infrastructure objects that the module will manage.

The file structure below shows how files are organized for a project with vpc and ec2-instance modules.


To define a module, create a new directory and place one or more .tf files inside, just as you would for a root module. Terraform can load modules either from local relative paths or from remote repositories. If you wish to reuse modules across different environments, place them in their own version control repository.
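As a minimal sketch (the module, variable, and AMI names here are illustrative), a reusable module and its caller might look like this:

```hcl
# modules/ec2-instance/variables.tf -- input variables accepted from the caller
variable "ami_id" {
  type        = string
  description = "AMI to launch"
}

variable "instance_type" {
  type        = string
  description = "EC2 instance type"
  default     = "t2.micro"
}

# modules/ec2-instance/main.tf -- resources the module manages
resource "aws_instance" "this" {
  ami           = var.ami_id
  instance_type = var.instance_type
}

# modules/ec2-instance/outputs.tf -- values returned to the caller
output "instance_id" {
  value = aws_instance.this.id
}

# root module -- calling the module with a local path
module "web" {
  source        = "./modules/ec2-instance"
  ami_id        = "ami-0abcdef1234567890" # placeholder value
  instance_type = "t3.micro"
}
```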


We can call modules from other modules using a module block, but keeping the module tree relatively flat is recommended. As an alternative to a deeply nested tree of modules, follow module composition, which makes the individual modules easier to reuse in different combinations. The diagram below shows a nested module structure.

For clarity, include the following files when creating a module:

  • variables.tf: module inputs
  • outputs.tf: outputs of the module that other modules can use
  • README.md: module documentation and use cases
  • CHANGELOG.md: change logs and upgrade guides
  • examples/: use cases and examples of module usage
  • tests/: tests written with Terratest
  • modules/ (submodules): breakdown of a complex module into smaller units
  • TF_PLUGIN_CACHE_DIR="$HOME/.terraform.d/plugin-cache" to reuse provider plugin caches
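Putting these together, a typical module repository layout looks like this (file and directory names are illustrative):

```
modules/
└── vpc/
    ├── main.tf
    ├── variables.tf
    ├── outputs.tf
    ├── README.md
    ├── CHANGELOG.md
    ├── examples/
    │   └── complete/
    │       └── main.tf
    ├── tests/
    │   └── vpc_test.go
    └── modules/            # submodules for complex parts
        └── subnets/
```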

Refactoring module resources

You can include moved blocks to record how resource names and module structure have changed from previous module versions. Terraform uses that information during planning to reinterpret existing objects as if they had been created at the corresponding new addresses, eliminating a separate workflow step to replace or migrate existing objects.


Refactoring

In shared modules and long-lived configurations, you may eventually outgrow your initial module structure and resource names. For example, you might decide that what was previously one child module makes more sense as two separate modules and move a subset of the existing resources to the new one.

Terraform compares the previous state with the new configuration, correlating by each module or resource's unique address. Therefore, by default, Terraform understands moving or renaming an object as an intent to destroy the object at the old address and to create a new object at the new address.

When you add moved blocks in your configuration to record where you've historically moved or renamed an object, Terraform treats an existing object at the old address as if it now belongs to the new address.

moved Block Syntax

A moved block expects no labels and contains only from and to arguments:

  moved {
    from = aws_instance.a
    to   = aws_instance.b
  }
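The same mechanism covers moves between modules. For example, if aws_instance.a was relocated into a child module (the module name here is hypothetical):

```hcl
moved {
  from = aws_instance.a
  to   = module.compute.aws_instance.a
}
```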

Terraform module folder structure



Best practices while creating a module

The diagram below shows how we can differentiate modules and create them in isolation.



The above diagram shows the network module that contains resources having high privilege and low volatility.

While considering the isolation of the module:

  1. Only members of the relevant team who have permission to create or modify network resources should be able to use this module.
  2. These resources should not change very often; grouping them in a module protects them from unnecessary churn and risk.

The network module returns outputs that other modules can use. If VPC creation is multi-faceted, you could eventually split this module into different modules with different functions.
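For example (all names here are hypothetical), the network module might expose its VPC ID as an output, which another module then consumes:

```hcl
# modules/network/outputs.tf
output "vpc_id" {
  value = aws_vpc.main.id
}

# root module -- wire the network output into another module
module "network" {
  source = "./modules/network"
}

module "app" {
  source = "./modules/app"
  vpc_id = module.network.vpc_id
}
```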

Things to consider while creating a module:

  • Use a consistent file structure across your projects.
  • Use modules wherever possible.
  • Use a consistent naming convention.
  • Use a consistent format and style.
  • Use null when a field is intentionally empty.
  • Lock down Terraform and provider versions.
  • Separate modules and tests.
  • Break down complex modules into standalone submodules.
  • Store your state file remotely, not locally.
  • Avoid hardcoded variables.
  • Limit the number of resources in a project: fewer resources are easier and faster to work with and reduce the blast radius.
  • Test your code.


Module Versioning

When multiple environments (staging and production) point to the same module folder, changing the modules folder will affect both environments on the next deployment.


This coupling makes it more challenging to test a change in one location without any chance of affecting another. To eliminate this problem, we can use versioned modules so that you can use one version in staging (e.g., v0.0.2) and a different version in production (e.g., v0.0.1):


We can use a version control system like Git to store modules for versioning. After creating modules and adding them to a Git repository, you can point the source parameter at the repository instead of a local file path.
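For instance, a hypothetical caller switches from a local path to a tagged Git source (the repository URL and tag are placeholders):

```hcl
# Before: local path, always tracks the latest module code
module "vpc" {
  source = "../modules/vpc"
}

# After: Git source pinned to a release tag
module "vpc" {
  source = "git::https://github.com/foo/modules.git//vpc?ref=v0.0.1"
}
```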


This means your Terraform code is distributed across at least two repositories: a source repository and a modules repository.

The repository structure looks like this:

  • Modules: This repository contains reusable modules; each module is a blueprint that defines a specific part of your infrastructure.
  • Live: This repository contains your live infrastructure in each environment (stage, prod, etc.); think of it as the houses you build from the blueprints in the modules repository.

The updated folder structure of Terraform looks like this:



To implement this folder structure, you’ll first need to move environment folders like stage, prod, and global into a live folder, and configure the live and modules folders as separate Git repositories. Here is an example of how to do that for the modules folder:

  $ cd modules
  $ git init
  $ git add .
  $ git commit -m "Initial commit of modules repo"
  $ git remote add origin "(URL OF REMOTE GIT REPOSITORY)"
  $ git push origin main

You can also add a tag to the modules repo to use as a version number. If you’re using GitHub, you can use the GitHub UI to create a release, which will create a tag under the hood.

If you’re not using GitHub, you can use the Git CLI:

  $ git tag -a "v0.0.1" -m "First release of webserver-cluster module"
  $ git push --follow-tags

Now you can use this versioned module in both staging and production by specifying a Git URL in the source parameter. Here is what that would look like in live/stage/services/webserver-cluster/main.tf if your modules repo was in the GitHub repo github.com/foo/modules (note that the double-slash in the following Git URL is required):

  module "webserver_cluster" {
    source = "github.com/foo/modules//services/webserver-cluster?ref=v0.0.1"

    cluster_name           = "webservers-stage"
    db_remote_state_bucket = "(YOUR_BUCKET_NAME)"
    db_remote_state_key    = "stage/data-stores/mysql/terraform.tfstate"

    instance_type = "t2.micro"
    min_size      = 2
    max_size      = 2
  }


You can now apply versioning to the repositories for the different modules used in your environments. Use semantic versioning to release the module, as shown below:


A particularly useful naming scheme for tags is semantic versioning. This is a versioning scheme of the format MAJOR.MINOR.PATCH (e.g., 1.0.4) with specific rules on when you should increment each part of the version number. In particular, you should increment the following:

MAJOR: when you make incompatible API changes

MINOR: when you add functionality in a backward-compatible manner

PATCH: when you make backward-compatible bug fixes


Automated Testing for Terraform (Terraform testing)

You can run terraform validate and terraform plan to check your configuration, but when you’ve updated some Terraform configuration or bumped a module version, automated tests can catch errors quickly before any changes are applied to production infrastructure. Let’s look at testing modules versus configurations, and at approaches to manage the cost of testing.


Best practices for Terraform testing include:

  • using remote state with versioning and locking
  • using workspaces for multiple environments
  • never committing Terraform state files to git
  • including tests for each module
  • adding an examples folder showing module usage
  • running checks from git pre-commit hooks
  • running checks in CI workflows
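A minimal CI workflow covering several of these checks might look like this (GitHub Actions shown as one option; the workflow name and action versions are assumptions):

```yaml
name: terraform-checks
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      # Fail the check if any file is not canonically formatted
      - run: terraform fmt -check -recursive
      # Validate syntax without touching remote state
      - run: terraform init -backend=false
      - run: terraform validate
```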


Static Analysis

Static analysis is a technique for analyzing code without actually executing it. It is an essential part of the testing process for Terraform code, as it allows you to detect potential issues in your code before you deploy it to your infrastructure.


Some of the methods for implementing static code testing include:

  1. Compiler/parser/interpreter
    • terraform validate to check that the configuration parses correctly.
    • VS Code Terraform extensions that flag errors in the code as you type.
  2. Linter
    • conftest: test the Terraform plan JSON against policies.
    • terraform_validate to validate your Terraform code.
  3. tflint
    • tflint finds possible errors (like invalid instance types) for the major cloud providers (AWS/Azure/GCP), warns about deprecated syntax and unused declarations, and
    • enforces best practices and naming conventions.
  4. Dry run
    • This approach includes terraform plan, HashiCorp Sentinel, and terraform-compliance.

Unit Tests

Infrastructure code is all about communicating with the outside world, so isolating a "unit" is rarely fruitful; we can only really test infrastructure by deploying it to a real environment. The test strategy is therefore:

  • Deploy real infrastructure
  • Validate that it works (e.g., via HTTP requests, API calls, SSH commands, etc.)
  • Undeploy the infrastructure
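With Terratest, this deploy/validate/undeploy cycle can be sketched as follows (the example directory and output name are assumptions, and the sketch requires the third-party github.com/gruntwork-io/terratest dependency):

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
)

func TestVpcModule(t *testing.T) {
	opts := &terraform.Options{
		// Hypothetical example configuration exercising the module.
		TerraformDir: "../examples/complete",
	}

	// Undeploy the infrastructure at the end of the test.
	defer terraform.Destroy(t, opts)

	// Deploy real infrastructure.
	terraform.InitAndApply(t, opts)

	// Validate it works, e.g. by checking an output.
	vpcID := terraform.Output(t, opts, "vpc_id")
	if vpcID == "" {
		t.Fatal("expected a non-empty vpc_id output")
	}
}
```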

Some tools that provide this functionality are Terratest, Kitchen-Terraform, InSpec, Serverspec, and Goss.

Using a sandbox environment for testing is recommended.


Integration tests

Integration testing verifies that multiple units work together. Suppose we have two Terraform templates: one to create a virtual private cloud (VPC) and another to deploy an EC2 instance. We could test each as a unit, but eventually we’ll want to verify that the instances deploy successfully against the deployed components of the virtual network. Unit tests would still pass if either module deployed to a different region, but an integration test would fail.

Another critical difference is that unit testing can be run offline without deploying anything, whereas integration testing requires deploying resources to validate impact and success.


Resource cleanup

We can use cloud-nuke and aws-nuke to delete all your resources; both are open-source tools that allow you to delete all resources from your cloud provider account. They are typically used to clean up test or staging environments, as well as to delete resources that are no longer needed.

The table below compares the different testing techniques by their strengths and weaknesses.



Using pre-commit for Terraform

Running CLI commands by hand to verify your code before committing can be tedious, time-consuming, and repetitive. You might even forget to run them, leading to failed pull request checks.

What is pre-commit

Pre-commit works via git hooks, checking your code when you commit. A configuration file specifies which tools scan your local repository. If a tool detects mistakes, such as improperly formatted code, it prevents the commit from occurring. This approach improves security and encourages best practices in your code. Follow these steps to configure pre-commit:

First, install pre-commit according to your OS.


The following hooks are standard for Terraform:

  • terraform_docs – auto-generates README files containing information on modules, providers, and resources.
  • terraform_fmt – structures your config files for clarity.
  • terraform_validate – validates code to ensure the configuration is correct HCL.
  • tflint – checks for errors and encourages best practices.
  • tfsec – reviews the config files for security concerns.

Install these tools before proceeding

Because pre-commit operates locally, make sure it is installed. terraform fmt and terraform validate ship with Terraform, so ensure Terraform is installed as well.


Create a pre-commit config file

repos:
  - repo: https://github.com/terraform-docs/terraform-docs
    rev: "v0.16.0"
    hooks:
      - id: terraform-docs-go
        args: ["markdown", "table", "--output-file", "README.md", "./"]
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: "v1.74.1"
    hooks:
      - id: terraform_fmt
      - id: terraform_tflint
      - id: terraform_validate
      - id: terraform_tfsec

This file is stored locally in the repository; you can copy and paste it into any new repos.

The file must be named .pre-commit-config.yaml and live at the root of the repo.


Enable pre-commit on the repo.

After creating the config file, follow these steps:

  1. Checkout your remote repo locally
  2. Copy the pre-commit-config.yaml file to the root folder of your repository
  3. In a terminal, run the following command from the root folder: pre-commit install
  4. Once you run the command, any changes you commit from now will trigger all hooks.


Skipping hooks on commit

When you do not want the hooks to run, you can use:

git commit --no-verify -m "your commit message"

Here, --no-verify prevents the hooks from triggering, allowing you to commit.

By Tej pandey
