Terraform by HashiCorp
General information#
Terraform is an advanced tool for automated cloud infrastructure management. It uses a simple and expressive language similar to plain English. The code in this language is written in a declarative manner: you describe what you want to get and do not think about how to get it.
Once written, such code can be reused many times: you enter a couple of short commands in the terminal and every time get a predictable result: the requested number of VMs is created in the cloud from the specified images, the required number of external IP addresses is allocated, security groups are configured, and all the other actions described in the code are performed. Performing the same actions in the web interface takes more time, especially if you need to repeat them. In addition, doing this manually carries a much higher risk of making a mistake, getting something different from what you planned, and then spending extra time trying to understand what went wrong.
This approach to infrastructure deployment is called “Infrastructure as Code” (IaC) and allows you to:
use version control systems;
comment the code to document what it does;
test the code before using it in an actual infrastructure to identify possible negative consequences;
hand over the code to other developers so they can evaluate its quality and help arrive at the best solution.
Installation and configuration#
Note
The guide was written and tested using Terraform v1.0.8 for the rockitcloud v24.1.0 and AWS v3.63.0 providers, and the information below is relevant for these versions. We have frozen the provider version by embedding it in the configuration code to ensure stability and compatibility.
Terraform is distributed as an executable file with versions for Linux, Windows, macOS, etc. You can download the version you need from the official download page. If the official page is unavailable, download the installation package here. After downloading and extracting the archive, we recommend moving the extracted file to a folder listed in the PATH environment variable, or adding its folder to this variable: for Linux, this can be /usr/local/bin/, and for Windows, C:\Windows\system32 (OS administrator rights are required to access system folders). This way, you will not have to specify the full path to the file each time.
Local mirrors for common providers#
Providers from K2 Cloud’s local mirrors are identical to those from the original repositories. You can use them to avoid installation problems.
Note
For up-to-date information, follow the link.
Provider | Available versions | Link to the mirror
---|---|---
AWS | 2.64.0, 3.15.0, 3.63.0, 4.8.0 | https://hc-releases.website.k2.cloud/terraform-provider-aws/
Kubernetes | 2.10.0, 2.11.0 | https://hc-releases.website.k2.cloud/terraform-provider-kubernetes/
Random | 3.3.2 | https://hc-releases.website.k2.cloud/terraform-provider-random/
Template | 2.2.0 | https://hc-releases.website.k2.cloud/terraform-provider-template/
TLS | 3.1.0 | https://hc-releases.website.k2.cloud/terraform-provider-tls/
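As a hedged sketch of how such a mirror can be used: Terraform's CLI configuration file (~/.terraformrc on Linux/macOS, terraform.rc on Windows) supports a provider_installation block that redirects provider downloads to a network mirror. The URL below is copied from the table above purely for illustration; whether the K2 Cloud mirror exposes Terraform's provider network mirror protocol at exactly this address should be checked against the up-to-date information referenced in the note.
# ~/.terraformrc: a sketch, assuming the mirror implements
# Terraform's provider network mirror protocol at this URL
provider_installation {
  network_mirror {
    # Mirror URL taken from the table above (verify the exact protocol URL)
    url     = "https://hc-releases.website.k2.cloud/terraform-provider-aws/"
    include = ["registry.terraform.io/hashicorp/aws"]
  }
  direct {
    # All other providers are still downloaded from their original registries
    exclude = ["registry.terraform.io/hashicorp/aws"]
  }
}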
Provider by K2 Cloud#
Attention
Starting from version 24.1.0, the croccloud provider is released under the new name rockitcloud. Previous versions of the croccloud provider are available in the official Terraform registry.
To switch to the rockitcloud provider, use the following instructions.
Specify the new provider name and version in the configuration.
terraform {
  required_providers {
    aws = {
      source  = "hc-registry.website.k2.cloud/c2devel/rockitcloud"
      version = "24.1.0"
    }
  }
}
If the Terraform state file is not empty, replace the provider for the resources that are in the file.
terraform state replace-provider -state terraform.tfstate hc-registry.website.cloud.croc.ru/c2devel/croccloud hc-registry.website.k2.cloud/c2devel/rockitcloud
Initialize the Terraform configuration again.
terraform init
To work with K2 Cloud, you can use the rockitcloud provider from C2Devel. It is based on the AWS provider v4.14.0 and excludes Terraform resources not supported in K2 Cloud. At the same time, it adds functionality specific to K2 Cloud, such as st2 volumes.
For information about rockitcloud provider releases, visit the official repository.
The rockitcloud provider is included in the official Terraform registry. The provider page offers documentation on the resources supported in K2 Cloud.
Note
The documentation contains a unified list of resources with a prefix that is automatically generated for each resource and matches the rockitcloud name. For compatibility with configurations written for the AWS provider, we retained the aws prefix in resource descriptions and usage examples.
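As a brief, hypothetical illustration of this note (the CIDR value below is arbitrary): the provider is downloaded from the rockitcloud source, while the resource types in the configuration code keep the aws_ prefix.
terraform {
  required_providers {
    aws = {
      source  = "hc-registry.website.k2.cloud/c2devel/rockitcloud"
      version = "24.1.0"
    }
  }
}

# The resource type is still aws_vpc, even though the provider is rockitcloud
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}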
Resource list by category#
The provider documentation covers resources and data sources in the following categories (see the provider page for the resources and data sources available in each category):
Auto Scaling
Backup
CloudWatch
Direct Connect
EBS (EC2)
EC2 (Elastic Compute Cloud)
EKS (Elastic Kubernetes)
ELB (Elastic Load Balancing)
IAM (Identity & Access Management)
PaaS
Route53
Transit Gateway
S3 (Simple Storage)
VPC (Virtual Private Cloud)
VPN (Site-to-Site)
Describing Terraform configuration#
The ready-to-use code described below is stored in the quick_start folder of our official terraform-examples repository on GitHub. You can download it and start using it right away with minimal edits. However, for a better understanding of the code, we recommend that you follow this guide’s steps and operations one by one.
Warning
When using Terraform, run commands only if you have a good idea of what you are doing and why. Terraform warns of potentially destructive operations and requires additional confirmation in these cases. Pay attention to these warnings, since otherwise, you may inadvertently lose part or even all of your project’s infrastructure and data. And if there are no backups, the data will be lost forever.
As an example, let’s consider the description of the Terraform configuration to automatically create an infrastructure consisting of:
1 VPC (to isolate the project infrastructure at the network layer);
1 subnet with prefix /24;
2 VMs: one for a web application, and the other for a database server;
1 Elastic IP address assigned to the VM with the web application so that the VM (and the application) can be accessed from the Internet;
2 security groups: one group allows inbound traffic from the interfaces to which it is assigned (so that the VMs interact only with each other within the created subnet), while the other allows access from the outside over TCP ports 22, 80, and 443. All outbound traffic is allowed for each of the groups;
1 bucket to store project files.
Provider description — providers.tf#
Terraform interacts with various cloud platforms and services through special plugins called providers. To work with K2 Cloud, you can use the K2 Cloud provider (c2devel/rockitcloud) or the AWS provider (hashicorp/aws), since the K2 Cloud API is AWS-compatible.
Create a providers.tf file to describe the required providers and their settings:
providers.tf
# Select a specific provider version to ensure compatibility
# and stable operation of the developed configuration
terraform {
  required_providers {
    aws = {
      # Use the K2 Cloud local mirror
      # to download c2devel/rockitcloud provider
      source  = "hc-registry.website.k2.cloud/c2devel/rockitcloud"
      version = "24.1.0"
    }
  }
}

# Connect and configure the provider to work
# with all K2 Cloud services except for object storage
provider "aws" {
  endpoints {
    ec2 = "https://ec2.k2.cloud"
  }
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_region_validation      = true
  insecure                    = false
  access_key                  = var.access_key
  secret_key                  = var.secret_key
  region                      = "region-1"
}

# Connect and configure the provider to work
# with the K2 Cloud object storage
provider "aws" {
  alias = "noregion"
  endpoints {
    s3 = "https://s3.k2.cloud"
  }
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_region_validation      = true
  insecure                    = false
  access_key                  = var.access_key
  secret_key                  = var.secret_key
  region                      = "us-east-1"
}
The first provider block allows you to interact with all K2 Cloud services except for the object storage, while the second one is responsible only for interacting with the object storage. If you plan to work with K2 Cloud only, you can reuse this part of the code without changes.
Note that access_key and secret_key do not contain the data itself but rather point to variable values. This is done on purpose so that a ready-to-use configuration can be handed over to other people without revealing the key values. In addition, this allows you to quickly define all keys in one place and avoid multiple edits in the code itself when they change.
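As an optional refinement that is not part of the quick_start code: since Terraform 0.14, input variables can additionally be marked as sensitive, so their values are redacted in plan and apply output. Declared in variables.tf (described below), the key variables could then look like this:
variable "secret_key" {
  description = "Enter the secret key"
  sensitive   = true
}

variable "access_key" {
  description = "Enter the access key"
  sensitive   = true
}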
Variable description — variables.tf#
Information about all variables in use is stored in the variables.tf file, where you can specify a description and the default value for each variable.
variables.tf
variable "secret_key" {
description = "Enter the secret key"
}
variable "access_key" {
description = "Enter the access key"
}
variable "public_key" {
description = "Enter the public SSH key"
}
variable "pubkey_name" {
description = "Enter the name of the public SSH key"
}
variable "bucket_name" {
description = "Enter the bucket name"
}
variable "az" {
description = "Enter availability zone (ru-msk-comp1p by default)"
default = "ru-msk-comp1p"
}
variable "eips_count" {
description = "Enter the number of Elastic IP addresses to create (1 by default)"
default = 1
}
variable "vms_count" {
description = "Enter the number of virtual machines to create (2 by default)"
default = 2
}
variable "hostnames" {
description = "Enter hostnames of VMs"
}
variable "allow_tcp_ports" {
description = "Enter TCP ports to allow connections to (22, 80, 443 by default)"
default = [22, 80, 443]
}
variable "vm_template" {
description = "Enter the template ID to create a VM from (cmi-AC76609F [CentOS 8.2] by default)"
default = "cmi-AC76609F"
}
variable "vm_instance_type" {
description = "Enter the instance type for a VM (m5.2small by default)"
default = "m5.2small"
}
variable "vm_volume_type" {
description = "Enter the volume type for VM disks (gp2 by default)"
default = "gp2"
}
variable "vm_volume_size" {
# Default size and increment are specified for the gp2 volume type
# For other volume types, they may differ (for details, see the volumes documentation)
description = "Enter the volume size for VM disks (32 by default, in GiB, must be multiple of 32)"
default = 32
}
The variables.tf file contains only the list of all variables used in the configuration (and default values for some of them). The actual values are set in the terraform.tfvars file.
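The declarations above deliberately stay minimal and omit explicit types. As a hedged illustration of what Terraform 0.13 and later additionally supports (this is not part of the quick_start code), a variable can carry a type constraint and a validation rule:
variable "vm_volume_size" {
  description = "Enter the volume size for VM disks (32 by default, in GiB, must be multiple of 32)"
  type        = number
  default     = 32
  validation {
    condition     = var.vm_volume_size % 32 == 0
    error_message = "The volume size must be a multiple of 32 GiB."
  }
}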
Actual variable values — terraform.tfvars#
The values to apply in each particular case are specified in the terraform.tfvars file. Its content takes precedence over the default values, making it easy to override the default configuration behaviour.
terraform.tfvars
secret_key = "ENTER_YOUR_SECRET_KEY_HERE"
access_key = "ENTER_YOUR_ACCESS_KEY_HERE"
public_key = "ENTER_YOUR_PUBLIC_KEY_HERE"
pubkey_name = "My-project-SSH-key"
bucket_name = "My-project-bucket"
az = "ru-msk-comp1p"
eips_count = 1
vms_count = 2
hostnames = ["webapp", "db"]
allow_tcp_ports = [22, 80, 443]
vm_template = "cmi-AC76609F"
vm_instance_type = "m5.2small"
vm_volume_type = "gp2"
vm_volume_size = 32
The template with all variables and their values is in the terraform.tfvars.example file. To set the variables faster, copy the file content to the terraform.tfvars file and then change the values as required:
cp terraform.tfvars.example terraform.tfvars
Warning
Remember that the terraform.tfvars file may contain sensitive data, such as your key values, that should not be exposed to third parties. If you use Git to store and version Terraform configurations, make sure that the file is not committed to the repository: add an appropriate exclusion to .gitignore. Also, if you share your Terraform configuration with other people, make sure you do not share terraform.tfvars. Leaking the keys can result in third parties gaining control over your infrastructure.
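For example, a minimal set of exclusions for a Terraform project might look like this (the exact list depends on your workflow; the state files are listed here on the assumption that state is kept locally):
# .gitignore: example entries
terraform.tfvars
*.auto.tfvars
.terraform/
terraform.tfstate
terraform.tfstate.backup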
You can obtain your secret_key and access_key values in the Cloud management console: click the user login in the top right corner and select Profile → Get API access settings.
K2 Cloud supports 2048-bit RSA keys. You can generate an SSH key, for example, with the command:
ssh-keygen -b 2048 -t rsa
Set the public key as the public_key value.
The pubkey_name value must contain only letters and digits, while bucket_name may additionally contain dots and hyphens (see the bucket naming conventions).
When all variables are described, and their values are set, you can start describing the main configuration.
Main configuration — main.tf#
The main configuration file main.tf contains the code that describes all the key actions to be performed automatically on the infrastructure.
The configuration consists of code blocks, each of which, as a rule, is responsible for actions on objects of a particular type, for example, VMs or security groups. In Terraform, such blocks are called resources. Next, one by one, we consider all resource blocks required to describe the above configuration. Each block has comments explaining the changes being made.
First, create a VPC to isolate the project resources at the network layer:
Create VPC
resource "aws_vpc" "vpc" {
# Specify an IP address for the VPC network in CIDR notation (IP/Prefix)
cidr_block = "172.16.8.0/24"
# Enable support for the domain name resolution using K2 Cloud DNS servers
enable_dns_support = true
# Assign the Name tag to the created resource
tags = {
Name = "My project"
}
}
Next, define a subnet in the VPC we have just created (the CIDR block of the subnet must belong to the address space allocated to the VPC):
Creating a subnet
resource "aws_subnet" "subnet" {
# Specify the availability zone, in which the subnet will be created
# Take its value from the az variable
availability_zone = var.az
# Use the same CIDR block of IP addresses for the subnet as for the VPC
cidr_block = aws_vpc.vpc.cidr_block
# Specify the VPC where the subnet will be created
vpc_id = aws_vpc.vpc.id
# Create a subnet only after creating a VPC
depends_on = [aws_vpc.vpc]
# Include the az variable value and the Name tag for the VPC in the Name tag for the subnet
tags = {
Name = "Subnet in ${var.az} for ${lookup(aws_vpc.vpc.tags, "Name")}"
}
}
Next, add a public SSH key, which will later be used to access the VM:
Add SSH key
resource "aws_key_pair" "pubkey" {
# Specify the SSH key name (the value is taken from the pubkey_name variable)
key_name = var.pubkey_name
# and public key content
public_key = var.public_key
}
Create a bucket in the object storage to store website data and backups:
Creating a bucket
resource "aws_s3_bucket" "bucket" {
provider = aws.noregion
# Specify the storage name from the bucket_name variable
bucket = var.bucket_name
# Specify access permissions
acl = "private"
}
Allocate an Elastic IP to enable access to the web application server from the outside:
Allocate Elastic IP
resource "aws_eip" "eips" {
# Specify the number of allocated EIPs in the eips_count variable –
# this allows you to immediately allocate the required number of EIPs.
# In our case, the address is allocated to the first server only
count = var.eips_count
# Allocate within our VPC
vpc = true
# and only after the VPC creation
depends_on = [aws_vpc.vpc]
# Take the host name of the future VM from the hostnames variable with the array index
# as the value of the Name tag
tags = {
Name = "${var.hostnames[count.index]}"
}
}
Then create two security groups: one to allow access from all addresses over ports 22, 80 and 443, and the second to allow full access within itself. Later, add a VM with a web application to the first group and place both our servers in the second so that they can interact with each other:
Creating security groups
# Create a security group to enable access from the outside
resource "aws_security_group" "ext" {
# Within our VPC
vpc_id = aws_vpc.vpc.id
# specify the security group name
name = "ext"
# and description
description = "External SG"
# Define inbound rules
dynamic "ingress" {
# Specify the name of the variable, which will be used
# to iterate over all given ports
iterator = port
# Iterate over ports from the allow_tcp_ports port list
for_each = var.allow_tcp_ports
content {
# Set the range of ports (in our case, it consists of one port),
from_port = port.value
to_port = port.value
# protocol,
protocol = "tcp"
# and source IP address in CIDR notation (IP/Prefix)
cidr_blocks = ["0.0.0.0/0"]
}
}
# Define an outbound rule to enable all outbound IPv4 traffic
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
depends_on = [aws_vpc.vpc]
tags = {
Name = "External SG"
}
}
# Create an internal security group,
# within which all traffic between its members will be allowed
resource "aws_security_group" "int" {
vpc_id = aws_vpc.vpc.id
name = "int"
description = "Internal SG"
ingress {
from_port = 0
to_port = 0
protocol = "-1"
self = true
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
depends_on = [aws_vpc.vpc]
tags = {
Name = "Internal SG"
}
}
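Thanks to the dynamic ingress block, extending the list of allowed ports requires no changes to main.tf. As a hedged illustration (port 8080 is arbitrary), it is enough to adjust the variable value in terraform.tfvars and re-apply the configuration:
# terraform.tfvars: one extra port produces one extra ingress rule
allow_tcp_ports = [22, 80, 443, 8080]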
Now write a block of code to create VMs:
Creating instances
resource "aws_instance" "vms" {
# Take the number of VMs to create from the vms_count variable
count = var.vms_count
# the image ID to create the instance is taken from the vm_template variable
ami = var.vm_template
# the instance type of the VM to be created is taken from the vm_instance_type variable
instance_type = var.vm_instance_type
# Assign the instance an internal IP address from the previously created subnet in the VPC
subnet_id = aws_subnet.subnet.id
# Connect the internal security group to the created instance
vpc_security_group_ids = [aws_security_group.int.id]
# Add the previously created public SSH key to the server
key_name = var.pubkey_name
# Do not allocate or assign an external Elastic IP to the instance
associate_public_ip_address = false
# Activate monitoring of the instance
monitoring = true
# Create an instance only after the creation of:
# — subnet
# — internal security group
# — public SSH key
depends_on = [
aws_subnet.subnet,
aws_security_group.int,
aws_key_pair.pubkey,
]
tags = {
Name = "VM for ${var.hostnames[count.index]}"
}
# Create a volume to be attached to an instance
ebs_block_device {
# Instruct the system to delete the volume along with the instance
delete_on_termination = true
# Specify the device name in the format "disk<N>",
device_name = "disk1"
# its type
volume_type = var.vm_volume_type
# and size
volume_size = var.vm_volume_size
tags = {
Name = "Disk for ${var.hostnames[count.index]}"
}
}
}
After creating instances, assign an external security group to the first one:
Assigning security groups
resource "aws_network_interface_sg_attachment" "sg_attachment" {
# Get the external security group ID
security_group_id = aws_security_group.ext.id
# and the network interface ID of the first instance
network_interface_id = aws_instance.vms[0].primary_network_interface_id
# Assign a security group only after the creation of
# respective instance and security group
depends_on = [
aws_instance.vms,
aws_security_group.ext,
]
}
And an external Elastic IP:
Elastic IP association
resource "aws_eip_association" "eips_association" {
# Get the number of created EIPs
count = var.eips_count
# and assign each of them to instances one by one
instance_id = element(aws_instance.vms.*.id, count.index)
allocation_id = element(aws_eip.eips.*.id, count.index)
}
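As a hedged aside, the same association can be written with index syntax instead of element() and the legacy .* splat operator. The two forms behave identically here because eips_count does not exceed vms_count (element() would wrap around the list, plain indexing would not). Use one form or the other, not both, since the resource address is the same:
resource "aws_eip_association" "eips_association" {
  count         = var.eips_count
  instance_id   = aws_instance.vms[count.index].id
  allocation_id = aws_eip.eips[count.index].id
}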
Output values — outputs.tf#
The outputs.tf file describes, as consecutive output blocks, all the values that become known only after the configuration plan is applied.
In our case, the configuration ends with a single output block. It prints the Elastic IP address of the web application server to the terminal so that the user does not need to look it up in the cloud web interface:
outputs.tf
output "ip_of_webapp" {
description = "IP of webapp"
# Take the value of the public IP address of the first instance
# and output it when Terraform finished its work
value = aws_eip.eips[0].public_ip
}
This way, you can immediately copy the IP address to connect to the server and continue working with it.
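If you need more values, you can add further output blocks in the same way. As a hedged example that is not part of the quick_start code (it assumes the private_ip attribute is available, as it is in the AWS provider), the private addresses of all created VMs could be printed as well:
output "private_ips_of_vms" {
  description = "Private IPs of the created VMs"
  value       = aws_instance.vms[*].private_ip
}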
Using the ready-to-use configuration#
The described actions result in a Terraform configuration consisting of five files:
providers.tf — a file with the settings for connecting to, and interacting with, the services or platforms used to deploy the infrastructure;
variables.tf — a file describing all the variables in use and their default values;
terraform.tfvars — a file with variable values, including the secret and access keys; for this reason, it should be stored in a secure place inaccessible to third parties;
main.tf — the main configuration file, which describes the entire project infrastructure that Terraform manages;
outputs.tf — a file with the description of output values.
To deploy the infrastructure from this configuration, follow the steps below:
Clone the repository and navigate to the folder with configuration files:
git clone https://github.com/C2Devel/terraform-examples.git && cd terraform-examples/quick_start
Copy the variable template with values from the example file:
cp terraform.tfvars.example terraform.tfvars
Be sure to make the necessary changes to the new file. To get the minimum working configuration, specify your secret_key and access_key in it to work with the K2 Cloud API.
Run the initialization command:
terraform init
Terraform uses this command to initialize the configuration, download all the necessary plugins and get ready to work with the infrastructure.
Run the command to generate a plan for the changes to make:
terraform plan
The terminal will display all the changes Terraform plans to make to the infrastructure.
Study the output carefully. If the proposed changes are the same as expected, apply them:
terraform apply
The plan will be displayed again. Carefully double-check it. To execute the plan, type yes and press Enter.
After some time, the entire infrastructure you have described will be created in K2 Cloud. If you need to make further changes to it, you should change the current Terraform configuration and reapply the plan.
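As a hedged illustration of such a change (the third host name is arbitrary): to add one more VM, increase vms_count and extend hostnames in terraform.tfvars, then run terraform plan and terraform apply again:
# terraform.tfvars: fragment with the changed values
vms_count = 3
hostnames = ["webapp", "db", "cache"]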
To display the values of the output variables in the terminal again, enter the command:
terraform output
To remove the infrastructure created with Terraform, you can run the following command:
terraform destroy
The terminal will display the infrastructure deletion plan. To confirm the deletion, type yes and press Enter.
Important
Be extremely careful when running this command, since it deletes the entire infrastructure described in the configuration.
To sum up, the main Terraform configuration, which is directly responsible for actions on the infrastructure, consists of blocks called resources. By combining blocks of different types and changing their order, you can assemble exactly the infrastructure your project requires, like a Lego set.
For additional examples of how to use Terraform, as well as the supported and unsupported parameters for each resource, see the cases folder in our official terraform-examples repository on GitHub. The examples are written for the AWS provider v3.63.0 (Terraform v0.14.0).