Terraform by HashiCorp#

General information#

Terraform is an advanced tool for automated cloud infrastructure management. It uses a simple and expressive language similar to plain English. Code in this language is written in a declarative manner: you describe the desired end state rather than the steps needed to reach it.

Once you have written such code, you can reuse it many times by entering a couple of short commands in the terminal. Every time, you will get a predictable result: the requested number of VMs will be created in the cloud from the specified images, the required number of external IP addresses will be allocated, security groups will be configured, and all the other actions described in the code will be performed. Performing the same actions in the web interface takes more time, especially if you need to repeat them. In addition, doing this manually carries a much higher risk of making a mistake and getting something different from what you planned, with extra time spent figuring out what went wrong.

This approach to infrastructure deployment is called “Infrastructure as Code” (IaC) and allows you to:

  • use version control systems;

  • comment the code to document what it does;

  • test the code before using it in an actual infrastructure to identify possible negative consequences;

  • hand over the code to other developers to evaluate its quality and finally get the best solution.
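
As a minimal illustration of the declarative style, a hypothetical resource description might look like this (the image ID is a placeholder, not a real value):

```hcl
# Declarative style: you state the desired result (one VM from a given image),
# and Terraform works out which API calls are needed to reach that state.
resource "aws_instance" "example" {
  ami           = "cmi-XXXXXXXX"  # hypothetical image ID
  instance_type = "m5.2small"
}
```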

Installation and configuration#

Note

This guide was written and tested with Terraform v1.12.1 and the rockitcloud provider v25.2.0. The information below applies to these versions. To ensure stability and compatibility, we pinned the provider version in the configuration code.

Terraform is distributed as an executable file with versions for Linux, Windows, macOS, etc. You can download the version you need from the official download page. If the official page is unavailable, download the installation package here. After downloading and extracting the archive, we recommend moving the extracted file to any folder specified in the current environment variable PATH or adding the target folder to this variable. For Linux, it can be /usr/local/bin/, while for Windows — C:\Windows\system32 (OS administrator rights are required to access system folders). Thus, you will not have to specify the full path to the file each time.

Local mirrors for common providers#

Providers from K2 Cloud’s local mirrors are identical to those from the original repositories. You can use them to avoid installation problems.

Note

For up-to-date information, follow the link.

Provider, available versions, and link to the mirror:

  • AWS: 2.64.0, 3.15.0, 3.63.0, 4.8.0 (Terraform AWS Provider)

  • Kubernetes: 2.10.0, 2.11.0 (Terraform Kubernetes Provider)

  • Random: 3.3.2 (Terraform Random Provider)

  • Template: 2.2.0 (Terraform Template Provider)

  • TLS: 3.1.0 (Terraform TLS Provider)

Provider by K2 Cloud#

Attention

Starting from version 24.1.0, the provider croccloud is released under the new name rockitcloud. Previous versions of the provider croccloud are available in the official Terraform registry.

To switch to the rockitcloud provider, use the following instructions.

  1. Specify the new provider name and version in the configuration.

    terraform {
      required_providers {
        aws = {
          source  = "hc-registry.website.k2.cloud/c2devel/rockitcloud"
          version = "24.1.0"
        }
      }
    }
    
  2. If the Terraform state file is not empty, replace the provider for the resources that are in the file.

    terraform state replace-provider -state terraform.tfstate hc-registry.website.cloud.croc.ru/c2devel/croccloud hc-registry.website.k2.cloud/c2devel/rockitcloud
    
  3. Initialize the Terraform configuration again.

    terraform init
    

To work with K2 Cloud, you can use the rockitcloud provider from C2Devel. It is based on the AWS provider v4.14.0 and excludes Terraform resources not supported in K2 Cloud. At the same time, it adds functionality specific to K2 Cloud, such as st2 volumes.
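
As a sketch of that K2-specific functionality, a standalone st2 volume could be described roughly as follows (the size and zone are example values; check the provider documentation for the exact constraints):

```hcl
# Sketch only: a standalone volume of the K2-specific st2 type
resource "aws_ebs_volume" "archive" {
  availability_zone = "ru-msk-comp1p"
  type              = "st2"  # volume type specific to K2 Cloud
  size              = 512    # example size in GiB
}
```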

For information about rockitcloud provider releases, visit the official repository.

The rockitcloud provider is included in the official Terraform registry. The provider page offers documentation on the resources supported in K2 Cloud.

In addition to the official registry, the current documentation version is available in the local mirror of K2 Cloud.

Note

The documentation contains a unified list of resources whose prefix is generated automatically for each resource and matches the provider name rockitcloud. For compatibility with configurations written for the AWS provider, the aws prefix is retained in resource descriptions and usage examples.

Resource list by category#

Categories of supported Terraform resources and data sources:

  • Auto Scaling

  • Backup

  • CloudWatch

  • Direct Connect

  • EBS (EC2)

  • EC2 (Elastic Compute Cloud)

  • EKS (Elastic Kubernetes)

  • ELB (Elastic Load Balancing)

  • IAM (Identity & Access Management)

  • PaaS

  • Route53

  • Transit Gateway

  • S3 (Simple Storage)

  • VPC (Virtual Private Cloud)

  • VPN (Site-to-Site)

Describing Terraform configuration#

The ready-to-use code described below is stored in the quick_start folder of our official terraform-examples repository on GitHub. You can download it and start using it right away with minimal edits. However, to understand the code better, we recommend following this guide’s steps and operations one by one.

Warning

When using Terraform, run commands only if you have a good idea of what you are doing and why. Terraform warns of potentially destructive operations and requires additional confirmation in these cases. Pay attention to these warnings, since otherwise, you may inadvertently lose part or even all of your project’s infrastructure and data. And if there are no backups, the data will be lost forever.

As an example, let’s consider the description of the Terraform configuration to automatically create an infrastructure consisting of:

  • 1 VPC (to isolate the project infrastructure at the network layer);

  • 1 subnet with prefix /24;

  • 2 VMs: one for a web application, and the other for a database server;

  • 1 Elastic IP (an address assigned to the VM with the web application so that the VM, and the application on it, can be accessed from the Internet);

  • 2 security groups: one group allows inbound traffic from the interfaces to which it is assigned (so that the VMs interact only with each other within the created subnet), while the other allows access from the outside over TCP ports 22, 80, and 443. All outbound traffic is allowed for each of the groups;

  • 1 bucket to store project files.

Provider description — providers.tf#

Terraform deals with various cloud platforms and services, using special plugins, which are called providers. To work with K2 Cloud, you can use K2 Cloud provider (c2devel/rockitcloud) or AWS provider (hashicorp/aws), since K2 Cloud API is AWS-compatible.

Create a providers.tf file to describe the required providers and their settings:

providers.tf
# Pin the provider version to guarantee compatibility
# and stable operation of this configuration
terraform {
  required_providers {
    aws = {
      # Use the local K2 Cloud mirror
      # as the download source for the c2devel/rockitcloud provider
      source  = "hc-registry.website.k2.cloud/c2devel/rockitcloud"
      version = "~> 25.2"
    }
  }
}

# Configure the provider for working
# with all K2 Cloud services
provider "aws" {
  insecure   = false
  access_key = var.access_key
  secret_key = var.secret_key

  # Specify the K2 Cloud region
  region = "ru-msk"
}

Note that access_key and secret_key do not contain the data itself but rather point to variable values. This is done on purpose so that a ready-to-use configuration can be handed over to other people without revealing the key values. In addition, this allows you to quickly define all keys in one place and avoid multiple edits in the code itself when they change.
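
If you also want Terraform to redact the key values in its plan and apply output, the key variables can additionally be marked as sensitive (supported since Terraform 0.14). This is an optional hardening sketch, not part of the configuration described below:

```hcl
variable "secret_key" {
  description = "Enter the secret key"
  type        = string
  sensitive   = true  # Terraform redacts the value in plan/apply output
}
```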

If a K2 Cloud region is specified in the configuration, the provider builds the API connection addresses itself. To override the API addresses, you can use the provider.endpoints block; in that case, the region can be set to region-1.

The ``provider.endpoints`` block#

The provider.endpoints block lets you explicitly set the addresses for connecting to the K2 Cloud API. The required set of addresses depends on the resources you use. The mapping between resource categories and endpoint names is given in the provider documentation.

You can get addresses and other settings for API access in the user profile.

provider "aws" {
  endpoints {
    autoscaling   = "https://autoscaling.ru-msk.k2.cloud"
    backup        = "https://backup.ru-msk.k2.cloud"
    cloudwatch    = "https://cloudwatch.ru-msk.k2.cloud"
    directconnect = "https://directconnect.ru-msk.k2.cloud"
    ec2           = "https://ec2.ru-msk.k2.cloud"
    eks           = "https://eks.ru-msk.k2.cloud"
    elbv2         = "https://elb.ru-msk.k2.cloud"
    iam           = "https://iam.k2.cloud"
    paas          = "https://paas.ru-msk.k2.cloud"
    route53       = "https://route53.k2.cloud"
  }

  # ...
}

Note

We recommend always specifying the ec2 endpoint. The provider uses it to send service requests.

Variable description — variables.tf#

Information about all variables in use is stored in the variables.tf file, where you can specify a description and the default value for each variable.

variables.tf
variable "secret_key" {
  description = "Enter the secret key"
}

variable "access_key" {
  description = "Enter the access key"
}

variable "public_key" {
  description = "Enter the public SSH key"
}

variable "pubkey_name" {
  description = "Enter the name of the public SSH key"
}

variable "bucket_name" {
  description = "Enter the bucket name"
}

variable "az" {
  description = "Enter availability zone (ru-msk-comp1p by default)"
  default     = "ru-msk-comp1p"
}

variable "eips_count" {
  description = "Enter the number of Elastic IP addresses to create (1 by default)"
  default     = 1
}

variable "vms_count" {
  description = "Enter the number of virtual machines to create (2 by default)"
  default     = 2
}

variable "hostnames" {
  description = "Enter hostnames of VMs"
}

variable "allow_tcp_ports" {
  description = "Enter TCP ports to allow connections to (22, 80, 443 by default)"
  default     = [22, 80, 443]
}

variable "vm_template" {
  description = "Enter the template ID to create a VM from (cmi-DC1CBC52 [Centos 9 Stream] by default)"
  default     = "cmi-DC1CBC52"
}

variable "vm_instance_type" {
  description = "Enter the instance type for a VM (m5.2small by default)"
  default     = "m5.2small"
}

variable "vm_volume_type" {
  description = "Enter the volume type for VM disks (gp2 by default)"
  default     = "gp2"
}

variable "vm_volume_size" {
  # The default size and increment step are given for the gp2 volume type
  # They may differ for other volume types; see the volume documentation for details
  description = "Enter the volume size for VM disks (32 by default, in GiB, must be multiple of 32)"
  default     = 32
}

The variables.tf file contains only a list of all variables used in the configuration (and default values for some of them). The actual values are set in the terraform.tfvars file.

Actual variable values — terraform.tfvars#

The actual values to apply in each case are specified in the terraform.tfvars file. Its content takes precedence over the default values, making it easy to override the default configuration behaviour.

terraform.tfvars
secret_key       = "ENTER_YOUR_SECRET_KEY_HERE"
access_key       = "ENTER_YOUR_ACCESS_KEY_HERE"
public_key       = "ENTER_YOUR_PUBLIC_KEY_HERE"
pubkey_name      = "My-project-SSH-key"
bucket_name      = "My-project-bucket"
az               = "ru-msk-comp1p"
eips_count       = 1
vms_count        = 2
hostnames        = ["webapp", "db"]
allow_tcp_ports  = [22, 80, 443]
vm_template      = "cmi-DC1CBC52"
vm_instance_type = "m5.2small"
vm_volume_type   = "gp2"
vm_volume_size   = 32

The template with all variables and their values is in the terraform.tfvars.example file. To set variables faster, copy the file content to the terraform.tfvars file and then change values as required:

cp terraform.tfvars.example terraform.tfvars

Warning

Remember that the terraform.tfvars file may contain sensitive data that must not fall into the wrong hands, such as your key values. If you use Git to store and version your Terraform configurations, make sure the file does not end up in the repository with a commit; you can prevent this by adding an appropriate exclusion to .gitignore. Also, if you hand your Terraform configuration over to other people, make sure you do not hand over terraform.tfvars with it. Leaked keys may allow third parties to gain control over your infrastructure.

You can obtain your secret_key and access_key values in the cloud management console. Click the user login in the top right corner and select Profile → Get API access settings.

K2 Cloud supports 2048-bit RSA keys. An SSH key can be generated, for example, by the command:

ssh-keygen -b 2048 -t rsa

Set the public key as the public_key value.

pubkey_name must include letters and digits only. bucket_name may additionally include dots and hyphens (see bucket naming conventions).

When all variables are described, and their values are set, you can start describing the main configuration.

Main configuration — main.tf#

The code is written in the main configuration file main.tf and ensures the future automatic performance of all critical actions on the infrastructure.

The configuration consists of code blocks, each of which, as a rule, is responsible for actions on objects of a particular type, for example, VMs or security groups. In Terraform, such blocks are called resources. Next, one by one, we consider all resource blocks required to describe the above configuration. Each block has comments explaining the changes being made.

First, create a VPC to isolate the project resources at the network layer:

Create VPC
resource "aws_vpc" "vpc" {
  # Set the VPC network address in CIDR notation (IP/Prefix)
  cidr_block         = "172.16.8.0/24"
  # Enable domain name resolution via K2 Cloud DNS servers
  enable_dns_support = true

  # Assign the Name tag to the resource being created
  tags = {
    Name = "My project"
  }
}

Next, define a subnet in the VPC we have just created (the CIDR block of the subnet must belong to the address space allocated to the VPC):

Creating a subnet
resource "aws_subnet" "subnet" {
  # Set the availability zone in which the subnet will be created
  # Its value is taken from the az variable
  availability_zone = var.az
  # Use the same CIDR block of IP addresses for the subnet as for the VPC
  cidr_block        = aws_vpc.vpc.cidr_block
  # Specify the VPC where the subnet will be created
  vpc_id            = aws_vpc.vpc.id

  # Include the value of the az variable and the VPC Name tag in the subnet Name tag
  tags = {
    Name = "Subnet in ${var.az} for ${lookup(aws_vpc.vpc.tags, "Name")}"
  }
}

To give the VPC access to the Internet, create an Internet gateway and add a route through it to the route table:

Creating an Internet gateway and a route
resource "aws_internet_gateway" "igw" {
  # Specify the VPC to which the Internet gateway will be attached
  vpc_id = aws_vpc.vpc.id

  # Include the VPC Name tag in the Internet gateway Name tag
  tags = {
    Name = "IGW for ${lookup(aws_vpc.vpc.tags, "Name")}"
  }
}

resource "aws_route" "igw_route" {
  # Select the main route table of the VPC
  route_table_id         = aws_vpc.vpc.main_route_table_id
  # Specify the destination network address in CIDR notation (IP/Prefix)
  destination_cidr_block = "0.0.0.0/0"
  # Use the created Internet gateway as the gateway
  gateway_id             = aws_internet_gateway.igw.id
}

Next, add a public SSH key, which will later be used to access the VM:

Add SSH key
resource "aws_key_pair" "pubkey" {
  # Specify the SSH key name (the value is taken from the pubkey_name variable)
  key_name   = var.pubkey_name
  # and public key content
  public_key = var.public_key
}

Create a bucket in the object storage to store website data and backups:

Creating a bucket
resource "aws_s3_bucket" "bucket" {
  # Set the bucket name from the bucket_name variable
  bucket = var.bucket_name
}

resource "aws_s3_bucket_acl" "bucket_acl" {
  # Set access permissions for the created bucket
  bucket = aws_s3_bucket.bucket.id
  acl    = "private"
}

Allocate an Elastic IP to enable access to the web application server from the outside:

Allocating Elastic IP
resource "aws_eip" "eips" {
  # The number of EIPs to allocate is taken from the eips_count variable;
  # this lets you allocate the required number of EIPs at once.
  # In our case, an address is allocated only to the first server
  count = var.eips_count
  # Allocate within our VPC
  vpc = true

  # Use the hostname of the future VM, taken by index
  # from the hostnames array, as the Name tag value
  tags = {
    Name = "${var.hostnames[count.index]}"
  }
}

Then create two security groups: one to allow access from all addresses over ports 22, 80 and 443, and the second to allow full access within itself. Later, add a VM with a web application to the first group and place both our servers in the second so that they can interact with each other:

Creating security groups
# Create a security group for external access
resource "aws_security_group" "ext" {
  # Within our VPC
  vpc_id = aws_vpc.vpc.id
  # set the security group name
  name = "ext"
  # and its description
  description = "External SG"

  # Define inbound rules
  dynamic "ingress" {
    # Set the name of the variable that will be used
    # to iterate over all the specified ports
    iterator = port
    # Iterate over the ports from the allow_tcp_ports list
    for_each = var.allow_tcp_ports
    content {
      # Set the port range (in our case, it consists of a single port),
      from_port = port.value
      to_port   = port.value
      # the protocol,
      protocol = "tcp"
      # and the source IP address in CIDR notation (IP/Prefix)
      cidr_blocks = ["0.0.0.0/0"]
    }
  }

  # Define the outbound rule: allow all outbound IPv4 traffic
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "External SG"
  }
}

# Create an internal security group
# that allows all traffic between its members
resource "aws_security_group" "int" {
  vpc_id      = aws_vpc.vpc.id
  name        = "int"
  description = "Internal SG"

  ingress {
    from_port = 0
    to_port   = 0
    protocol  = "-1"
    self      = true
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "Internal SG"
  }
}

Now write a block of code to create VMs:

Creating instances
resource "aws_instance" "vms" {
  # The number of VMs to create is taken from the vms_count variable
  count = var.vms_count
  # The ID of the image to create the instance from comes from the vm_template variable
  ami = var.vm_template
  # The instance type of the VM being created comes from the vm_instance_type variable
  instance_type = var.vm_instance_type
  # Assign the instance an internal IP address from the subnet created earlier in the VPC
  subnet_id = aws_subnet.subnet.id
  # Attach the external and internal security groups to the instance
  vpc_security_group_ids = [
    aws_security_group.ext.id,
    aws_security_group.int.id,
  ]
  # Add the previously created public SSH key to the server
  key_name = aws_key_pair.pubkey.key_name

  tags = {
    Name = "VM for ${var.hostnames[count.index]}"
  }

  # Create a volume attached to the instance
  ebs_block_device {
    # Delete the volume together with the instance
    delete_on_termination = true
    # Set the device name in the form "disk<N>",
    device_name = "disk1"
    # its type
    volume_type = var.vm_volume_type
    # and size
    volume_size = var.vm_volume_size

    tags = {
      Name = "Disk for ${var.hostnames[count.index]}"
    }
  }
}

After the VM instances are created, assign the Elastic IP to the first one:

Elastic IP association
resource "aws_eip_association" "eips_association" {
  # An EIP can be assigned only after the Internet gateway is attached to the VPC
  depends_on = [aws_internet_gateway.igw]

  # Take the number of created EIPs
  count         = var.eips_count
  # and assign each of them to an instance in turn
  instance_id   = element(aws_instance.vms.*.id, count.index)
  allocation_id = element(aws_eip.eips.*.id, count.index)
}

Output values — outputs.tf#

The outputs.tf file describes, as consecutive output blocks, all values that become known only after the configuration plan has been applied.

The configuration is completed with a single output block in our case. This block outputs the Elastic IP address of the web application server to the terminal so that the user does not need to look for it in the cloud web interface:

outputs.tf
output "ip_of_webapp" {
  description = "IP of webapp"
  # Take the public IP address of the first instance
  # and output it when Terraform finishes its work
  value       = aws_eip.eips[0].public_ip
}

Thus, we can right away copy the IP address for the server connection and continue working with it.

Use of a ready-to-use configuration#

The described actions result in a Terraform configuration consisting of five files:

  • providers.tf — a file with the settings for connecting to, and interacting with, services or platforms that will be used to deploy the infrastructure;

  • variables.tf — a description file of all used variables and their default values;

  • terraform.tfvars — a file with variable values, including the secret and access keys, which is why it should be stored in a secure place inaccessible to third parties;

  • main.tf — the main configuration file, which describes the entire project infrastructure that Terraform manages;

  • outputs.tf — a file describing the output values.

To deploy the infrastructure from this configuration, follow the steps below:

  1. Clone the repository and navigate to the folder with configuration files:

    git clone https://github.com/C2Devel/terraform-examples.git && cd terraform-examples/quick_start
    
  2. Copy the variable template with values from the example file:

    cp terraform.tfvars.example terraform.tfvars
    

    Be sure to make the necessary changes to the new file. To get the minimum working configuration, specify your secret_key and access_key in it to work with the K2 Cloud API.

  3. Run the initialization command:

    terraform init
    

    Terraform uses this command to initialize the configuration, download all the necessary plugins and get ready to work with the infrastructure.

  4. Run the command to generate a plan for the changes to make:

    terraform plan
    

    The terminal will display all the changes Terraform plans to make to the infrastructure.

  5. Study the output carefully. If the proposed changes are the same as expected, apply them:

    terraform apply
    

    The plan will be displayed again. Carefully double-check it. To execute the plan, type yes and press Enter.

After some time, the entire infrastructure you have described will be created in K2 Cloud. If you need to make further changes to it, you should change the current Terraform configuration and reapply the plan.
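
For instance, to scale the example out by one more VM, it is usually enough to adjust the variable values and reapply the plan; the values below are illustrative:

```hcl
# terraform.tfvars — e.g. grow from 2 to 3 VMs
# (hostnames must provide a name for each VM)
vms_count = 3
hostnames = ["webapp", "db", "cache"]
```

After editing the file, run terraform plan to review the proposed changes, then terraform apply to execute them.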

To display the values of the output variables in the terminal again, enter the command:

terraform output

To remove the infrastructure created with Terraform, you can run the following command:

terraform destroy

The terminal will display the infrastructure deletion plan. To confirm the deletion, type yes and press Enter.

Important

Be extremely careful when running this command since the entire infrastructure described in the configuration is deleted.

To sum up, the main Terraform configuration, which is directly responsible for actions on the infrastructure, consists of blocks called resources. By changing the sequence and type of blocks, you can assemble exactly the infrastructure your project requires, much like building with Lego.

For additional examples of how to use Terraform, as well as the supported and unsupported parameters for each resource, see the cases folder in our official terraform-examples repository on GitHub. The examples are written for AWS provider v3.63.0 (Terraform v0.14.0).