Kubernetes clusters#
General information#
Kubernetes is an open-source container orchestration platform designed by Google for deploying, scaling, and managing containerized applications. The platform has become the standard for container orchestration and the flagship project of the Cloud Native Computing Foundation, supported by Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.
In K2 Cloud, Kubernetes creates an abstraction layer above a group of instances and allows you to easily deploy and run applications with a microservice architecture. For more information about Kubernetes, see our blog and the official website.
Glossary#
Cluster is the main element of Kubernetes. K2 Cloud supports Kubernetes clusters based on Elastic Kubernetes Service (EKS), which include master nodes and Ingress controllers; worker node groups are created separately.
Master node is a control plane cluster node; it hosts the service applications that the cluster needs to operate.
Worker node is a compute node where user tasks are performed.
Node group is a group of compute nodes managed by an EKS cluster. The number of nodes in the group can change dynamically depending on the workload. To create node groups, use the Auto Scaling service.
Pod is a group of one or more containers that share the network, the IP address, and other resources (storage, labels).
Worker node labels are key-value pairs assigned to worker nodes. They are similar to tags but are defined in the Kubernetes API paradigm. For example, you can use labels to specify how workers are used (environment, release, etc.).
Taints are applied to all worker nodes in the group. They prevent the scheduler from placing pods on these nodes unless the pods have matching tolerations (see the example after this list).
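As an illustration of how labels and taints work, the commands below assign them to a worker node with kubectl. This is a minimal sketch: the node name worker-node-1 and the key-value pairs are placeholder values, not names from K2 Cloud.
# Label a worker node so that pods can select it via a nodeSelector
kubectl label nodes worker-node-1 environment=staging
# Taint the node so that only pods with a matching toleration are scheduled on it
kubectl taint nodes worker-node-1 dedicated=gpu:NoSchedule
# Verify the labels and taints
kubectl get node worker-node-1 --show-labels
kubectl describe node worker-node-1 | grep -A 2 Taints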
Kubernetes Clusters in K2 Cloud#
You can quickly launch containerized applications integrated with K2 Cloud services. This allows you to efficiently distribute traffic and scale clusters in a secure and stable cloud infrastructure. You can manage security groups, link Kubernetes clusters to existing instances, use object storage, and configure VPN connections between your infrastructure and Kubernetes clusters.
You can manage the service via the web interface or API.
Note
Resources required for Kubernetes clusters are prepared and maintained using a system user. Action Log records all API calls to the Kubernetes Clusters service, including system requests to create, modify, and delete resources required for the service to run.
K2 Cloud supports the following Kubernetes versions:
1.30.2;
1.29.4;
1.28.9;
1.27.3;
1.26.6;
1.25.11.
The following additional services can be installed in the Kubernetes Clusters service:
Ingress controller, which routes external requests to applications deployed in Kubernetes.
EBS provider, which allows Kubernetes to manage volumes in K2 Cloud and use them as Persistent Volumes (see the sketch after this list).
Docker Registry, which is set up for use in a Kubernetes cluster. In Docker Registry, you can store container images for later deployment in Kubernetes.
NLB provider, which simplifies the deployment of load balancers for Kubernetes clusters in K2 Cloud. You can use it to create load balancers that distribute both external and internal traffic.
Kubernetes Dashboard, which provides a web interface for working with Kubernetes. You can use the dashboard to deploy containerized applications, troubleshoot them, manage cluster resources, and perform other tasks.
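For example, once the EBS provider is installed, a volume can be requested through a regular PersistentVolumeClaim. This is a minimal sketch: the claim name demo-pvc and the storage class name ebs-sc are assumptions; check which storage classes actually exist in your cluster with kubectl get storageclass.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc  # assumption; list the real classes with "kubectl get storageclass"
  resources:
    requests:
      storage: 10Gi
EOF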
When do you need the Kubernetes Clusters service?
To quickly deploy and scale environments for developers.
If your infrastructure undergoes a large number of changes and releases.
To support a floating workload with a varying number of users.
When time to market is important.
If you have applications with a microservice architecture.
Before you begin#
To work with the Kubernetes Clusters service, a user needs the EKSFullAccess grants in the project. For instance, such grants are available to administrators in the CloudAdministrators group. If necessary, you can create a separate user, add the user to the project, and either attach the EKSFullAccess policy to the user or add the user to the group of cloud administrators in this project.
In addition, the project should have the following resources:
If you need an EBS provider, create a user in the IAM section, add the user to the project, and attach the EKSCSIPolicy policy to it.
Managing Kubernetes cluster#
To work with a Kubernetes cluster, you can use any familiar tools: the kubectl command-line tool, Draft, Helm, Terraform, etc. See the official Kubernetes documentation for more information about Kubernetes tools.
To start working with the cluster directly from a master node, connect to it manually via SSH and do the following:
Set the KUBECONFIG environment variable:
export KUBECONFIG=/root/.kube/config
Start the proxy server:
kubectl proxy &
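Once the proxy is running, the cluster API is exposed locally on port 8001 (the kubectl proxy default), so you can query it directly, for example:
# Query the API server through the local proxy
curl http://127.0.0.1:8001/version
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods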
Managing a Kubernetes cluster with kubectl#
Important
To provide access to the cluster API server, allow access via port 6443.
To manage the cluster, install kubectl on your local computer by running the following commands:
Download and install the kubectl client:
On Linux:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client   # command to verify the installation
On Windows:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.0/bin/windows/amd64/kubectl.exe
Move the executable file to a directory listed in the PATH environment variable and verify the kubectl installation:
kubectl version --client
Docker Desktop for Windows adds its own kubectl version to the PATH environment variable. Check whether Docker Desktop is installed on your local computer. If it is, put the path to the newly installed binary ahead of the entry added by the Docker Desktop installer, or remove the kubectl that ships with Docker Desktop.
Download the cluster configuration from the cloud console to your local computer.
Set the local environment variable:
On Linux: export KUBECONFIG=<config-file>
On Windows: set KUBECONFIG=<config-file>
Start the proxy server:
kubectl proxy &
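To make sure kubectl can reach the cluster with the downloaded configuration, run a couple of read-only commands, for example:
kubectl cluster-info       # shows the API server address taken from the config
kubectl get nodes -o wide  # lists the cluster nodes with their IP addresses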
The “High-Availability cluster” mode#
In this mode, a cluster starts in a configuration with three master nodes. Kubernetes cluster master nodes can be deployed in either three availability zones or a placement group within an availability zone. Distribution across multiple physical computing nodes allows the cluster to remain operational if one master node fails.
If an Elastic IP has been assigned to the failed node, it will be reassigned to a healthy master node.
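To see how the master nodes of a high-availability cluster are distributed, you can list the control plane nodes. In Kubernetes 1.25 and later they carry the standard node-role.kubernetes.io/control-plane label; this is a generic Kubernetes sketch, not a K2 Cloud-specific command.
kubectl get nodes -l node-role.kubernetes.io/control-plane -o wide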
Usage restrictions#
The following default quotas are allocated for Kubernetes clusters:
5 Kubernetes clusters in total per project;
5 worker node groups per project (the quota for worker node groups is allocated within the Auto Scaling quota).
Quotas for other resources are allocated within the scope of the overall quotas for cloud resources (instances, volumes, etc.). If you need to increase the quotas, contact K2 Cloud support.