Kubernetes clusters#
General information#
Kubernetes is an open-source container orchestration platform, originally designed by Google, for deploying, scaling, and managing containerized applications. It has become the de facto standard for container orchestration and the flagship project of the Cloud Native Computing Foundation, supported by Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.
In K2 Cloud, Kubernetes creates an abstraction layer above a group of instances and lets you easily deploy and run applications with a microservice architecture. For more information about Kubernetes, see our blog and the official website.
Glossary#
A cluster is the basic Kubernetes element. K2 Cloud supports two versions of Kubernetes clusters with different architectures. EKS clusters, which feature the new architecture, include master nodes and Ingress controllers. In this version, node groups are created separately, unlike the previous version, where worker nodes are part of a Kubernetes cluster.
A master node is a control plane cluster node; it hosts the service applications that the cluster needs to operate.
A worker node is a compute node where user workloads run.
A node group is a group of compute nodes managed by an EKS cluster. The number of nodes in the group can change dynamically depending on the workload. To create node groups, use the Auto Scaling service.
A pod is a group of one or more containers that share the network, IP address, and other resources (storage, labels).
Worker node labels are key-value pairs assigned to worker nodes. They are similar to tags but are defined in the Kubernetes API paradigm. For example, you can use labels to indicate how workers are used (environment, release, etc.).
Taints are applied to all worker nodes in the group. They prevent the scheduler from placing pods on these nodes unless the pods have matching tolerations. See the example after this glossary.
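As a quick illustration of how labels and taints work together, here is a sketch using kubectl against a hypothetical worker node named worker-1 (the label and taint keys and values are made up for the example):
kubectl label nodes worker-1 environment=staging            # record how the worker is used
kubectl taint nodes worker-1 dedicated=staging:NoSchedule   # keep pods without a matching toleration off the node
kubectl describe node worker-1                              # inspect the assigned labels and taints
After this, the scheduler places a pod on worker-1 only if the pod spec carries a toleration for the dedicated=staging:NoSchedule taint.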
Kubernetes Clusters in K2 Cloud#
You can quickly launch containerized applications integrated with K2 Cloud services. This allows you to efficiently distribute traffic and scale clusters in a secure and stable cloud infrastructure. You can manage security groups, link Kubernetes clusters to existing instances, use object storage, and configure VPN connections between your infrastructure and Kubernetes clusters.
You can manage the service via the web interface or API.
Note
Resources required for Kubernetes clusters are prepared and maintained by a system user. Action Log records all API calls to the Kubernetes clusters service, including system requests to create, modify, and delete resources required for the service to run.
K2 Cloud supports the following Kubernetes versions:
1.30.2;
1.29.4;
1.28.9;
1.27.3;
1.26.6;
1.25.11.
Additional services can be installed in the Kubernetes Clusters service:
Ingress controller, which can be used to route all external requests to applications deployed in Kubernetes.
EBS provider, which allows Kubernetes to manage volumes in K2 Cloud and use them as Persistent Volumes (see the sketch after this list).
Docker Registry, which is set up for use in a Kubernetes cluster. In Docker Registry, you can store container images for later deployment in Kubernetes.
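To show how the EBS provider is typically consumed, here is a minimal sketch that requests a volume through a PersistentVolumeClaim. The storage class name ebs-sc is an assumption for illustration; check the classes actually registered in your cluster with kubectl get storageclass:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc   # hypothetical class backed by the EBS provider
  resources:
    requests:
      storage: 10Gi
EOF
Once the claim is bound, the EBS provider creates a matching volume in K2 Cloud and attaches it to the node where the consuming pod runs.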
When do you need the Kubernetes clusters service?#
If you need to quickly deploy scalable development environments.
If your infrastructure undergoes frequent changes and releases.
If your workload fluctuates with the number of users.
If time to market is critical for you.
If you have applications with a microservice architecture.
Before you begin#
To start working with the Kubernetes Clusters service, you need to:
Create a project, if you don’t have one.
In the IAM section, create a user, add it to the project, and assign the EKSFullAccess policy to grant the privileges required to work with the Kubernetes service.
Note
If you want to give the user access to other cloud services, add the user to the CloudAdministrators group. In this case, assigning the EKSFullAccess policy is not required.
Make sure that the project has all the required resources: subnets, SSH keys, and security groups. Create any that are missing.
If you need the EBS provider, create a user in the IAM section, add the user to the project, and assign the EKSCSIPolicy policy to it.
Managing Kubernetes cluster#
To work with a Kubernetes cluster, you can use any familiar tools: the kubectl command-line tool, Draft, Helm, Terraform, etc. See the official Kubernetes documentation for more information on Kubernetes tools.
To start working with the cluster directly from a master node, connect to it manually via SSH and do the following:
Set the KUBECONFIG environment variable:
export KUBECONFIG=/root/.kube/config
Start the proxy server:
kubectl proxy &
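With the proxy running, the Kubernetes API is reachable on the local port that kubectl proxy binds by default (8001). For example, a simple read-only sanity check:
curl http://127.0.0.1:8001/api/v1/namespaces   # lists namespaces via the proxied API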
Managing a Kubernetes cluster with kubectl#
Important
To provide access to the cluster API server, allow access via port 6443.
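Since K2 Cloud security groups are managed through an EC2-compatible API, a rule opening port 6443 can be added, for example, with the AWS CLI. This is only a sketch: the endpoint URL, security group ID, and source CIDR below are placeholders to replace with your own values:
aws ec2 authorize-security-group-ingress \
    --endpoint-url <K2-Cloud-API-endpoint> \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 6443 \
    --cidr 203.0.113.0/24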
To manage the cluster, install kubectl on your local computer by running the following commands:
Download the kubectl client.
On Linux:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client   # verify the installation
On Windows:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.0/bin/windows/amd64/kubectl.exe
Move the executable file to a directory listed in the PATH environment variable and check the kubectl installation result:
kubectl version --client
Attention
Docker Desktop for Windows adds its own kubectl version to the PATH environment variable. Check for Docker Desktop on your local computer. If it is installed, either put the path to the downloaded binary before the entry added by the Docker Desktop installer, or remove the kubectl that comes with Docker Desktop.
Download the cluster configuration file from the cloud console to your local computer.
Set the local environment variable.
On Linux:
export KUBECONFIG=<config-file>
On Windows:
set KUBECONFIG=<config-file>
Start the proxy server:
kubectl proxy &
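At this point, it is worth verifying that kubectl can actually reach the cluster; both commands below only read state and are safe to run:
kubectl cluster-info        # shows the API server endpoint
kubectl get nodes -o wide   # lists master and worker nodes with their statuses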
The “High-Availability cluster” mode#
In this mode, a cluster starts in a configuration with three master nodes. Kubernetes cluster master nodes can be deployed in either three availability zones or a placement group within an availability zone. Distribution across multiple physical computing nodes allows the cluster to remain operational if one master node fails.
If an Elastic IP has been assigned to the failed node, it will be reassigned to a healthy master node.
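To see how the master nodes are distributed, you can list the control plane nodes together with their availability zones. The sketch below assumes the standard node role and topology labels; on older Kubernetes versions the role label may be node-role.kubernetes.io/master instead:
kubectl get nodes -l node-role.kubernetes.io/control-plane -L topology.kubernetes.io/zone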
Usage restrictions#
The following default quotas are allocated for Kubernetes clusters:
5 Kubernetes clusters per project in total, regardless of type;
5 node groups for all EKS clusters in a project (the node group quota is allocated within the scope of the Auto Scaling quota).
Quotas for other resources are allocated within the scope of overall quotas for cloud resources (instances, volumes, etc.). If you need higher quotas, contact K2 Cloud support.