Kubernetes clusters#
General information#
Kubernetes is an open-source container orchestration platform for deploying, scaling, and managing containerized applications. Originally designed by Google, it has become the standard for container orchestration and the flagship project of the Cloud Native Computing Foundation, supported by Google, AWS, Microsoft, IBM, Intel, Cisco, and Red Hat.
In K2 Cloud, Kubernetes creates an abstraction layer on top of a group of instances and makes it easy to deploy and run applications with a microservice architecture.
For more information about Kubernetes, see our blog and the official website.
Glossary#
Cluster is a basic Kubernetes element. K2 Cloud supports Kubernetes clusters based on Elastic Kubernetes Service (EKS). These clusters include master nodes and Ingress controllers, while node groups are created separately.
Master node is a control plane node that hosts the service applications the cluster needs to operate.
Worker node is a compute node where user tasks are performed.
Node group is a group of compute nodes managed by an EKS cluster. The number of nodes in a group can change dynamically depending on the workload. To create node groups, use the Auto Scaling service.
Pod is a group of one or more containers that share the network, IP address, and other resources (storage, labels).
Worker node labels are key-value pairs assigned to worker nodes. They are similar to tags, but are defined in the Kubernetes API paradigm. You can, for example, specify how workers are used (environment, release, etc.) with the help of labels.
Taints are applied to all worker nodes in a group. They prevent the scheduler from placing pods on these nodes unless the pods have matching tolerations.
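For illustration, in plain Kubernetes terms, labels and taints can be set with kubectl (in K2 Cloud, taints are applied to all nodes in a group via the node group settings). A minimal sketch, where the node name worker-1 and all key-value pairs are hypothetical:
# Assign labels to a worker node, e.g., to mark its environment and release
kubectl label nodes worker-1 environment=staging release=v2
# Taint the node so the scheduler avoids it for pods without a matching toleration
kubectl taint nodes worker-1 dedicated=staging:NoSchedule
A pod that should be allowed on such a node then declares a toleration:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: staging-pod
spec:
  containers:
    - name: app
      image: nginx
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "staging"
      effect: "NoSchedule"
EOF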
Kubernetes clusters in K2 Cloud#
You can quickly launch containerized applications integrated with K2 Cloud services. This allows you to efficiently distribute traffic and scale clusters in a secure and stable cloud infrastructure. You can manage security groups, link Kubernetes clusters to existing instances, use object storage, and configure VPN connections between your infrastructure and Kubernetes clusters.
You can manage the service via the web interface or API.
Note
Resources required for Kubernetes clusters are prepared and maintained by a system user. The Action Log records all API calls to the Kubernetes Clusters service, including system requests to create, modify, and delete resources required for the service to run.
K2 Cloud supports the following Kubernetes versions:
1.33.1;
1.32.5;
1.31.9;
1.30.2.
The following additional services can be installed in the Kubernetes Clusters service:
Ingress controller, which can be used to route external requests to applications deployed in Kubernetes.
EBS provider allows Kubernetes to manage volumes in K2 Cloud and use them as Persistent Volumes (see the sketch after this list).
NLB Provider automatically deploys the network load balancer in K2 Cloud if a load balancer is specified in Kubernetes. This allows you, among other things, to improve fault tolerance by distributing TCP traffic among the Kubernetes cluster pods.
Docker Registry is set up for use in a Kubernetes cluster. In Docker Registry, you can store container images for later deployment in Kubernetes.
ELB Provider simplifies the deployment of load balancers for Kubernetes clusters in K2 Cloud. It can be used to create load balancers for distributing both Internet and internal traffic.
Kubernetes Dashboard provides a web interface for working with Kubernetes. The dashboard can be used to deploy containerized applications, troubleshoot applications, manage cluster resources, and more.
Cluster Autoscaler automatically scales node groups based on current load. It monitors whether all scheduled pods are deployed and whether there are any underutilized nodes. If necessary, Cluster Autoscaler adds nodes to the cluster or removes no-longer-needed ones.
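With the EBS and NLB providers installed, ordinary Kubernetes objects are enough to have the corresponding cloud resources provisioned. A minimal sketch, where all names are hypothetical and the storage class name depends on how the EBS provider is configured in your cluster:
kubectl apply -f - <<'EOF'
# PersistentVolumeClaim: with the EBS provider installed, the claim
# is backed by a K2 Cloud volume ("ebs-sc" is a placeholder class name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 10Gi
---
# Service of type LoadBalancer: with the NLB provider installed, a network
# load balancer is deployed in K2 Cloud to spread TCP traffic across the pods
apiVersion: v1
kind: Service
metadata:
  name: app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF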
When do you need the Kubernetes Clusters service?
If you need fast deployment and scaling of development environments.
If you have an infrastructure with frequent changes and releases.
If your workload fluctuates with the number of users.
If time to market is important for you.
If you have applications with microservice architecture.
Before you begin#
To work with the Kubernetes Clusters service, a user needs the EKSFullAccess project grants. For instance, such grants are available to administrators in the CloudAdministrators group. If necessary, you can create a separate user, add the user to the project, and either attach the EKSFullAccess policy to it or add the user to the group of cloud administrators in this project.
In addition, the following must be set up in the project:
If you need the EBS, ELB, or Cluster Autoscaler provider, go to the IAM section and create a user for each provider, add these users to the project, and assign them the EKSCSIPolicy, EKSNLBPolicy, and EKSClusterAutoscalerPolicy policies, respectively. Alternatively, you can create one service user to work with all the providers and assign it EKSClusterPolicy.
Managing a Kubernetes cluster#
To work with a Kubernetes cluster, you can use any familiar tools: the kubectl command-line tool, Draft, Helm, Terraform, etc. See the official Kubernetes documentation for more information on Kubernetes tools.
To start working with the cluster directly from a master node, connect to it manually via SSH and do the following:
Set the KUBECONFIG environment variable:
export KUBECONFIG=/root/.kube/config
Start the proxy server:
kubectl proxy &
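You can then make sure the cluster responds. A quick check (kubectl proxy listens on 127.0.0.1:8001 by default):
# List the cluster nodes and system pods
kubectl get nodes -o wide
kubectl get pods -n kube-system
# The Kubernetes API is now also reachable through the proxy
curl http://127.0.0.1:8001/version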
Managing a Kubernetes cluster with kubectl#
Important
To provide access to the cluster API server, allow access via port 6443.
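For example, if you manage security groups via an EC2-compatible CLI, the rule could look as follows (the security group ID and the source CIDR are placeholders; the same rule can also be added in the web interface):
# Allow inbound TCP connections to the Kubernetes API server port
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 6443 \
    --cidr 203.0.113.0/24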
To manage the cluster, install kubectl on your local computer by running the following commands:
Download and install the kubectl client.
On Linux:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.0/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client   # verify the installation
On Windows:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.0/bin/windows/amd64/kubectl.exe
Move the executable file to a directory listed in the PATH environment variable and check the kubectl installation result:
kubectl version --client
Docker Desktop for Windows adds its own kubectl version to the PATH environment variable. Check whether Docker Desktop is installed on your local computer. If it is, either place the path to the newly installed binary before the entry added by the Docker Desktop installer, or remove the kubectl installed with Docker Desktop.
Download the cluster configuration file from the cloud console to your local computer.
Set the local KUBECONFIG environment variable.
On Linux:
export KUBECONFIG=<config-file>
On Windows:
set KUBECONFIG=<config-file>
Start the proxy server:
kubectl proxy &
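After that, you can verify that kubectl on the local computer reaches the cluster. A minimal check:
# Show the API server address and confirm the nodes are visible
kubectl cluster-info
kubectl get nodes
# The API is also available through the running proxy (default port 8001)
curl http://127.0.0.1:8001/api/v1/namespaces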
The “High-Availability cluster” mode#
In this mode, a cluster starts in a configuration with three master nodes. Kubernetes cluster master nodes can be deployed in either three availability zones (if available in the region) or a placement group within an availability zone. Distribution across multiple physical computing nodes allows the cluster to remain operational if one master node fails.
If an Elastic IP has been assigned to the failed node, it will be reassigned to a healthy master node.
Usage restrictions#
The following default quotas are allocated for Kubernetes clusters:
5 Kubernetes clusters per project in each region;
5 node groups for all clusters in a project (node group quota is allocated within the scope of the Auto Scaling quota).
Quotas for other resources are allocated within the scope of overall quotas for cloud resources (instances, volumes, etc.). If you need to extend the quotas, contact the support service.