Operations with a classic cluster#
Attention
This section describes how to work with the previous version of Kubernetes clusters in K2 Cloud. This version is no longer supported, and new classic clusters cannot be created. To create new Kubernetes clusters, use EKS clusters.
In the web interface of the Kubernetes clusters service, you can perform the following actions:
Changing cluster parameters#
Change the number of worker nodes in the cluster#
You can change the number of worker nodes in the cluster, if necessary. To do this, go to the cluster page and edit the Number of worker nodes field in the Information tab. In the dialog window, specify the number of worker nodes to add or remove and, if necessary, change the instance type for the added nodes. The new instance type applies only to the added nodes; existing worker nodes keep their current type.
If you increase the number of worker nodes, new instances are created, the required cluster components are installed on them, and the cluster is reconfigured to accommodate the new nodes. During this process, the cluster is in the Not Ready state; after the process completes successfully, the cluster returns to the Ready state.
If you reduce the number of worker nodes, the nodes launched first are deleted first. The required number of nodes is switched to maintenance mode and then removed from the cluster via the Kubernetes API. After that, the freed instances are deleted. During this process, the cluster is in the Not Ready state; after the process completes successfully, the cluster returns to the Ready state.
Note
To guarantee that a particular worker node survives scale-down, reduce the cluster size by explicitly selecting which node to delete.
If the attempt to change the number of worker nodes fails, the cluster will continue to operate. A record with the failure details will be displayed on the Warnings tab.
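You can also watch the scaling progress from inside the cluster. Below is a minimal sketch, assuming a working kubeconfig for the cluster and the official kubernetes Python client (not part of the K2 Cloud interface), that lists the nodes and their readiness so you can confirm when the cluster is back in the Ready state:

```python
# Minimal sketch: list cluster nodes and their Ready condition.
# Assumes the official `kubernetes` Python client and a kubeconfig
# for this cluster (e.g. ~/.kube/config).
from kubernetes import client, config

config.load_kube_config()  # use the cluster's kubeconfig
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```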
Enable/disable certificate auto-renewal#
A Kubernetes cluster uses a public key infrastructure (PKI) to securely exchange messages between its components. For the cluster to operate normally, the certificates in use must be renewed in a timely manner.
The cluster-manager utility regularly checks the lifetime of certificates. If the remaining lifetime is less than two weeks, warnings are sent to the user.
You can renew certificates yourself or enable the Certificate auto-renewal option. To enable or disable certificate auto-renewal on cluster master nodes, go to the cluster page and change the value of the Certificate auto-renewal field in the Information tab.
By default, certificate auto-renewal is available for all new clusters. If you have Kubernetes clusters where automatic renewal is not available (the Certificate auto-renewal field is not displayed) but you want to use this option, submit a request on the support portal or email support@k2.cloud.
Attention
When certificate auto-renewal is enabled, do not renew certificates manually: if an error occurs, the cluster may fail.
When automatic renewal is disabled, monitor certificate lifetimes yourself and renew the certificates when necessary.
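If auto-renewal is disabled, one way to monitor the remaining lifetime is to inspect the API server certificate from outside the cluster. A minimal sketch, assuming the third-party cryptography package; the host and port below are placeholders for your cluster's API endpoint:

```python
# Minimal sketch: check when the Kubernetes API server certificate
# expires. The host and port are placeholders; requires the
# third-party `cryptography` package.
import ssl
import datetime
from cryptography import x509

API_HOST = "203.0.113.10"  # placeholder: your cluster API address
API_PORT = 6443            # default Kubernetes API server port

pem = ssl.get_server_certificate((API_HOST, API_PORT))
cert = x509.load_pem_x509_certificate(pem.encode())

days_left = (cert.not_valid_after - datetime.datetime.utcnow()).days
print(f"API server certificate expires in {days_left} days")
if days_left < 14:
    print("Warning: less than two weeks left, renew the certificates")
```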
Edit user data#
User data can be changed after the Kubernetes cluster has been created. This is useful, in particular, to run new scripts automatically on the additional nodes launched when the cluster is scaled. You can edit user data in the Information tab on the cluster page: click the edit icon next to User data, then select the data type and enter the data in the dialog window.
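For example, user data in the cloud-config format could install an extra package and log the bootstrap time on every newly launched node. This is purely illustrative; the package and log path are placeholders:

```
#cloud-config
# Illustrative example only: runs on each newly launched node.
package_update: true
packages:
  - htop            # placeholder: any extra package your nodes need
runcmd:
  - echo "node bootstrapped at $(date)" >> /var/log/bootstrap.log
```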
Deleting resources#
Delete a worker node#
If a worker node is no longer needed (for example, after you have moved all its pods to other nodes; see the sketch after the steps below), it can be deleted. A worker node can be deleted only when the Kubernetes cluster has the Running status and is in the Ready state.
Note
A worker node cannot be deleted if the Scheduling disabled option is enabled for all other worker nodes.
Attention
Before deleting a worker node, make sure that there are no pods to which Persistent Volumes are bound. Deleting a worker node with such volumes may make the applications that use them temporarily unavailable. Recovery can take up to 10 minutes while Kubernetes mounts the volumes on another worker node.
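To verify this, you can list the pods on the node that mount Persistent Volume Claims. A minimal sketch using the official kubernetes Python client; the node name is a placeholder:

```python
# Minimal sketch: find pods on a node that mount Persistent Volume
# Claims. Assumes the official `kubernetes` Python client and a valid
# kubeconfig; the node name is a placeholder.
from kubernetes import client, config

NODE_NAME = "worker-node-1"  # placeholder: the node you plan to delete

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(
    field_selector=f"spec.nodeName={NODE_NAME}"
)
for pod in pods.items:
    claims = [
        v.persistent_volume_claim.claim_name
        for v in (pod.spec.volumes or [])
        if v.persistent_volume_claim is not None
    ]
    if claims:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {claims}")
```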
To delete a worker node:
Go to Kubernetes clusters → Clusters.
Find the cluster in the table and click the cluster ID to go to its page.
Open the Instances tab.
In the resource table, select the instance on which the worker node is deployed.
Click Delete and confirm the action in the dialog window.
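If pods are still running on the node, you can move them off first by cordoning the node and evicting its pods, similar to what kubectl drain does. A hedged sketch for recent versions of the official kubernetes Python client; the node name is a placeholder, DaemonSet pods are skipped naively, and PodDisruptionBudget retries are omitted:

```python
# Minimal sketch: cordon a node and evict its pods, roughly what
# `kubectl drain` does. Assumes the official `kubernetes` Python client
# and a valid kubeconfig; the node name is a placeholder.
from kubernetes import client, config

NODE_NAME = "worker-node-1"  # placeholder

config.load_kube_config()
v1 = client.CoreV1Api()

# Cordon: mark the node unschedulable.
v1.patch_node(NODE_NAME, {"spec": {"unschedulable": True}})

pods = v1.list_pod_for_all_namespaces(
    field_selector=f"spec.nodeName={NODE_NAME}"
)
for pod in pods.items:
    owners = pod.metadata.owner_references or []
    if any(o.kind == "DaemonSet" for o in owners):
        continue  # DaemonSet pods cannot be rescheduled elsewhere
    eviction = client.V1Eviction(
        metadata=client.V1ObjectMeta(
            name=pod.metadata.name,
            namespace=pod.metadata.namespace,
        )
    )
    v1.create_namespaced_pod_eviction(
        name=pod.metadata.name,
        namespace=pod.metadata.namespace,
        body=eviction,
    )
```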
Deleting a Kubernetes cluster#
Deleting a cluster deletes all instances created for it, including the instances created for additional services. Volumes created by the EBS-provider become available for deletion in the Volumes subsection of the Storage section.
To delete a Kubernetes cluster and its related services (Container Registry, EBS-provider), go to Kubernetes clusters → Clusters, select the cluster, and click Delete.
Attention
When you delete a cluster, the volume with the Docker Registry images will also be deleted!
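Since the cloud exposes an EC2-compatible API, any leftover volumes can also be inspected programmatically. A hedged sketch with boto3; the endpoint URL and region are placeholder assumptions, so verify them against the current K2 Cloud API documentation:

```python
# Hedged sketch: list leftover volumes via the cloud's EC2-compatible
# API using boto3. The endpoint URL and region are placeholders; check
# the current K2 Cloud API documentation before use.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://api.example.k2.cloud",  # placeholder endpoint
    region_name="region-1",                       # placeholder region
)

# List detached volumes; adjust the filter to however your volumes
# are actually labelled.
response = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for volume in response["Volumes"]:
    print(volume["VolumeId"], volume["Size"], volume["State"])
```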