Extra services#
Ingress Controller#
If you choose the Ingress Controller option when creating a cluster, an extra worker node will be deployed in the cluster. It allows you to configure access to applications running inside the cluster through a single entry point. Create an Ingress resource to make a service available via the Ingress Controller.
The example below assumes a cluster configuration in which the Ingress Controller has the Elastic IP 185.12.31.211. The goal is to grant access to an application deployed in the cluster via http://185.12.31.211/example.
Example of the service configuration in a cluster
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    k8s-app: example
spec:
  containers:
  - name: example-app
    image: quay.io/coreos/example-app:v1.0
  imagePullSecrets:
  - name: regcred
---
kind: Service
apiVersion: v1
metadata:
  name: example-service
  namespace: default
spec:
  selector:
    k8s-app: example
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer
To make your application available via http://185.12.31.211/example, open port 80 and create the Ingress configuration:
Example of the Ingress configuration (networking.k8s.io/v1beta1 API)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          serviceName: example-service
          servicePort: 80
Example of the Ingress configuration (networking.k8s.io/v1 API)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
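After saving one of these configurations to a file, you can apply it and check that the Ingress resource has been created; a minimal sketch, assuming the file is named example-ingress.yaml:
# example-ingress.yaml is an assumed file name for the manifest above
kubectl apply -f example-ingress.yaml
kubectl get ingress example-ingress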
HTTPS setup on Ingress Controller#
This instruction will help to ensure security of services that process sensitive data, since HTTPS connections are an important part of a secure web service and guarantee data confidentiality and integrity.
In order to use HTTPS on your Ingress Controller, you need to have:
A domain name for the Elastic IP associated with the Ingress Controller.
A TLS private key and certificate.
In this example, we will extend the Ingress configuration from the instruction above. To protect the Ingress, specify a Kubernetes secret that contains the TLS private key tls.key and the certificate tls.crt.
Example of the Kubernetes Secret configuration
apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: ingress-nginx
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
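Instead of writing this manifest by hand, the same Secret can also be created directly from the certificate and key files; a sketch assuming the files tls.crt and tls.key are in the current directory:
# assumes tls.crt and tls.key exist locally
kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key -n ingress-nginx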
TLS will not work with the default rule, so the following changes should be made to the Ingress configuration:
Required changes in the Ingress configuration (networking.k8s.io/v1beta1 API)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - "Your Domain"
    secretName: tls-secret
  rules:
  - host: "Your Domain"
    http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          serviceName: example-service
          servicePort: 80
Required changes in the Ingress configuration (networking.k8s.io/v1 API)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - "Your Domain"
    secretName: tls-secret
  rules:
  - host: "Your Domain"
    http:
      paths:
      - path: /example
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80
After applying these configurations, you will be able to use the secure HTTPS protocol for the Ingress Controller.
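To verify that the certificate is served correctly, you can, for example, request the application over HTTPS, substituting your actual domain name:
curl -v https://<Your Domain>/example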
EBS-provider#
If you select the EBS-provider option when creating a cluster, a dedicated service will be deployed in the cluster. It allows Kubernetes to manage volumes in K2 Cloud and use them as Persistent Volumes. The service can work with existing volumes or create new ones itself.
The created volumes will be available in the Volumes subsection of the Storage section.
The EBS provider supports the following Kubernetes versions: 1.30.2, 1.29.4, 1.28.9, 1.27.3, 1.26.6 and 1.25.11.
To use volumes as Persistent Volumes in Kubernetes, you need to describe the following configurations:
Storage Class describes the class of storage. More information on Storage Classes can be found in the official documentation.
Persistent Volume describes the volume that is attached.
Persistent Volume Claim is a request for a Persistent Volume that describes the required volume parameters. If a Persistent Volume with the same or better parameters is found, Kubernetes will use it.
Usage scenario for an existing volume in K2 Cloud#
To use an existing volume in Kubernetes as a Persistent Volume, specify ebs.csi.aws.com in the driver field and the volume ID in the volumeHandle field:
Example of using an existing volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 48Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-static
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-9991C120
    fsType: xfs
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: topology.ebs.csi.aws.com/zone
          operator: In
          values:
          - ru-msk-vol51
Note that the nodeAffinity section (the last lines of the example above) must specify the availability zone in which the volume was created. The volume and the instance must also be located in the same availability zone; otherwise, the volume cannot be attached to the instance.
To use this volume later, you just need to create a Persistent Volume Claim whose parameters match those of the volume and reference it in the required resource. The storageClassName of this claim must match the one specified in the Persistent Volume.
Configuration example for creating a pod with a volume of at least 20 GiB
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-static
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: static-claim
Scenario with creating new volumes#
To create new volumes, specify ebs.csi.aws.com in the provisioner field of the Storage Class. In the parameters field, you can specify parameters for the created volumes:
Parameter | Valid values | Default value | Description
---|---|---|---
csi.storage.k8s.io/fsType | xfs, ext2, ext3, ext4 | ext4 | File system in which the new volume will be formatted
type | io2, gp2, st2 | gp2 | Volume type
iopsPerGB | | | IOPS per gibibyte. Required for io2 volumes
If a parameter is not specified, the default value is used.
A configuration example
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-dynamic
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: io2
  iopsPerGB: "50"
  encrypted: "true"
When new volumes are created, a Persistent Volume is created for each Persistent Volume Claim. For EBS, Persistent Volumes in K2 Cloud support accessModes only with the ReadWriteOnce value (see the Kubernetes documentation).
Request example for creating a pod with a 4 GiB volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-dynamic-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 4Gi
When a pod that uses this claim is created, Kubernetes will automatically create a 4 GiB volume in the cloud with the parameters specified in the Storage Class and attach it to the pod.
Pod configuration example
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-dynamic-claim
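After the pod is scheduled, you can verify that the claim has been bound and a volume was created in the cloud, for example:
kubectl get pvc ebs-dynamic-claim
kubectl get pv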
Scenario with snapshots#
To take volume snapshots, first create a pod with a volume and a Storage Class, as well as a Volume Snapshot Class.
Example of Volume Snapshot Class configuration for Kubernetes versions 1.20.9 and 1.18.2
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: aws-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
Example of Volume Snapshot Class configuration for supported Kubernetes versions.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-aws-vsc
driver: ebs.csi.aws.com
deletionPolicy: Delete
parameters:
  csi.storage.k8s.io/snapshotter-secret-name: aws-secret
  csi.storage.k8s.io/snapshotter-secret-namespace: kube-system
When you apply this configuration, a VolumeSnapshotClass object is created automatically. The same credentials are used for authorization in the cloud as for the EBS provider. In addition, you will need a Volume Snapshot.
Example of Volume Snapshot configuration for Kubernetes versions 1.20.9 and 1.18.2
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot-2
  namespace: default
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: ebs-dynamic-claim
Example of Volume Snapshot configuration for supported Kubernetes versions.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot-2
  namespace: default
spec:
  volumeSnapshotClassName: csi-aws-vsc
  source:
    persistentVolumeClaimName: ebs-dynamic-claim
If you use this request to create a Volume Snapshot, a VolumeSnapshot object is created and a volume snapshot is automatically taken in the cloud from the current state of the Persistent Volume Claim in the Kubernetes cluster. You can now use this Volume Snapshot as a data source (dataSource) for a Persistent Volume Claim.
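Before referencing the snapshot in a claim, you can check that it has been created and is ready to use, for example:
kubectl get volumesnapshot ebs-volume-snapshot-2 -n default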
An example of Persistent Volume Claim configuration
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-restore-claim
spec:
  dataSource:
    name: ebs-volume-snapshot-2
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 32Gi
An example of Persistent Volume Claim configuration in the pod configuration
apiVersion: v1
kind: Pod
metadata:
  name: app-restored
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage-restored
      mountPath: /data
  volumes:
  - name: persistent-storage-restored
    persistentVolumeClaim:
      claimName: ebs-restore-claim
Scenario with increasing disk size#
To make it possible to increase the volume size, specify the allowVolumeExpansion field with the value true in the Storage Class configuration (see https://kubernetes.io/docs/concepts/storage/storage-classes/).
The file system size can only be changed for the xfs, ext3 and ext4 file systems.
An example of a pod configuration with a dynamically created volume of 8 GiB, which can be increased in size
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ebs-dynamic
provisioner: ebs.csi.aws.com
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/fstype: xfs
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-dynamic-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ebs-dynamic
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: ebs-dynamic-claim
To request an increase in the size of a previously created volume, edit the spec.resources.requests.storage field in the Persistent Volume Claim configuration. The new value must be larger than the current volume size and a multiple of 8 GiB.
The Persistent Volume Claim configuration can be edited with the command:
kubectl edit pvc ebs-dynamic-claim
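Alternatively, the same change can be made non-interactively with kubectl patch; a minimal sketch assuming a new size of 16 GiB (a multiple of 8 GiB, as required above):
# 16Gi is an assumed new size chosen for illustration
kubectl patch pvc ebs-dynamic-claim -p '{"spec":{"resources":{"requests":{"storage":"16Gi"}}}}'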
It takes some time to change the volume size. You can find out the result by querying the current Persistent Volume Claim configuration:
kubectl get pvc ebs-dynamic-claim -o yaml
After the operation is complete, the status.capacity.storage
field must contain the new volume size.
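For example, you can print only the current size with a jsonpath query:
kubectl get pvc ebs-dynamic-claim -o jsonpath='{.status.capacity.storage}'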
Installation of EBS-provider in your Kubernetes Cluster#
You can install the EBS provider separately from the cloud service.
To do this, create a Secret with the credentials of the user on whose behalf operations with the cloud will be performed:
A configuration example for a secret
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
  namespace: kube-system
stringData:
  key_id: "<AWS_ACCESS_KEY_ID>"
  access_key: "<AWS_SECRET_ACCESS_KEY>"
For correct operation, the user whose credentials appear in the key_id and access_key fields must have the necessary privileges in the infrastructure service.
Note
The EKSCSIPolicy policy grants all necessary permissions.
To check or update the set of actions available to a user, go to the IAM section, open the Projects tab on the user page and click Configure next to the corresponding project. If the list of privileges lacks actions such as create_snapshot, delete_snapshot or describe_snapshots (needed to use volume snapshots) or modify_volume (needed to increase volume size), add them.
To expand the available grants, you can also remove the user from the project, re-add them and assign the EKSCSIPolicy policy. However, in this case the EBS providers already running in deployed clusters will stop working.
After setting the required privileges, apply the configuration (for version 1.25.11 as an example):
kubectl apply -f https://s3.k2.cloud/kaas/latest/deployment/1.25.11/ebs/ebs.yaml
If the installation is successful (pods with the ebs-csi-* prefix in their names are launched), K2 Cloud volumes become available for use in Kubernetes.
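One way to check this is to list the pods (the exact namespace depends on your cluster configuration):
kubectl get pods -A | grep ebs-csi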
To use snapshots, follow these steps:
Apply the configuration (for version 1.25.11 as an example):
kubectl create -f https://s3.k2.cloud/kaas/latest/deployment/1.25.11/ebs/crd/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f https://s3.k2.cloud/kaas/latest/deployment/1.25.11/ebs/crd/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl create -f https://s3.k2.cloud/kaas/latest/deployment/1.25.11/ebs/crd/snapshot.storage.k8s.io_volumesnapshots.yaml
kubectl create -f https://s3.k2.cloud/kaas/latest/deployment/1.25.11/ebs/snapshot-controller/rbac-snapshot-controller.yaml
kubectl create -f https://s3.k2.cloud/kaas/latest/deployment/1.25.11/ebs/snapshot-controller/setup-snapshot-controller.yaml
If the installation is successful (a pod with the snapshot-controller* prefix in its name is launched), you will be able to create snapshots in K2 Cloud for volumes used as Persistent Volume Claims in Kubernetes.
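To make sure the snapshot CRDs are registered and the controller pod is running, you can run, for example:
kubectl get crd | grep snapshot.storage.k8s.io
kubectl get pods -A | grep snapshot-controller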
Docker Registry#
Docker Registry is a scalable server application that stores Docker images and allows you to distribute and use them. If you selected the Docker Registry service when creating a cluster, it will be installed on a master node.
To upload images from a local computer to the Docker Registry, install Docker.
After installing it, run the following command and enter your password:
docker login <docker-registry IP address>
Then upload images after tagging them with a tag that starts with <docker-registry IP address>:5000/. For example, for the existing image quay.io/coreos/example-app:v1.0, the tag will be:
docker tag quay.io/coreos/example-app:v1.0 185.12.31.211:5000/example-app:v1.0
docker push 185.12.31.211:5000/example-app:v1.0
Later, you can use the private IP of the Docker Registry instead of the public one, and vice versa.
Use the regcred credentials configured in the cluster to create a pod from an uploaded image:
A configuration example
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example-app
    image: '172.31.0.4:5000/example-app:v1.0'
  imagePullSecrets:
  - name: regcred
NLB Provider#
The NLB Provider allows you to deploy a load balancer for a Kubernetes cluster in K2 Cloud. You can create a load balancer to distribute both internet and internal traffic (for details, see official Kubernetes documentation). The load balancer can be connected to Kubernetes clusters either created using the corresponding K2 Cloud service or deployed by you in the K2 Cloud infrastructure.
Important
In K2 Cloud, only network load balancers can be used with Kubernetes clusters. By default, the only available target type (target-type) is instance.
Configuring a Kubernetes cluster to work with a load balancer#
To create a load balancer in Kubernetes, add the following manifest for the Service object:
Service configuration example for an internal load balancer
apiVersion: v1
kind: Service
metadata:
  name: my-webserver
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
or:
Service configuration example for an internet-facing load balancer
apiVersion: v1
kind: Service
metadata:
  name: my-webserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Important
When a load balancer is created, an inbound rule for the load balancer port is added to the security group that was automatically created with the Kubernetes cluster. This rule allows TCP traffic from any IP address. To prevent this rule from being added, include the following annotation in the Service manifest: service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false".
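For illustration, this is the internet-facing Service from the example above with this annotation added (a sketch; adjust the names and ports to your setup):
apiVersion: v1
kind: Service
metadata:
  name: my-webserver
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "false"
spec:
  selector:
    app: web
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer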
For the load balancer to know where to route traffic, create a deployment in Kubernetes. The label specified in the deployment must match the selector of the load balancer service (web in this example). If you do not create a target deployment for the load balancer, the service in Kubernetes, and thus the cloud load balancer, will not be created.
A Deployment configuration example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 2
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: webserver
        image: nginx:latest
        ports:
        - containerPort: 80
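After both the Service and the Deployment are applied, you can check that the cloud load balancer has been created and received an address, for example:
kubectl get service my-webserver
Once provisioning is complete, the EXTERNAL-IP column should show the address assigned to the load balancer.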
Connect NLB Provider to self-deployed Kubernetes cluster#
If you have deployed a Kubernetes cluster in K2 Cloud on your own, you can also use a cloud load balancer to distribute traffic between its pods. To install the K2 Cloud NLB provider in your Kubernetes cluster, perform the following steps (using a Kubernetes 1.30.2 cluster as an example):
Create an individual user and attach the EKSNLBPolicy policy to them, or create your own policy with the following permissions:
all privileges for the ec2 service;
privileges for the elb service.
Add a tag with the kubernetes.io/cluster/<cluster name> key and an empty value to the security group assigned to the cluster nodes in K2 Cloud.
On each node in the cluster (including nodes added in the future), set providerID using the command:
kubectl patch node <nodename> -p '{"spec": {"providerID": "cloud/<instance-id>"}}'
Apply the Deployment configuration for the certificate manager:
kubectl apply -f https://s3.k2.cloud/kaas/v20/deployment/1.30.2/nlb/cert-manager.yaml
Important
Wait for 120 seconds for the certificate manager to complete initialization before taking any further actions.
Download the Deployment configuration for the NLB secret, add the credentials of the previously created user to it, and apply the configuration using the command:
kubectl apply -f nlb-secret.yaml
For Deployment objects, download the following configurations:
https://s3.k2.cloud/kaas/v20/deployment/1.30.2/nlb/deployment_patch.yaml
https://s3.k2.cloud/kaas/v20/deployment/1.30.2/nlb/kustomization.yaml
Note
In the deployment_patch.yaml file, you may need to replace the endpoints and cluster-name with valid values.
To apply the configurations, run the command:
kubectl apply -k <directory>
where <directory> is the directory to which the files were downloaded.
A successful installation starts pods with the aws-load-balancer-* and cert-manager-* name prefixes, after which a K2 Cloud load balancer can be used with the Kubernetes cluster.
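One way to verify this is to list the pods (the exact namespaces depend on the configurations you applied):
kubectl get pods -A | grep -E 'aws-load-balancer|cert-manager'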
Kubernetes Dashboard#
Kubernetes Dashboard is a web-based Kubernetes user interface. The dashboard can be used to deploy containerized applications in a Kubernetes cluster, troubleshoot containerized applications, and manage cluster resources. It gives you visibility into the applications running in your cluster and lets you create or modify certain Kubernetes resources. For example, you can scale a deployment, apply the latest update, restart a pod, or deploy new applications.
By default, your Kubernetes cluster already has the Kubernetes Dashboard service running. To access it, do the following:
Set up kubectl according to the instructions.
Run kubectl proxy and then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/# in a browser.
Get a token for authorization. The token to access the Kubernetes Dashboard can be found in K2 Cloud: go to the Kubernetes Clusters section and open the Information tab.
If you have limited or no access to K2 Cloud, you can also get a token using the command:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
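On Kubernetes 1.24 and later, token Secrets are no longer created for service accounts automatically, so the command above may return nothing. In that case, assuming the admin-user service account from the command above exists, you can issue a short-lived token instead:
kubectl -n kubernetes-dashboard create token admin-user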