Logging#

General information#

The ELK cloud service provides centralized, automated collection of logs from PaaS services. It can also ingest logs from other systems. The service is based on the Elastic Stack, which consists of:

  • Elasticsearch to store and index logs;

  • Logstash to filter and process logs;

  • Kibana to visualize retrieved data.

Activity logs are integral to the monitoring of the overall system health. When running, applications and services log various debugging information, save error messages and warnings, and log actions and operations. This helps quickly identify service problems and analyze any abnormal situation, be it a failure or performance degradation. In addition, based on the data stored, you can make forecasts and take measures to prevent problems in the future.

Analyzing a problem can take a lot of time and effort if you manually scrutinize log files of different components from multiple servers and try to find correlations. Moreover, programs often log a lot of excessive data that is only needed under certain circumstances. In such cases, it is critical to be able to quickly find and filter data to extract the required information from different sources.

Kibana’s advanced capabilities allow you to perform a full-text search across all logs, select only desired time intervals, show/hide individual message fields, and count the number of events. This helps quickly and easily filter the required information, highlight it in search results, and, if necessary, visualize as charts and graphs.
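As an illustration, Kibana's query bar (KQL syntax) combines full-text terms with field-level filters. The field names below are hypothetical examples, not fields defined by the service:

```
error and not response:200
log.level:"error" and service.name:"billing"
message:"timeout"
```

Such queries can be combined with the time picker to narrow results to a specific interval.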

The ELK service is easy to use and scalable. It helps you monitor the health of various services in a timely manner and presents application performance data in a form convenient for analysis and decision-making, so you can optimize application performance and ensure service uptime through quick troubleshooting.

The ELK service supports the following Elasticsearch versions:

  • 7.11.2;

  • 7.12.1;

  • 7.13.1;

  • 7.14.2;

  • 7.15.2;

  • 7.16.3;

  • 7.17.4;

  • 8.0.1;

  • 8.1.3.

Before you begin#

To get started with the logging service, follow these steps:

  1. Create a project, if you don’t have one.

  2. In the IAM section, create a user, add it to the created project, and assign the PaaSServicePolicy policy to it to grant the privileges required to work with PaaS services.

    Note

    If you want to give the user access to other cloud services, then add this user to the CloudAdministrators group. In this case, assigning the PaaSServicePolicy policy is not required.

  3. Make sure that the project has all the required resources – subnets, SSH keys, and security groups. Otherwise, create them.

  4. Read the recommendations on how to work with the logging service in the cloud.

ELK service launch#

To launch the service, go to the Service store or Running services subsection, select ELK service in the Logging tab and click Create.

The service launch procedure comprises the following stages:

  1. Set the network parameters required for ELK service:

    • The configuration of the cluster in which the service will be deployed. The following options are available for selection:

      • single-node (non-high-availability service);

      • three nodes in one availability zone;

      • three nodes in three availability zones;

    • VPC where the service will be deployed.

    • Security groups to control traffic through interfaces of the instances on which ELK service will run.

    • Subnets to which instances with the running service will be attached, or network interfaces through which cluster nodes will be attached to subnets.

    Note

    To run the service in the selected VPC, you must first create a subnet in the preferred availability zone (in a configuration with one zone), or one subnet in each availability zone (in a configuration with three zones). In addition, the same volume types must be supported in the availability zones used.

    Note

    The ability to attach network interfaces may be useful, for example, when you need to recreate the cluster where the logging service has been deployed. If you delete a service, but do not delete attached network interfaces, you will be able to reuse them for connecting nodes of a new cluster to subnets, where the new cluster will be deployed. Thus, you can keep previous network settings, such as private IP addresses and security groups, rather than configure them again.

    • Internal and/or internet-facing load balancers when the three-node cluster configuration is selected (for details, see Load balancer management).

  2. Specify the configuration of the instance or instances where the data search and analysis service will run. Select the instance type and parameters of its volumes: type, size and IOPS (if available for the selected type).

    Note

    The ELK service performance depends on the resources of the cluster nodes. We recommend using the Memory Optimized instance type.

    In addition, you can specify an SSH key. In this case, after automatic service configuration, you will have SSH access to the respective instances.

    Attention

    We provide the option to connect to instances using an SSH key while the new ELK service is in beta testing. This feature may be disabled in the future.

    To enable a configuration with cluster arbitrator, tick the Use arbitrator checkbox.

  3. Set the main service parameters:

    • Service name – any unique name for the logging service.

    • Elasticsearch version.

    • Anonymous access in Kibana. When selecting this option, also select a role for anonymous users with viewing (viewer) or editing (editor) rights.

    • Monitoring agent installation option. For centralized monitoring of a logging service, first deploy the Prometheus-based monitoring service. Upon selecting this option, also select the monitoring service you want to use. Optionally, you can set monitoring labels, which the installed monitoring agents will assign to collected metrics (for details, see Using labels).

    • Elasticsearch superuser password. You can set it manually or generate automatically. If a password is set, authentication is required to log in to Kibana.

  4. Set the advanced settings, if necessary. Click Advanced settings and enter the settings and their values.

    Important

    The specified settings will be a part of the service configuration and, therefore, will affect its operation. Add only the settings you really need.

  5. Click Create.

    Note

    The service launching process usually takes 5 to 15 minutes.

Load balancer management#

For the ELK service in a high-availability configuration, you can create an internal and/or internet-facing load balancer (for details, see Load balancers). Load balancers automatically distribute incoming requests among fully functional cluster nodes (requests are not sent to the arbitrator, if there is one).

Load balancers are created automatically; their parameters and associated resources cannot be changed. Information about the created load balancers, in particular their DNS names, can be found in the Load balancers tab on the service page.

Attention

A load balancer running together with a PaaS service can be deleted only from the page of that service.

Important

To use the external load balancer, allow external access to the service ports, which is denied by default. Add an allow rule for the corresponding ports to the security group that was specified when the service was created. The ports the ELK service listens on are listed in the Information tab on its page.

Create a load balancer#

A logging service load balancer can only be created when the service is in Running status. You can create one internal and one external load balancer per service.

Important

You can create an internal load balancer only if route propagation is enabled in VPC.

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Load balancers tab and click Create.

  4. In the window that opens, select a balancer you want to create. If none has been created yet, you can create both internal and internet-facing balancers at once.

  5. Click Create to complete the action.

Delete a load balancer#

A load balancer associated with the service can be deleted only when the service is in the Running status.

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Load balancers tab and click Delete.

  4. In the window that opens, select a load balancer you want to delete. If two load balancers have been created for the service, you can delete both at the same time.

  5. Click Delete to confirm the action.

Processing the event logs#

The Logstash component allows you to filter, aggregate, and modify event logs before they are sent to Elasticsearch. You can configure a pipeline to pre-process logs automatically. The pipeline defines rules for processing incoming data and can contain input, filter, output, and codec blocks. Plugins in the input block generate events, filter plugins pre-process the data, and output plugins forward it to the final destination. A codec changes the data representation and can be applied to input and/or output to decode/encode data.

To create a pipeline, describe its configuration, namely, the plugins to be used along with their settings. For details about Logstash features, available plugins, and their configuration options, see the official documentation.

ELK service uses a preconfigured pipeline to automatically connect and process event logs from other PaaS services. It receives input data from filebeat agents and, depending on the settings and the version of Elasticsearch used, writes the data either to an index or to a data stream named <beatname>-<version>. The output plugin is configured to log to the Elasticsearch repository all events that have an index field in the @metadata.

You can use the preconfigured output plugin to log events from your own systems to ELK, even if such events are not generated by Beats agents (filebeat, metricbeat, auditbeat), as long as their metadata contains the index field. Alternatively, you can add your own plugins to output, but keep in mind that each new plugin opens a separate client connection to Elasticsearch. To use the ELK service more efficiently, minimize the number of individual connections: reuse existing output plugins instead of creating new ones.
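As a sketch, a pipeline that feeds your own events into the preconfigured output might look like this. This is an illustrative example, not the service's actual configuration; the port, field values, and index name are assumptions:

```
input {
  beats {
    port => 5044        # receive events from Beats agents (example port)
  }
}

filter {
  # Tag events with a target index so that an output plugin which
  # routes on the [@metadata][index] field will pick them up
  mutate {
    add_field => { "[@metadata][index]" => "myapp-logs" }
  }
}
```

Because the events now carry the index field in @metadata, they can be routed through the existing output plugin without opening a new Elasticsearch connection.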

Create a pipeline#

To set a pipeline configuration:

  • Go to the section PaaS Running services.

  • Open the Logging tab and click the ELK service name to go to its page.

  • Open the Pipelines tab and click Create.

  • In the dialog that opens, specify the pipeline name and configuration.

  • Click Create.

Modify a pipeline#

To modify a pipeline configuration:

  • Go to the section PaaS Running services.

  • Open the Logging tab and click the ELK service name to go to its page.

  • Open the Pipelines tab, select the pipeline from the list and click Modify.

  • Edit the pipeline configuration.

  • Click Save.

Delete a pipeline#

To delete a pipeline:

  • Go to the section PaaS Running services.

  • Open the Logging tab and click the ELK service name to go to its page.

  • Open the Pipelines tab, select a pipeline from the list, and click Delete. You can select multiple pipelines for deletion at the same time.

  • In the dialog window, confirm the action.

ELK service configuration#

If you did not enable monitoring when creating the logging service, or you want to disable it, you can do so when the service is in the Ready state.

Note

To enable monitoring, deploy the Prometheus-based monitoring service first.

Important

If an attempt to modify the monitoring settings fails, they will be reset to the defaults.

To enable monitoring for the logging service:

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Parameters tab and click Modify.

  4. In the window that opens, you can configure monitoring (or disable it if it’s already enabled).

  5. To save settings, click Modify.

Connecting snapshot repository#

The Elasticsearch built-in snapshot mechanism can back up indices, data streams, and the Elasticsearch cluster state. The resulting snapshots can be used to recover data after a failure or to migrate Elasticsearch data between ELK services. K2 Cloud automates the creation and registration of the snapshot repository for an ELK service.

Snapshots are stored in object storage buckets. Along with a repository, a directory is created in the bucket and mounted on each node of the Elasticsearch cluster as an s3fs file system. Thus, you can use the same bucket to store snapshots of different services and types. Storing snapshots as files simplifies snapshot sharing: users can upload their own snapshots for service recovery or download existing ones.

Cloud tools can be used only to connect a snapshot repository; to manage snapshots, use the Elasticsearch built-in tools. Snapshots can be created via the Elasticsearch snapshot API or Snapshot Lifecycle Management (SLM).
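For example, once the repository is registered, snapshots can be managed through the standard Elasticsearch API. The host, repository name, and policy ID below are placeholders, not values defined by the service:

```
# Take a one-off snapshot (placeholders in angle brackets)
curl -u elastic -X PUT \
  "https://<service-host>:9200/_snapshot/<repository>/snapshot_1?wait_for_completion=true"

# Or schedule nightly snapshots with an SLM policy
curl -u elastic -X PUT "https://<service-host>:9200/_slm/policy/nightly" \
  -H 'Content-Type: application/json' -d '{
    "schedule": "0 30 1 * * ?",
    "name": "<nightly-{now/d}>",
    "repository": "<repository>",
    "retention": { "expire_after": "30d", "min_count": 5, "max_count": 50 }
  }'
```

The retention block is optional; when set, SLM deletes expired snapshots automatically according to the given limits.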

Add a snapshot repository#

You can create a new snapshot repository or connect an existing one from another service. Adding a repository takes two steps: first create or connect it, and then register it in Elasticsearch.

The ability to connect an existing repository can be useful for recovering Elasticsearch data or migrating it from one ELK service to another. You can also reuse an existing repository if it is not registered to another service.

Note

If the repository is registered to another service, it can be connected as read-only. Only one repository can be connected to a service, so disconnect the repository after data recovery/migration and create a new one if you want to back up the service.

Add a new repository#

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Snapshot repositories tab and click Create.

  4. In the window that opens, set the following parameters (the Existing repository option should be disabled):

    • The bucket where the snapshot repository will be created.

    • User with PaaS Backup User rights; backups will be written to the bucket under this user.

    • The directory that will be mounted as the repository.

  5. Click Add to create the repository.

  6. Wait until the service status changes to Running and enable Repository registration.

Add a registered repository#

You can connect a registered repository to another service only as a read-only repository.

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Snapshot repositories tab and click Create.

  4. In the window that opens, select the Existing repository checkbox, and in the Recovery service field, select the service whose repository you want to connect.

  5. Click Add to create the repository.

  6. Wait until the service status changes to Running and enable Repository registration.

Add an unregistered repository#

The process of connecting an unregistered repository is similar to creating a new one. The difference is that in this case, you have to specify an existing directory previously used as the snapshot repository.

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Snapshot repositories tab and click Create.

  4. In the window that opens, set the following parameters (the Existing repository option should be disabled):

    • The bucket where the snapshot repository will be created.

    • User with PaaS Backup User rights; backups will be written to the bucket under this user.

    • The directory previously used as the repository.

  5. Click Add to create the repository.

  6. Wait until the service status changes to Running and enable Repository registration.

Disconnect a snapshot repository#

Disconnect the repository in the reverse order: first unregister the repository in the service, and then remove it. The file system is unmounted, but the snapshot directory is not deleted.

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. Open the Snapshot repositories tab and disable Repository registration.

  4. Wait until the service status changes to Running and click Delete.

  5. In the dialog window, confirm the deletion.

Environment upgrade#

PaaS services are updated regularly. If you want an already deployed ELK service to support new features, then update its environment. For the current environment version, see the Information tab on the service page.

Note

All services with environment version 3_6 and higher support environment update. It is also available for some previously deployed services with environment version 3_5. To check if you can update them, use the API method DescribeService: the response should contain the common:update_environment value in the SupportedFeatures list.

Important

If an attempt to update the environment fails, the environment will be reset to the default version.

To update the environment version:

  1. Go to PaaS Running services and open the Logging tab.

  2. Find the desired service in the table and click the service ID to go to its page.

  3. In the Information tab, click Update environment version.

  4. In the window that opens, select from the list the version to which you want to upgrade the current environment.

  5. Click Update to change the version.

Deleting ELK service#

Deleting ELK service deletes all instances and volumes created with it.

You can delete the service using one of the following methods.

  1. Go to the section PaaS Running services.

  2. Open the Logging tab.

  3. Find the service in the table and click the delete icon.

  1. Go to the section PaaS Running services.

  2. Open the Logging tab.

  3. Find the service in the table and go to the service page.

  4. Click Delete in the Information tab.