Usage recommendations#
This section provides guidelines for using message broker services.
Apache Kafka#
Service launch options#
High-availability service deployed in a cluster of six instances in one or three availability zones.
High-availability solution architecture#
There are two types of nodes in a Kafka cluster:
controllers (coordinators);
brokers (data servers).
Controllers maintain cluster quorum, store Kafka metadata, and replicate data. Even a single-node Kafka installation needs a controller; in that case, a single node takes on both roles.
Controllers can be of two types:
the traditional ZooKeeper controller;
the built-in Kafka KRaft controller, which has replaced ZooKeeper.
K2 Cloud supports the Kafka KRaft controller, which is free from some of the shortcomings of its predecessor.
For production environments, combining the controller and broker roles on one node is prohibited. Therefore, when the service is deployed, three separate coordinator nodes and three broker nodes are created. Controllers interact with each other using the Raft consensus protocol and ensure the functioning of the data nodes (brokers). Three brokers allow you to split topics into partitions and replicate those partitions across three availability zones (when the cluster is deployed in three availability zones).
Connection to the service#
Once deployed, Kafka listens for client connections on port tcp/9092 on each instance with the broker role. Controllers do not interact with clients and do not accept client connections.
In a clustered deployment, the application must be given the addresses of all cluster brokers. If any node fails, this allows automatic failover to a healthy cluster node. Most client programs and libraries that work with Kafka allow you to specify several connection points.
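As an illustration, a producer configured with all three broker addresses keeps working if one broker fails. This is a minimal sketch: the broker hostnames are placeholders for your cluster's addresses, and the third-party kafka-python package is assumed for the commented-out client calls.

```python
# Sketch: a Kafka client listing every broker as a bootstrap server,
# so it can fail over if one node becomes unavailable.
# Broker hostnames below are placeholders for your cluster's addresses.

def bootstrap_servers(hosts, port=9092):
    """Build the bootstrap server list from broker hostnames."""
    return [f"{h}:{port}" for h in hosts]

BROKERS = bootstrap_servers(["broker-1", "broker-2", "broker-3"])
# -> ["broker-1:9092", "broker-2:9092", "broker-3:9092"]

# With the kafka-python client (pip install kafka-python), any listed
# broker can bootstrap the connection:
#
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers=BROKERS)
# producer.send("my-topic", b"hello")
# producer.flush()
```

Listing all brokers matters only for the initial connection: after bootstrapping, the client discovers the full cluster topology on its own.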
Monitoring#
To monitor the service, we recommend enabling the Prometheus cloud monitoring service. If you want to use your own monitoring server, connect it to the Prometheus exporter kafka_exporter, which is installed with Apache Kafka on broker nodes and runs on port tcp/9308.
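A monitoring server scrapes the exporter's metrics in the Prometheus text format. The sketch below parses such a payload; the sample excerpt and the `kafka_brokers` metric name are assumptions about the exporter's output, and the broker hostname in the comment is a placeholder.

```python
# Sketch: read a metric from Prometheus text-format output, such as the
# kafka_exporter endpoint on tcp/9308 serves. The sample payload below is
# illustrative, not taken from a live cluster.

def parse_metrics(text):
    """Parse Prometheus exposition text into {metric_name: value} (label-free series only)."""
    metrics = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip HELP/TYPE comments and blank lines
        name, _, value = line.partition(" ")
        if "{" not in name:
            metrics[name] = float(value)
    return metrics

# On a live cluster you would fetch the payload instead:
#   import urllib.request
#   sample = urllib.request.urlopen("http://broker-1:9308/metrics").read().decode()
sample = """\
# HELP kafka_brokers Number of Brokers in the Kafka Cluster.
# TYPE kafka_brokers gauge
kafka_brokers 3
"""
print(parse_metrics(sample)["kafka_brokers"])  # prints 3.0
```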
RabbitMQ#
Service launch options#
Standalone instance in the selected availability zone.
High-availability service deployed in a cluster of three instances minimum in three availability zones.
High-availability solution architecture#
All nodes of a cluster running the high-availability RabbitMQ service are functionally equal. A node can be either a RAM node or a disc node. In the first case, its state is stored in RAM (except when persistent queues are used or when a queue grows too large for RAM). In the second case, the data is stored both on disk and in RAM.
To avoid data loss, the cluster should have at least one disc node. In K2 Cloud, a cluster includes two RAM nodes and one disc node. In addition, to ensure the highest availability, the replication policy is configured so that all queues are replicated to all cluster nodes (ha-mode = all, ha-sync-mode = automatic). Thus, the failure of two cluster nodes out of three will not result in data loss.
Authentication and security#
When deploying a new RabbitMQ service, set the admin password first to be able to connect to the RabbitMQ web interface and manage its settings. The admin user has read/write access to all queues in any virtual host (vhost) and may also grant or modify permissions for other users.
During service deployment, in addition to the admin user, a built-in guest user with equivalent rights is created. However, RabbitMQ allows the guest user to connect via localhost only, not remotely. You can leave the guest user as-is (making no changes to the default password or permissions), as no outsider can connect using this account.
Avoid using the admin account for working with the RabbitMQ service from your applications. Instead, create separate users and restrict their permissions accordingly. Take care not to lose the admin password, as this account has unrestricted privileges.
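Dedicated application users can be created through the RabbitMQ management HTTP API. The sketch below builds the request bodies for the user and permission endpoints; the host, user name, password, and the queue-name pattern are placeholders, and the actual HTTP calls are left commented out because they require a reachable cluster.

```python
# Sketch: creating a least-privilege application user via the RabbitMQ
# management HTTP API (tcp/15672). Host, credentials and the queue-name
# pattern below are placeholders.
import json
import urllib.request

def user_payload(password, tags=""):
    """Body for PUT /api/users/<name>; empty tags = no management-UI access."""
    return json.dumps({"password": password, "tags": tags})

def permissions_payload(pattern):
    """Body for PUT /api/permissions/<vhost>/<name>; regexes scope the access."""
    return json.dumps({"configure": pattern, "write": pattern, "read": pattern})

def put(url, body, auth_header):
    """Issue an authenticated PUT request with a JSON body."""
    req = urllib.request.Request(
        url, data=body.encode(), method="PUT",
        headers={"Content-Type": "application/json", "Authorization": auth_header})
    return urllib.request.urlopen(req)

# Example calls (commented out -- they require a reachable cluster);
# %2F is the URL-encoded default vhost "/":
# put("http://<vm-address>:15672/api/users/app1", user_payload("s3cret"), auth)
# put("http://<vm-address>:15672/api/permissions/%2F/app1",
#     permissions_payload("^app1-.*$"), auth)
```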
If an Elastic IP address is assigned to one or more virtual machines running PaaS RabbitMQ, the service may become publicly available. Therefore, use cloud security groups so that your services can be accessed only from the addresses and networks you use.
Connection to the service#
Once deployed, the RabbitMQ service accepts client connections over TCP on port 5672 on every instance where it runs.
In a high-availability deployment, specify the addresses of all cluster VMs on the application side to ensure automatic failover to a running node if any node fails.
For convenience, the high-availability RabbitMQ service includes an HAProxy installation that monitors service health and automatically redirects requests to running instances. HAProxy runs on each cluster node on port tcp/5000. Thus, you can access this port on any node, and HAProxy will redirect the request to a running RabbitMQ instance.
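A client can therefore target the HAProxy port on a single node instead of listing every VM. The sketch below builds such a connection URL; the hostname and credentials are placeholders, and the third-party pika package is assumed for the commented-out connection calls.

```python
# Sketch: an AMQP connection through the per-node HAProxy endpoint (tcp/5000).
# Hostname and credentials are placeholders for your own values.

def amqp_url(host, user, password, port=5000, vhost="%2F"):
    """Build an AMQP URL pointing at the HAProxy port on a cluster node.

    %2F is the URL-encoded default vhost "/".
    """
    return f"amqp://{user}:{password}@{host}:{port}/{vhost}"

URL = amqp_url("node-1", "app1", "s3cret")
# -> "amqp://app1:s3cret@node-1:5000/%2F"

# With the pika package (pip install pika), HAProxy forwards the connection
# to a healthy RabbitMQ instance:
#
# import pika
# connection = pika.BlockingConnection(pika.URLParameters(URL))
# channel = connection.channel()
# channel.basic_publish(exchange="", routing_key="task-queue", body=b"job")
```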
In addition, a management interface, which can be accessed from other nodes via the rabbitmqctl utility or a browser, is enabled on port tcp/15672. To open the web interface for managing RabbitMQ settings, enter the following address in the browser: http://<vm-address>:15672, where <vm-address> is a private or public IP address of a virtual machine on which the service runs. The public address can be useful if you are connecting over the Internet. You can manage the RabbitMQ cluster by connecting to any of the cluster's instances.
Monitoring#
To monitor the service, we recommend enabling the Prometheus cloud monitoring service. If you want to use your own monitoring server, connect it to the Prometheus exporter rabbitmq_exporter, which is installed with RabbitMQ and runs on port tcp/9090.