Usage recommendations#
This section provides recommendations on using the caching services available in K2 Cloud.
Memcached#
Service launch options#
A standalone caching instance is deployed on a virtual machine instance in the selected availability zone.
If necessary, you can run several standalone instances in different availability zones and aggregate them into a pool using third-party tools such as mcrouter or twemproxy.
Connecting to Memcached#
Once the PaaS service is successfully launched, the service page in the web interface displays the address and port that Memcached listens on. By default, Memcached listens on port tcp/11211.
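As an illustration, the Memcached text protocol can be exercised with a few lines of Python over a plain TCP socket. The framing below is a minimal sketch; the key, value, and endpoint are placeholder examples, not values defined by the service:

```python
MEMCACHED_PORT = 11211  # default Memcached port (tcp/11211)

def set_command(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Frame a Memcached text-protocol 'set' request:
    a header line followed by the data block."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def get_command(key: str) -> bytes:
    """Frame a Memcached text-protocol 'get' request."""
    return f"get {key}\r\n".encode()

# To talk to the real service, send these bytes over a TCP connection,
# e.g. socket.create_connection((host, MEMCACHED_PORT)), and read the reply.
print(set_command("greeting", b"hello"))  # b'set greeting 0 0 5\r\nhello\r\n'
```

In practice you would normally use a ready-made client library instead, but seeing the raw framing makes it clear why any tool that can open a TCP connection to the listed address and port can query the service.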
Service monitoring#
To monitor the service, we recommend using the Prometheus cloud monitoring service. If you want to connect your own monitoring server, you can use mcrouter to track the state of the instances and check their health.
Redis#
Service launch options#
A standalone caching instance is deployed on a virtual machine instance in the selected availability zone.
A high-availability service is deployed in a cluster of three instances across three availability zones.
High-availability solution architecture#
There are two high-availability architecture options for Redis: one based on the Sentinel monitoring service and one based on native Redis clustering (Redis Cluster), available since Redis 3.0. The first option provides high availability for small installations, while the second gives large installations better scalability and performance.
Redis Sentinel#
K2 Cloud offers a three-node Sentinel configuration, with the master running on one node and replicas on the other two. Both a Redis server and Redis Sentinel run on every node. In case of failures and other malfunctions, Sentinel starts automatic failover. For details about Sentinel functionality, see the official documentation.
Sentinel processes interact with each other and monitor the health of the Redis nodes. If a Redis node, server, or Sentinel service fails, the remaining Sentinel processes elect a new master. Automatic failover then begins: the selected replica becomes the new master, while the remaining replica is reconfigured to replicate from it. Applications that use the Redis server are notified of the new connection address.
Note
Redis uses asynchronous replication, so such a distributed system of Redis nodes and Sentinel processes does not guarantee that acknowledged writes are preserved in case of a failure.
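For reference, a Sentinel deployment is driven by a handful of directives in sentinel.conf. The fragment below is illustrative only: the actual configuration used by the K2 Cloud service is managed for you, and the name mymaster, the address, and the timeouts are placeholder values.

```
# Monitor a master named "mymaster"; a quorum of 2 suits a three-node setup.
sentinel monitor mymaster 10.0.0.1 6379 2
# Consider the master down after 5 s without a valid reply.
sentinel down-after-milliseconds mymaster 5000
# Abort a failover attempt that has not completed within 60 s.
sentinel failover-timeout mymaster 60000
# Resynchronize replicas with the new master one at a time.
sentinel parallel-syncs mymaster 1
```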
High-availability Redis service#
A high-availability Redis service is deployed in a cluster of three instances. Each instance hosts one master and one replica, and each replica replicates a master running on another node (cross-replication).
Multiple master nodes ensure high performance and linear scalability for a cluster, while having at least one replica per master ensures its high availability. For more information, read the Redis official documentation.
All master nodes share a single data space, which is divided into shards, so each master holds a subset of the shards (sharding).
When you work with the Redis service, requests are processed by the masters. Any master can be accessed for read and write operations. If the requested data resides in a shard on another master, Redis itself redirects the client to the respective instance.
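The shard assignment itself is easy to reproduce: Redis Cluster maps every key to one of 16384 hash slots as HASH_SLOT = CRC16(key) mod 16384, using the CRC-16/XMODEM checksum. The sketch below shows the computation; cluster-aware clients do this for you. (Keys containing a {...} hash tag hash only the tag's content, which this sketch omits.)

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (polynomial 0x1021, initial value 0),
    the checksum Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Map a key to one of the 16384 Redis Cluster hash slots."""
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("123456789"))  # 12739 (CRC16 = 0x31C3, per the cluster spec)
```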
In the event of a failure of a master instance, or of the entire node on which it runs, automatic failover to the corresponding replica occurs, and the replica takes over the master role.
Request distribution using HAProxy#
For simplicity and ease of use of the clustered Redis service, we built an HAProxy installation into it. HAProxy monitors node statuses and roles and automatically distributes requests only to those VM instances currently acting as masters. HAProxy listens on port tcp/5000 on all cluster nodes, so this port can be accessed from any node, and HAProxy will forward the request to one of the masters.
To ensure maximum availability – the key advantage of a high-availability service – use the HAProxy ports of all nodes rather than of any single one. The most convenient way to do this is to use another balancing service configured on your application side: you can use the integrated K2 Cloud load balancer or deploy your own.
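If you prefer not to run a separate balancer, a thin client-side fallback over all HAProxy endpoints is enough for many applications. A minimal sketch, assuming a hypothetical is_alive liveness check and placeholder node addresses:

```python
# Placeholder addresses: the HAProxy port (tcp/5000) on every cluster node,
# as shown on the service page in the web interface.
ENDPOINTS = [("10.0.0.1", 5000), ("10.0.0.2", 5000), ("10.0.0.3", 5000)]

def pick_endpoint(endpoints, is_alive):
    """Return the first endpoint that passes the liveness check,
    trying each one at most once."""
    for endpoint in endpoints:
        if is_alive(endpoint):
            return endpoint
    raise ConnectionError("no live endpoints")

# In practice is_alive() would attempt a short TCP connect to the HAProxy
# port; here any callable taking a (host, port) tuple will do.
```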
Connect to caching service#
After a successful launch, the addresses and ports the service listens on can be found on the Redis caching service page. For a standalone service, only one endpoint is available for connection; for a high-availability service, there are multiple endpoints, depending on the selected architecture.
Redis master instances listen on port tcp/6379, while replicas listen on port tcp/6380. In the event of a failure of a master instance, or of the entire node on which it runs, automatic failover occurs: the replica becomes a master but continues to listen on port tcp/6380. Therefore, if the cluster has been running for a long time, the port number alone is not enough to tell whether an instance is a master or a replica.
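To determine the actual role, ask the instance itself: the INFO replication command reports a role: field. A small parsing sketch, with an abbreviated sample of the command's output:

```python
def parse_role(info_replication: str) -> str:
    """Extract the 'role' field from the output of Redis INFO replication."""
    for line in info_replication.splitlines():
        if line.startswith("role:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("role field not found")

# Abbreviated example of what INFO replication returns on a master:
SAMPLE = "# Replication\r\nrole:master\r\nconnected_slaves:1\r\n"
print(parse_role(SAMPLE))  # master
```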
Important
The cluster maintains the availability of the caching service even when a node fails, but only if you use all cluster endpoints rather than any single one.
Authentication in Redis#
When working in a closed environment, you don’t need to enable authentication in Redis, but you may need to password-protect your installation in an untrusted environment.
You can set the password when creating the database service. After the service has been launched with the specified password, you can access it only after authentication (via the AUTH command).
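With redis-cli, authenticating is simply AUTH followed by the password. If you write your own client, the command is sent in RESP, the Redis wire protocol, as an array of bulk strings. A minimal encoder sketch, using a placeholder password:

```python
def resp_command(*parts: str) -> bytes:
    """Encode a Redis command in RESP: an array ('*') of bulk strings ('$')."""
    out = f"*{len(parts)}\r\n".encode()
    for part in parts:
        data = part.encode()
        out += f"${len(data)}\r\n".encode() + data + b"\r\n"
    return out

print(resp_command("AUTH", "s3cret"))
# b'*2\r\n$4\r\nAUTH\r\n$6\r\ns3cret\r\n'
```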
The deployed Redis service can be managed via the standard redis-cli console client in Linux or through graphical tools. A good overview of graphical tools can be found in the blog.
If you want to connect to the Redis service remotely, be sure to assign an Elastic IP to the instance where the service is deployed and allow connections to ports tcp/6379-6380 in the appropriate security group.
Important
Be careful when allowing access to Redis from the Internet. We recommend restricting access to specific IP addresses and using strong passwords.