Amazon ElastiCache – An Introduction


In a web-driven world, catering to users’ requests in real time is the goal of every website. Because performance and speed matter, a caching layer like Amazon ElastiCache is often the first tool a website employs to serve mostly static, frequently accessed data. So what benefits does a caching layer provide?

Caching is a technique for storing frequently accessed information (HTML pages, images, and other mostly static content) in a temporary memory location on the server. Read-intensive web applications are the best use-case candidates for a cache service. A number of caching servers are used across applications, the most notable being Memcached, Redis, and Varnish. I also suggest you have a look at a previous post about Amazon storage options, because it relates to this one.

There are various ways to implement caching with those technologies. However, with such a large number of organizations moving their infrastructure to the cloud, many cloud vendors now provide caching as a service. Amazon ElastiCache is one such popular caching service: it offers Memcached- or Redis-based caching and takes care of installation, configuration, high availability (HA), cache failover, and clustering. Let’s talk a bit about Memcached and Redis before exploring the AWS versions.

Memcached:

Memcached is an open-source, distributed, in-memory key-value caching system for small chunks of arbitrary data, such as the results of database calls, API calls, or page rendering. Memcached has long been the first choice of caching technology for users and developers around the world.

Redis:

Redis is a newer technology and is often considered a superset of Memcached, meaning it offers more and performs better. Redis scores over Memcached in a few areas, discussed briefly below.

  • Redis implements six fine-grained policies for purging old data, while Memcached uses the LRU (Least Recently Used) algorithm.
  • Redis supports key names and values up to 512 MB, whereas Memcached supports only 1 MB.
  • Redis uses a hashmap to store objects whereas Memcached uses serialized strings.
  • Redis provides a persistence layer and supports complex types such as hashes, lists (ordered collections, well suited for queues), sets (unordered collections of non-repeating values), and sorted sets (ordered/ranked collections of non-repeating values).
  • Redis offers built-in pub/sub, transactions (with optimistic locking), and Lua scripting; sorted sets and pub/sub are illustrated in the sketch after this list.
  • Redis 3.0 supports clustering.
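
To make the difference concrete, here is a minimal sketch of two features Memcached does not offer: sorted sets and pub/sub. It assumes the open-source redis-py client (version 3.x) and a reachable Redis instance; the host and port are placeholders.

```python
import redis

# Placeholder connection details; point these at your own Redis instance.
r = redis.StrictRedis(host="localhost", port=6379, db=0)

# Sorted set: keep a ranked leaderboard directly in the cache.
r.zadd("leaderboard", {"alice": 120, "bob": 95, "carol": 150})
print(r.zrevrange("leaderboard", 0, 2, withscores=True))  # highest scores first

# Pub/sub: a subscriber is notified of events published on a channel.
p = r.pubsub()
p.subscribe("events")
r.publish("events", "cache warmed")
print(p.get_message(timeout=1))  # subscription confirmation
print(p.get_message(timeout=1))  # the published message
```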

Amazon ElastiCache:

Amazon ElastiCache is a Caching-as-a-Service offering from Amazon Web Services. It simplifies setting up, managing, and scaling a distributed in-memory cache environment in the cloud, providing a high-performance, scalable, and cost-effective caching solution while removing the complexity of deploying and managing a distributed cache environment yourself.
From the Amazon ElastiCache documentation, we learn that ElastiCache has features to enhance reliability for critical production deployments, including:

  • Automatic detection and recovery from cache node failures.
  • Automatic failover (Multi-AZ) of a failed primary cluster to a read replica in Redis replication groups.
  • Flexible Availability Zone placement of nodes and clusters.
  • Integration with other Amazon Web Services such as Amazon EC2, CloudWatch, CloudTrail, and Amazon SNS, to provide a secure, high-performance, managed in-memory caching solution.

Amazon ElastiCache provides two caching engines: Memcached and Redis. You can move an existing Memcached or Redis caching implementation to Amazon ElastiCache effortlessly; simply change the Memcached/Redis endpoints in your application.
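
For example, an existing Memcached client only needs to be pointed at the ElastiCache endpoint. The sketch below uses the open-source pymemcache client, and the endpoint shown is a made-up placeholder; substitute the one from your ElastiCache console.

```python
from pymemcache.client.base import Client

# Hypothetical ElastiCache node endpoint; replace with your own.
ELASTICACHE_ENDPOINT = ("my-cache.xxxxxx.0001.use1.cache.amazonaws.com", 11211)

# The application code is unchanged; only the host/port differs from a
# self-managed Memcached server.
mc = Client(ELASTICACHE_ENDPOINT)
mc.set("greeting", "hello from ElastiCache")
print(mc.get("greeting"))
```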

Before implementing Amazon ElastiCache, let’s get familiar with a few related terms:

ElastiCache Node:

Nodes are the smallest building blocks of the Amazon ElastiCache service. A node is essentially a fixed-size chunk of secure, network-attached RAM, and each node has its own DNS name and port.

ElastiCache Cluster:

A cluster is a logical collection of nodes. If your ElastiCache cluster is made up of Memcached nodes, you can place the nodes in multiple Availability Zones (AZs) to implement high availability. In the case of Redis, a cluster is always a single node; you achieve high availability with replication groups whose clusters are spread across AZs.
A Memcached cluster has multiple nodes, and the cached data is horizontally partitioned across them. Each node in the cluster can serve both reads and writes.

A Redis cluster, on the other hand, has only one node, which is the master node. Redis clusters do not support data partitioning. Instead, a replication group contains up to five read-only replication nodes that maintain copies of the data from the master node, which is the only writeable node. The data is copied asynchronously to the read replicas, and read requests are spread across the read-replica clusters. Having multiple read replicas in different AZs enables high availability.
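
A minimal sketch of how an application might split traffic across such a group, assuming the redis-py client; the primary and replica endpoints are placeholders for the ones ElastiCache gives you.

```python
import redis

# Hypothetical endpoints of a replication group; replace with your own.
PRIMARY_ENDPOINT = "my-redis.xxxxxx.0001.use1.cache.amazonaws.com"
REPLICA_ENDPOINT = "my-redis-replica.xxxxxx.0001.use1.cache.amazonaws.com"

writer = redis.StrictRedis(host=PRIMARY_ENDPOINT, port=6379)  # only the primary accepts writes
reader = redis.StrictRedis(host=REPLICA_ENDPOINT, port=6379)  # replicas are read-only

writer.set("session:42", "active")

# Replication is asynchronous, so a replica can briefly lag behind the primary.
print(reader.get("session:42"))
```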

ElastiCache Memcached:

Until now we have discussed both caching engines, but I may seem biased towards Redis. So the question is: if Redis is all-powerful, why doesn’t ElastiCache provide only Redis? There are a few good reasons for using Memcached:

  • It is the simplest caching model.
  • It is helpful when you need to run large nodes with multiple cores or threads.
  • It offers the ability to scale out/in, adding and removing nodes on-demand.
  • It handles partitioning data across multiple shards.
  • It is well suited to caching simple objects, such as the results of database queries.
  • It may be necessary to support an existing Memcached cluster.

Each node in a Memcached cluster has its own endpoint. The cluster also has an endpoint called the configuration endpoint. If you enable Auto Discovery and connect to the configuration endpoint, your application will automatically know each node endpoint – even after adding or removing nodes from the cluster. The latest version of Memcached supported in ElastiCache is 1.4.24.
In a Memcached-based ElastiCache cluster, there can be a maximum of 20 nodes where data is horizontally partitioned. If you require more, you’ll have to request a limit increase via the ElastiCache Limit Increase Request form.
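
Under the hood, Auto Discovery works by asking the configuration endpoint for the current node list (the command is config get cluster on Memcached 1.4.14 and later). Production applications would use a cluster-aware client such as the AWS-provided ElastiCache Cluster Client, but a bare-bones sketch of the exchange looks like this; the endpoint shown is a placeholder.

```python
import socket

# Hypothetical configuration endpoint from the ElastiCache console.
CONFIG_ENDPOINT = ("my-cache.xxxxxx.cfg.use1.cache.amazonaws.com", 11211)

# Ask the configuration endpoint for the current cluster topology.
sock = socket.create_connection(CONFIG_ENDPOINT, timeout=5)
sock.sendall(b"config get cluster\r\n")
response = sock.recv(4096).decode()
sock.close()

# The response lists a "hostname|ip|port" entry for every node, so the client
# always knows the node endpoints, even after nodes are added or removed.
print(response)
```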

Apart from that, you can upgrade the Memcached engine version. Keep in mind that the upgrade process is disruptive: the cached data in an existing cluster is lost when you upgrade.
Changing the number of nodes in a cluster is only possible for a Memcached-based ElastiCache cluster. However, this operation requires careful design of the hashing technique you will use to map the keys across the nodes. One of the best techniques is to use a consistent hashing algorithm for keys.

Consistent hashing uses an algorithm such that, whenever a node is added to or removed from a cluster, the proportion of keys that must be moved is roughly 1/n (where n is the new number of nodes). Scaling from 1 to 2 nodes results in 1/2 (50 percent) of the keys being moved, the worst case. Scaling from 9 to 10 nodes results in 1/10 (10 percent) of the keys being moved. An unsuitable algorithm results in heavy cache misses, increasing the load on the database and defeating the purpose of a caching layer.
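
Here is a minimal, illustrative sketch of the idea in Python. It is not what a production client does (real Memcached clients use ketama-style rings with many virtual nodes per physical node), but it shows why adding a node only remaps a fraction of the keys.

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Toy consistent hash ring: each node owns the arc of the ring up to its hash."""

    def __init__(self, nodes):
        self.ring = sorted((self._hash(n), n) for n in nodes)
        self.points = [point for point, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        idx = bisect(self.points, self._hash(key)) % len(self.ring)
        return self.ring[idx][1]

keys = ["user:%d" % i for i in range(10000)]
before = ConsistentHashRing(["node-1", "node-2", "node-3"])
after = ConsistentHashRing(["node-1", "node-2", "node-3", "node-4"])

# Only keys that now land on node-4 move; with virtual nodes this converges to ~1/n.
moved = sum(before.node_for(k) != after.node_for(k) for k in keys)
print("%.0f%% of keys moved" % (100.0 * moved / len(keys)))
```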

ElastiCache Redis:

We have already discussed Redis and replication groups. All things considered, Redis will normally be the better choice:

  • Redis supports complex data types, such as strings, hashes, lists, and sets.
  • Redis sorts or ranks in-memory data-sets.
  • Redis provides persistence for your key store.
  • Redis replicates the cache data from the primary to one or more read replicas for read-intensive applications.
  • Redis has automatic failover capabilities if the primary node fails.
  • Redis has publish and subscribe (pub/sub) capabilities where the client is informed of events on the server.
  • Redis has back-up and restore capabilities.

Currently, Amazon ElastiCache supports Redis 2.8.23 and lower. Redis 2.8.6 and higher is a significant step up, because a Redis cluster on version 2.8.6 or higher can have Multi-AZ enabled. Upgrading is a non-disruptive process, and the cached data is retained.
If you want to persist the cached data, Redis offers AOF (Append Only File) persistence. The AOF file is useful in recovery scenarios: in case of a node restart or service crash, Redis replays the updates from the AOF file, recovering the data that would otherwise be lost. However, AOF does not help in the event of a hardware failure, and AOF operations are slow.

A better way is to have a replication group with one or more read replicas in different availability zones and enable Multi-AZ instead of using AOF. Because there is no need for AOF in this scenario, ElastiCache disables AOF on Multi-AZ replication groups.

All the nodes in a replication group reside in the same region but in multiple availability zones (AZs). An ElastiCache replication group consists of a primary cluster and up to five read replicas. In the case of a primary cluster or availability zone failure, if your replication group is Multi-AZ enabled, ElastiCache will automatically detect the primary cluster’s failure, select a read replica cluster, and promote it to primary cluster so that you can resume writing to the new primary cluster as soon as promotion is complete.
ElastiCache also propagates the DNS name of the promoted replica, so if your application writes to the primary endpoint, no endpoint change is required in your application. Make sure that your cache engine is Redis 2.8.6 or higher and that your node types are larger than the t1 and t2 families, which do not support Multi-AZ.
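
As an illustration, the following boto3 sketch creates a Multi-AZ-enabled replication group; all identifiers, the region, and the node type are placeholder values, not a prescription.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# All names and sizes below are hypothetical; adjust them for your environment.
elasticache.create_replication_group(
    ReplicationGroupId="my-redis-group",
    ReplicationGroupDescription="Redis replication group with Multi-AZ failover",
    Engine="redis",
    EngineVersion="2.8.23",
    CacheNodeType="cache.m3.medium",   # Multi-AZ requires non-t1/t2 node types
    NumCacheClusters=3,                # one primary plus two read replicas
    PreferredCacheClusterAZs=["us-east-1a", "us-east-1b", "us-east-1c"],
    AutomaticFailoverEnabled=True,     # enables Multi-AZ automatic failover
)
```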

A Redis cluster also supports backup and restore, which is useful when you want to create a new cluster from existing cluster data.
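
A short boto3 sketch of that workflow, again with placeholder identifiers: take a snapshot of an existing cluster, then seed a new cluster from it.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Snapshot an existing Redis cluster (identifiers are placeholders).
elasticache.create_snapshot(
    CacheClusterId="my-redis-group-001",
    SnapshotName="my-redis-backup",
)

# Later, create a brand-new cluster seeded from that snapshot.
elasticache.create_cache_cluster(
    CacheClusterId="my-redis-restored",
    Engine="redis",
    CacheNodeType="cache.m3.medium",
    NumCacheNodes=1,                  # a Redis cluster always has a single node
    SnapshotName="my-redis-backup",
)
```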

Conclusion:

Amazon ElastiCache offloads the management, monitoring, and operation of caching clusters in the cloud. It offers detailed monitoring via Amazon CloudWatch at no extra cost and is a pay-as-you-go service. I encourage you to use ElastiCache for your cloud-based web applications requiring split-second response times. I also suggest you take a look at the post Michael Sheehy wrote in the spring on Amazon Storage Options, because it offers a broad view of the topic in a very short article. If you want more background in a structured framework, Cloud Academy has a well-regarded course, Storage on AWS, and they offer a 7-day free trial.

Cloud Academy also has a strong series of hands-on labs focused on Amazon Web Services, including ElastiCache. These labs are a great way of putting what you just learned into practice immediately. Good luck, and let us know what you think. We believe in continuous learning, and your feedback is invaluable.
