In a web-driven world, catering to users' requests in real time is the goal of every website. Because performance and speed matter, a caching layer like Amazon ElastiCache is often the first tool a website employs to serve mostly static and frequently accessed data. So what benefits does a caching layer provide a user?
Caching is a technique to store frequently accessed information, such as HTML pages, images, and other static content, in a temporary memory location on the server. Read-intensive web applications are the best use-case candidates for a cache service. There are a number of caching servers used across applications, the most notable being Memcached, Redis, and Varnish. I suggest you have a look at a previous post about Amazon storage options, because it relates to this post.
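The read-intensive pattern described above is usually implemented as "cache-aside": check the cache first, and only go to the database on a miss. Here is a minimal pure-Python sketch of that flow; the dict-based cache and `slow_db_fetch` are illustrative stand-ins for a real cache service such as ElastiCache and a real database.

```python
import time

# Minimal cache-aside sketch. The dict below stands in for a cache service;
# slow_db_fetch stands in for an expensive database query. Illustrative only.

CACHE = {}          # {key: (value, expires_at)}
TTL_SECONDS = 300   # entries expire after five minutes

def slow_db_fetch(key):
    """Stand-in for an expensive database query."""
    return f"value-for-{key}"

def get(key):
    entry = CACHE.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                              # cache hit
    value = slow_db_fetch(key)                       # cache miss: hit the database
    CACHE[key] = (value, time.time() + TTL_SECONDS)  # populate for next time
    return value

print(get("user:42"))  # first call misses and fetches from the database
print(get("user:42"))  # second call is served from the cache
```

In a real deployment, only the cache backend changes: the `CACHE` dict is replaced by a Memcached or Redis client pointed at an ElastiCache endpoint, while the read-through logic stays the same.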
There are various ways to implement caching using those technologies. However, with such a large number of organizations moving their infrastructure to the cloud, many cloud vendors now provide caching as a service. Amazon ElastiCache is one such popular web caching service. It provides Memcached- or Redis-based caching and takes care of installation, configuration, high availability (HA), failover, and clustering. Let's talk a bit about Memcached and Redis before exploring the AWS versions.
Memcached is an open-source, distributed, in-memory key-value caching system for small chunks of arbitrary data, such as the results of database calls, API calls, or page rendering. Memcached has long been the first choice of caching technology for users and developers around the world.
Redis is a newer technology and is often considered a superset of Memcached. That means Redis offers more and performs better than Memcached. Redis scores over Memcached in a few areas that we will discuss briefly.
- Redis implements six fine-grained policies for purging old data, while Memcached uses the LRU (Least Recently Used) algorithm.
- Redis supports keys and values up to 512 MB in size, whereas Memcached caps values at 1 MB.
- Redis uses a hashmap to store objects whereas Memcached uses serialized strings.
- Redis provides a persistence layer and supports complex types such as hashes, lists (ordered collections, well suited for queues), sets (unordered collections of non-repeating values), and sorted sets (ordered/ranked collections of non-repeating values).
- Redis offers built-in pub/sub, transactions (with optimistic locking), and Lua scripting.
- Redis 3.0 supports clustering.
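To make the sorted-set type concrete, here is a small pure-Python emulation of the ZADD/ZRANGE semantics listed above. A real application would issue these commands against a Redis server; this stand-alone sketch only illustrates the "ranked collection of non-repeating members" behavior.

```python
# Pure-Python emulation of a Redis sorted set: unique members ranked by score.
# Illustrative only; a real application would call ZADD/ZRANGE on Redis itself.

leaderboard = {}  # member -> score; re-adding a member updates its score

def zadd(member, score):
    leaderboard[member] = score

def zrange_with_scores():
    """Return members ordered by ascending score, like ZRANGE ... WITHSCORES."""
    return sorted(leaderboard.items(), key=lambda kv: kv[1])

zadd("alice", 120)
zadd("bob", 95)
zadd("alice", 80)   # non-repeating: alice's score is updated, not duplicated

print(zrange_with_scores())  # [('alice', 80), ('bob', 95)]
```

This is exactly the kind of structure (a leaderboard, for example) that would require serialization and client-side sorting in Memcached, but is a native server-side type in Redis.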
Amazon ElastiCache is a Caching-as-a-Service offering from Amazon Web Services. It simplifies setting up, managing, and scaling a distributed in-memory cache environment in the cloud, providing a high-performance, scalable, and cost-effective caching solution while removing the complexity of deploying and managing a distributed cache environment yourself.
From the Amazon ElastiCache documentation, we learn that ElastiCache has features to enhance reliability for critical production deployments, including:
- Automatic detection and recovery from cache node failures.
- Automatic failover (Multi-AZ) of a failed primary cluster to a read replica in Redis replication groups.
- Flexible Availability Zone placement of nodes and clusters.
- Integration with other Amazon Web Services such as Amazon EC2, CloudWatch, CloudTrail, and Amazon SNS, to provide a secure, high-performance, managed in-memory caching solution.
Amazon ElastiCache provides two caching engines: Memcached and Redis. You can move an existing Memcached or Redis caching implementation to Amazon ElastiCache with minimal effort; simply point your application at the new Memcached or Redis endpoints.
Before implementing Amazon ElastiCache, let’s get familiar with a few related terms:
Nodes are the smallest building blocks of the Amazon ElastiCache service. A node is essentially a fixed-size chunk of network-attached RAM, each with its own DNS name and port.
Clusters are logical collections of nodes. If your ElastiCache cluster runs Memcached, it can have nodes in multiple Availability Zones (AZs) for high availability. A Redis cluster, by contrast, always consists of a single node; for high availability you can have replication groups whose members span AZs.
A Memcached cluster has multiple nodes, with the cached data horizontally partitioned across them. Each node in the cluster can serve both reads and writes.
A Redis cluster, on the other hand, has only one node, the master node, and does not support data partitioning. Instead, a replication group contains up to five read-only replica nodes that maintain copies of the data from the master node, which is the only writable node. Data is copied asynchronously to the read replicas, and read requests can be spread across them. Placing read replicas in different AZs enables high availability.
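The read/write split just described can be sketched as a small routing function. The endpoints below are made up for illustration; in a real replication group each node has its own DNS endpoint, writes must go to the primary, and reads can be spread round-robin across the replicas.

```python
import itertools

# Sketch of read/write splitting against a Redis replication group.
# Endpoint names are hypothetical; in ElastiCache each node has its own
# DNS endpoint. Writes go to the primary, reads rotate across replicas.

PRIMARY = "primary.example.cache.amazonaws.com:6379"
REPLICAS = [
    "replica-1.example.cache.amazonaws.com:6379",
    "replica-2.example.cache.amazonaws.com:6379",
]
_replica_cycle = itertools.cycle(REPLICAS)

def endpoint_for(operation):
    """Route writes to the primary; round-robin reads across replicas."""
    if operation == "write":
        return PRIMARY
    return next(_replica_cycle)

print(endpoint_for("write"))  # always the primary endpoint
print(endpoint_for("read"))   # a replica endpoint
print(endpoint_for("read"))   # the next replica endpoint
```

Because replication is asynchronous, reads routed this way may be slightly stale; that trade-off is what makes this pattern a fit for read-intensive rather than strongly consistent workloads.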
Until now we have discussed both caching engines, and I may seem biased toward Redis. So the question is: if Redis is all-powerful, why doesn't ElastiCache provide only Redis? There are a few good reasons for using Memcached:
- It is the simplest caching model.
- It is helpful for people needing to run large nodes with multiple cores or threads.
- It offers the ability to scale out/in, adding and removing nodes on-demand.
- It handles partitioning data across multiple shards.
- It caches objects, such as database query results.
- It may be necessary to support an existing Memcached cluster.
Each node in a Memcached cluster has its own endpoint. The cluster also has an endpoint called the configuration endpoint. If you enable Auto Discovery and connect to the configuration endpoint, your application will automatically know each node endpoint, even after adding or removing nodes from the cluster. The latest version of Memcached supported in ElastiCache is 1.4.24.
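Per the AWS Auto Discovery documentation, the configuration endpoint returns the node list in a simple text format: a version number on one line, then space-separated `host|ip|port` entries. Here is a sketch of parsing such a response; the payload string below is made up, but follows that documented shape.

```python
# Parsing the node list returned by the Memcached configuration endpoint.
# The sample payload is fabricated but follows the documented shape:
# a version number on one line, then space-separated host|ip|port entries.

sample_payload = (
    "12\n"
    "cache.0001.use1.cache.amazonaws.com|10.0.0.1|11211 "
    "cache.0002.use1.cache.amazonaws.com|10.0.0.2|11211\n"
)

def parse_config(payload):
    version_line, nodes_line = payload.strip().split("\n")
    nodes = []
    for entry in nodes_line.split(" "):
        host, ip, port = entry.split("|")
        nodes.append((host, ip, int(port)))
    return int(version_line), nodes

version, nodes = parse_config(sample_payload)
print(version)     # 12
print(len(nodes))  # 2 node endpoints discovered
```

In practice you would not write this parser yourself: the ElastiCache cluster clients for Java, PHP, and .NET poll the configuration endpoint and refresh the node list for you.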
In a Memcached-based ElastiCache cluster, there can be a maximum of 20 nodes where data is horizontally partitioned. If you require more, you’ll have to request a limit increase via the ElastiCache Limit Increase Request form.
In addition, you can upgrade the Memcached engine. Keep in mind that the Memcached engine upgrade process is disruptive: the cached data in an existing cluster is lost when you upgrade.
Changing the number of nodes in a cluster is only possible for a Memcached-based ElastiCache cluster. However, this operation requires careful design of the hashing technique you will use to map the keys across the nodes. One of the best techniques is to use a consistent hashing algorithm for keys.
Consistent hashing uses an algorithm such that whenever a node is added or removed from a cluster, the number of keys that must be moved is roughly 1 / n (where n is the new number of nodes). Scaling from 1 to 2 nodes results in 1/2 (50 percent) of the keys being moved, the worst case. Scaling from 9 to 10 nodes results in 1/10 (10 percent) of the keys being moved. An unsuitable algorithm will result in heavy cache misses, thus increasing the load on the database and defeating the purpose of a caching layer.
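The 1/n behavior above can be demonstrated with a small hash ring. Each node is placed at many "virtual" points on the ring, and a key maps to the first node clockwise from its hash. This is a minimal sketch with made-up node names, not a production client.

```python
import hashlib
from bisect import bisect

# Consistent hashing sketch: each node occupies many virtual points on a
# hash ring; a key maps to the first node at or after the key's hash value.

VNODES = 100  # virtual points per node smooth out the key distribution

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def build_ring(nodes):
    return sorted((_hash(f"{n}#{i}"), n) for n in nodes for i in range(VNODES))

def lookup(ring, key):
    points = [p for p, _ in ring]
    idx = bisect(points, _hash(key)) % len(ring)  # wrap around the ring
    return ring[idx][1]

keys = [f"key-{i}" for i in range(1000)]
ring3 = build_ring(["node-1", "node-2", "node-3"])
ring4 = build_ring(["node-1", "node-2", "node-3", "node-4"])
moved = sum(lookup(ring3, k) != lookup(ring4, k) for k in keys)
print(moved / len(keys))  # roughly 1/4 of keys move when adding a 4th node
```

Compare this with naive `hash(key) % node_count` placement, where going from 3 to 4 nodes remaps about three quarters of all keys and floods the database with cache misses.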
We have discussed Redis and the replication groups earlier. All things considered, Redis will normally be the better choice:
- Redis supports complex data types, such as strings, hashes, lists, and sets.
- Redis sorts or ranks in-memory datasets.
- Redis provides persistence for your key store.
- Redis replicates the cache data from the primary to one or more read replicas for read-intensive applications.
- Redis has automatic fail-over capabilities if the primary node fails.
- Redis has publish and subscribe (pub/sub) capabilities where the client is informed of events on the server.
- Redis has back-up and restore capabilities.
At the time of writing, Amazon ElastiCache supports Redis versions up to 2.8.23. Version 2.8.6 is a significant milestone, because a Redis cluster on version 2.8.6 or higher can have Multi-AZ enabled. Upgrading the Redis engine is a non-disruptive process, and the cached data is retained.
If you want to persist the cache data, Redis has something called the AOF (Append Only File). The AOF is useful in recovery scenarios: after a node restart or service crash, Redis replays the updates from the AOF file, recovering the lost data. However, AOF does not help in the event of a hardware failure, and AOF operations slow Redis down.
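The replay idea behind AOF can be shown with a miniature in-memory version: every write is appended to a log, and replaying the log rebuilds the data set. Real Redis persists actual commands to a file on disk; this is only an illustrative toy.

```python
# Miniature append-only-file sketch: every write is appended to a log,
# and replaying the log after a "restart" rebuilds the in-memory state.
# Real Redis AOF writes commands to disk; this in-memory toy just shows
# the replay mechanism.

aof_log = []   # stand-in for the append-only file on disk
store = {}

def set_key(key, value):
    aof_log.append(("SET", key, value))  # log first, then apply
    store[key] = value

def replay(log):
    """Rebuild the data set from the log, as Redis does on restart."""
    recovered = {}
    for op, key, value in log:
        if op == "SET":
            recovered[key] = value
    return recovered

set_key("session:1", "alice")
set_key("session:2", "bob")
store.clear()            # simulate a node restart wiping RAM
store = replay(aof_log)  # recover state from the log
print(store)  # {'session:1': 'alice', 'session:2': 'bob'}
```

The cost is visible even in the toy: every write does extra work on the log, which is why replication to read replicas is usually the preferred durability story on ElastiCache.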
A better way is to have a replication group with one or more read replicas in different Availability Zones and to enable Multi-AZ instead of using AOF. Because AOF provides no benefit in this scenario, ElastiCache disables it on Multi-AZ replication groups.
All the nodes in a replication group reside in the same region but can span multiple Availability Zones (AZs). An ElastiCache replication group consists of a primary cluster and up to five read replicas. If the primary cluster or its Availability Zone fails and your replication group is Multi-AZ enabled, ElastiCache automatically detects the failure, selects a read replica, and promotes it to primary, so you can resume writing to the new primary as soon as the promotion is complete.
ElastiCache also propagates the DNS of the promoted replica, so if your application writes to the primary endpoint, no endpoint change is required in your application. Make sure your cache engine is Redis 2.8.6 or higher and that you use node types larger than t1 and t2.
Redis clusters also support backup and restore, which is useful when you want to create a new cluster from existing cluster data.
Amazon ElastiCache offloads the management, monitoring, and operation of caching clusters in the cloud. It provides detailed monitoring via Amazon CloudWatch at no extra cost and is a pay-as-you-go service. I encourage you to use ElastiCache for your cloud-based web applications requiring split-second response times. I also suggest you take a look at the post Michael Sheehy wrote in the spring on Amazon Storage Options, because it offers a broad view of the topic in a very short article. If you want more background in a structured framework, Cloud Academy has a well-regarded course, Storage on AWS, and offers a 7-day free trial.
Cloud Academy also has a strong series of hands-on labs focused on Amazon Web Services, including ElastiCache. These labs are a great way to put what you just learned into practice immediately. Good luck, and let us know what you think. We believe in continuous learning, and your feedback is invaluable.