In a web-driven world, serving users’ requests in real time is the goal of every website. Because performance and speed matter, a caching layer like Amazon ElastiCache is often the first tool a website employs to serve mostly static, frequently accessed data. So what benefits does a caching layer provide?
Caching is a technique to store frequently accessed information, such as HTML pages, images, and other static content, in a temporary memory location on the server. Read-intensive web applications are the best use-case candidates for a cache service. A number of caching servers are used across applications; the most notable are Memcached, Redis, and Varnish. I suggest you have a look at a previous post about Amazon storage options, because it relates to this one.
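Most read-intensive applications use a cache in the cache-aside pattern: check the cache first and query the database only on a miss. A minimal sketch in Python, with a plain dict standing in for a Memcached/Redis client and a hypothetical `slow_db_lookup` standing in for the real database query:

```python
# Cache-aside pattern: check the cache first, fall back to the
# "database" on a miss, then populate the cache for next time.
# A plain dict stands in for a Memcached/Redis client here.

cache = {}

def slow_db_lookup(key):
    # Hypothetical stand-in for a real (slow) database query.
    return f"value-for-{key}"

def get(key):
    if key in cache:                 # cache hit: no database round trip
        return cache[key], "hit"
    value = slow_db_lookup(key)      # cache miss: query the database
    cache[key] = value               # populate the cache for next time
    return value, "miss"

print(get("user:42"))  # ('value-for-user:42', 'miss')
print(get("user:42"))  # ('value-for-user:42', 'hit')
```

The first request for a key pays the full database cost; every subsequent request for that key is served from memory, which is where the speed benefit comes from.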
There are various ways to implement caching using those technologies. However, with so many organizations moving their infrastructure to the cloud, many cloud vendors now provide caching as a service. Amazon ElastiCache is one such popular web caching service: it provides users with Memcached- or Redis-based caching and takes care of installation, configuration, high availability, failover, and clustering. Let’s talk a bit about Memcached and Redis before exploring the AWS versions.
Memcached is an open-source, distributed, in-memory key-value caching system for small chunks of arbitrary data, such as the results of database calls, API calls, or page rendering. Memcached has long been the first choice of caching technology for users and developers around the world.
Redis is a newer technology, often considered a superset of Memcached: it offers more features and generally performs better. Redis scores over Memcached in a few areas that we will discuss briefly.
- Redis implements six fine-grained policies for purging old data, while Memcached uses the LRU (Least Recently Used) algorithm.
- Redis supports keys and values up to 512 MB in size, whereas Memcached limits values to 1 MB.
- Redis uses a hashmap to store objects whereas Memcached uses serialized strings.
- Redis provides a persistence layer and supports complex types such as hashes, lists (ordered collections, suited to queues), sets (unordered collections of non-repeating values), and sorted sets (ordered/ranked collections of non-repeating values).
- Redis offers built-in pub/sub, transactions (with optimistic locking), and Lua scripting.
- Redis 3.0 supports clustering.
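The LRU policy Memcached relies on (first bullet above) is easy to illustrate. The sketch below is a toy model built on Python’s `OrderedDict`, not Memcached’s actual implementation: when the cache is full, the key that was touched least recently is evicted first.

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least recently used key when
    capacity is exceeded (illustrative, not Memcached itself)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)        # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touch "a", so "b" is now least recently used
cache.put("c", 3)      # over capacity: evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```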
Amazon ElastiCache is a Caching-as-a-Service offering from Amazon Web Services. It simplifies setting up, managing, and scaling a distributed in-memory cache environment in the cloud, providing a high-performance, scalable, and cost-effective caching solution while removing the complexity of deploying and managing a distributed cache environment yourself.
From the Amazon ElastiCache documentation, we learn that ElastiCache has features to enhance reliability for critical production deployments, including:
- Automatic detection and recovery from cache node failures.
- Automatic failover (Multi-AZ) of a failed primary cluster to a read replica in Redis replication groups.
- Flexible Availability Zone placement of nodes and clusters.
- Integration with other Amazon Web Services such as Amazon EC2, CloudWatch, CloudTrail, and Amazon SNS, to provide a secure, high-performance, managed in-memory caching solution.
Amazon ElastiCache provides two caching engines: Memcached and Redis. You can move an existing Memcached or Redis caching implementation to Amazon ElastiCache with little effort. Simply change the Memcached/Redis endpoints in your application.
Before implementing Amazon ElastiCache, let’s get familiar with a few related terms:
Nodes are the smallest building blocks of the Amazon ElastiCache service: fixed-size chunks of secure, network-attached RAM, each with an independent DNS name and port.
Clusters are logical collections of nodes. If your ElastiCache cluster runs Memcached, you can place nodes in multiple Availability Zones (AZs) to implement high availability. In the case of Redis, a cluster is always a single node, but you can have multiple replication groups across AZs.
A Memcached cluster has multiple nodes, with the cached data horizontally partitioned across them. Each node in the cluster can serve both reads and writes.
A Redis cluster, on the other hand, has only one node: the master node. Redis clusters do not support data partitioning. Instead, a replication group can include up to five read-only replica nodes, which maintain copies of the data from the master node, the only writable node. Data is copied asynchronously to the read replicas, and read requests are spread across them. Having read replicas in different AZs enables high availability.
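The read/write split described above can be sketched as follows. The `ReplicationGroupRouter` class is purely illustrative: plain dicts stand in for real Redis connections, and replication is performed synchronously here, whereas ElastiCache replicates asynchronously.

```python
import itertools

class ReplicationGroupRouter:
    """Route writes to the primary and reads across replicas.
    Illustrative only: dicts stand in for Redis connections, and
    replication here is synchronous (ElastiCache's is asynchronous)."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas
        self._next = itertools.cycle(replicas)

    def write(self, key, value):
        self.primary[key] = value        # only the primary is writable
        for replica in self.replicas:    # simulate replication
            replica[key] = value

    def read(self, key):
        return next(self._next).get(key)  # round-robin across replicas

primary = {}
router = ReplicationGroupRouter(primary, [{}, {}])
router.write("session:1", "alice")
print(router.read("session:1"))  # alice
```

Spreading reads this way is what lets a read-intensive application scale: the primary handles only writes while replicas absorb the read traffic.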
Until now we have discussed both caching engines, and I may seem biased toward Redis. So the question is: if Redis is all-powerful, why doesn’t ElastiCache provide only Redis? There are a few good reasons for choosing Memcached:
- It is the simplest caching model.
- It is helpful for people needing to run large nodes with multiple cores or threads.
- It offers the ability to scale out/in, adding and removing nodes on-demand.
- It handles partitioning data across multiple shards.
- It caches objects, such as database query results.
- It may be necessary to support an existing Memcached cluster.
Each node in a Memcached cluster has its own endpoint. The cluster also has an endpoint called the configuration endpoint. If you enable Auto Discovery and connect to the configuration endpoint, your application will automatically know each node endpoint – even after adding or removing nodes from the cluster. The latest version of Memcached supported in ElastiCache is 1.4.24.
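With Auto Discovery, a client sends a `config get cluster` command to the configuration endpoint and receives a version number followed by the node list. A sketch of parsing that payload, assuming the documented `hostname|ip-address|port` entry format (the hostnames below are made up):

```python
def parse_auto_discovery(payload):
    """Parse the node list returned by a Memcached configuration
    endpoint's 'config get cluster' command. The payload format is
    assumed to be a version line followed by space-separated
    'hostname|ip-address|port' entries."""
    lines = payload.strip().splitlines()
    version = int(lines[0])
    nodes = []
    for entry in lines[1].split():
        host, ip, port = entry.split("|")
        nodes.append((host, ip, int(port)))
    return version, nodes

# Hypothetical payload; real hostnames come from your cluster.
payload = (
    "12\n"
    "mycache.0001.use1.cache.amazonaws.com|10.0.0.1|11211 "
    "mycache.0002.use1.cache.amazonaws.com|10.0.0.2|11211"
)
version, nodes = parse_auto_discovery(payload)
print(version)     # 12
print(len(nodes))  # 2
```

A client library with Auto Discovery support re-polls this endpoint periodically, which is how your application keeps an up-to-date node list without redeploying.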
In a Memcached-based ElastiCache cluster, there can be a maximum of 20 nodes where data is horizontally partitioned. If you require more, you’ll have to request a limit increase via the ElastiCache Limit Increase Request form.
You can also upgrade the Memcached engine version. Keep in mind that the upgrade process is disruptive: the cached data in an existing cluster is lost when you upgrade.
Changing the number of nodes in a cluster is only possible for a Memcached-based ElastiCache cluster. However, this operation requires careful design of the hashing technique you use to map keys across the nodes. One of the best techniques is a consistent hashing algorithm.
Consistent hashing uses an algorithm such that whenever a node is added or removed from a cluster, the number of keys that must be moved is roughly 1 / n (where n is the new number of nodes). Scaling from 1 to 2 nodes results in 1/2 (50 percent) of the keys being moved — the worst case. Scaling from 9 to 10 nodes results in 1/10 (10 percent) of the keys being moved. An unsuitable algorithm will result in heavy cache misses, thus increasing the load on a database and defeating the purpose of a caching layer.
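A minimal consistent-hash ring makes the 1/n claim concrete. This sketch uses virtual nodes to keep the key distribution even; it is an illustration of the technique, not ElastiCache’s internal mechanism. Scaling from 9 to 10 nodes should move roughly one tenth of the keys.

```python
import bisect
import hashlib

def ring_hash(value):
    # Stable position on a 2^32 ring.
    return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    """Minimal consistent-hash ring with virtual nodes (illustrative)."""

    def __init__(self, nodes, vnodes=100):
        self.ring = sorted(
            (ring_hash(f"{node}#{v}"), node)
            for node in nodes
            for v in range(vnodes)
        )

    def node_for(self, key):
        # First virtual node clockwise from the key's ring position.
        i = bisect.bisect(self.ring, (ring_hash(key), "")) % len(self.ring)
        return self.ring[i][1]

keys = [f"user:{i}" for i in range(10_000)]
nine = HashRing([f"node-{i}" for i in range(9)])
ten = HashRing([f"node-{i}" for i in range(10)])  # scale out: 9 -> 10 nodes
moved = sum(nine.node_for(k) != ten.node_for(k) for k in keys)
print(f"{moved / len(keys):.1%} of keys moved")
```

Compare this with naive modulo hashing (`hash(key) % n`), where changing `n` remaps almost every key, the heavy cache-miss scenario the paragraph above warns about.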
We have discussed Redis and the replication groups earlier. All things considered, Redis will normally be the better choice:
- Redis supports complex data types, such as strings, hashes, lists, and sets.
- Redis can sort and rank in-memory data sets.
- Redis provides persistence for your key store.
- Redis replicates cache data from the primary to one or more read replicas, for read-intensive applications.
- Redis has automatic fail-over capabilities if the primary node fails.
- Redis has publish and subscribe (pub/sub) capabilities where the client is informed of events on the server.
- Redis has back-up and restore capabilities.
Currently, Amazon ElastiCache supports Redis versions up to 2.8.23. Version 2.8.6 was a significant step up, because a Redis cluster on 2.8.6 or higher can have Multi-AZ enabled. Upgrading the Redis engine is a non-disruptive process, and the cached data is retained.
If you want to persist the cache data, Redis offers AOF (Append Only File) persistence. The AOF file is useful in recovery scenarios: after a node restart or service crash, Redis replays the updates from the AOF file, recovering the lost data. AOF does not help in the event of a hardware failure, however, and AOF operations are slow.
A better approach is to create a replication group with one or more read replicas in different Availability Zones and enable Multi-AZ instead of using AOF. Because AOF is not needed in this scenario, ElastiCache disables it on Multi-AZ replication groups.
All the nodes in a replication group reside in the same region but can span multiple Availability Zones. An ElastiCache replication group consists of a primary cluster and up to five read replicas. If the primary cluster or its Availability Zone fails and your replication group is Multi-AZ enabled, ElastiCache automatically detects the failure, selects a read replica, and promotes it to primary, so you can resume writing as soon as the promotion is complete.
ElastiCache also propagates the DNS name of the promoted replica, so if your application writes to the primary endpoint, no endpoint change is required in your application. Make sure your cache engine is Redis 2.8.6 or higher and your instance types are larger than t1 and t2 nodes.
A Redis cluster also supports backup and restore, which is useful when you want to create a new cluster from existing cluster data.
Amazon ElastiCache offloads the management, monitoring, and operation of caching clusters in the cloud. It includes detailed monitoring via Amazon CloudWatch at no extra cost and is a pay-as-you-go service. I encourage you to use ElastiCache for cloud-based web applications that require split-second response times. I suggest you take a look at a post Michael Sheehy wrote in the spring on Amazon Storage Options, because it offers a broad view of the topic in a very short article. If you want more background in a structured framework, Cloud Academy has a well-regarded course, Storage on AWS, and offers a 7-day free trial.
Cloud Academy also has a strong series of hands-on labs focused on Amazon Web Services, including ElastiCache. These labs are a great way to put what you just learned into practice immediately. Good luck, and let us know what you think. We believe in continuous learning, and your feedback is invaluable.