Integrating Redis Cache and CDN on Azure
This course provides an overview of Redis Cache and how to create a Redis Cache instance in Azure. With Redis Cache deployed in Azure, we’ll then connect an application to the cache.
Next, we’ll walk through the process of storing and retrieving data in Redis Cache. After covering Redis Cache, we’ll walk through an overview of what CDN is and what it’s used for. We’ll then develop some code for leveraging CDN. As we wrap up the course, we’ll cover the process for invalidating data in both Redis Cache and in a CDN.
This course is intended for IT professionals who are interested in earning Azure certification and those who need to incorporate Redis Cache or CDN into their solutions. To get the most from this course, you should have at least a moderate understanding of what caching is and why it’s used.
By the end of this course, you should have a good understanding of what Redis Cache and CDN are and what purposes they serve. You’ll also know how to connect to each from applications and how to purge or invalidate data in both.
- [Narrator] On the screen here, you can see a diagram of exactly how CDN works. Let's put a little context behind it. In step one, User A requests a file or an asset via a URL with a special domain name. Such a domain name might be mydomain.azureedge.net. The domain name can actually be an endpoint hostname or even a custom domain. DNS then routes that request to the best performing point-of-presence, or PoP, which is often the point-of-presence that is geographically closest to the user requesting the content.
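To make the routing idea concrete, here is a minimal Python sketch of "pick the closest point-of-presence." Note this is purely illustrative: real CDN DNS resolution uses anycast and latency measurements rather than a literal geographic lookup, and the PoP names and coordinates below are hypothetical, not actual Azure locations.

```python
import math

# Hypothetical PoP locations (latitude, longitude) -- illustrative only.
POPS = {
    "us-east": (38.9, -77.0),
    "eu-west": (53.3, -6.3),
    "asia-se": (1.35, 103.8),
}

def closest_pop(user_lat, user_lon):
    """Return the name of the PoP with the smallest great-circle distance."""
    def haversine_km(lat1, lon1, lat2, lon2):
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return min(POPS, key=lambda n: haversine_km(user_lat, user_lon, *POPS[n]))
```

For example, a request from New York would map to the "us-east" PoP in this toy model, while a request from London would map to "eu-west".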
Now, if no edge server in that point-of-presence has the requested file in its cache, the PoP requests the file from the origin server. The origin server could be an Azure Web App, an Azure Cloud Service, an Azure Storage account, or essentially any other publicly accessible web server. Next, the origin server returns the requested file to one of the edge servers in the PoP. That edge server then caches the file and returns it to the original requester, which is User A in this case. The file remains cached on that edge server until the time to live, or TTL, specified by its HTTP headers expires. Unless the origin server provides a specific TTL, the default is seven days. Other users can then request the same file by using the same URL that User A used, and may be directed to the same PoP. If the TTL for the file hasn't expired, the PoP edge server returns the file directly from its cache instead of going back to the origin server. The result is a far faster, more responsive experience for the end user.
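The cache-hit/cache-miss flow just described can be sketched in a few lines of Python. This is a toy model of an edge server's behavior, not Azure CDN code: the `EdgeCache` class and `fetch_from_origin` callable are names invented for this example, and the only Azure-specific fact it encodes is the seven-day default TTL used when the origin doesn't supply one.

```python
import time

DEFAULT_TTL_SECONDS = 7 * 24 * 3600  # Azure CDN default TTL: seven days

class EdgeCache:
    """Toy model of a PoP edge server's cache (illustrative only)."""

    def __init__(self, fetch_from_origin):
        # fetch_from_origin: callable(url) -> (body, ttl_seconds_or_None)
        self._fetch = fetch_from_origin
        self._cache = {}  # url -> (body, expires_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        entry = self._cache.get(url)
        if entry and now < entry[1]:
            return entry[0], "HIT"          # TTL not expired: serve from cache
        body, ttl = self._fetch(url)        # miss or expired: go to origin
        ttl = DEFAULT_TTL_SECONDS if ttl is None else ttl
        self._cache[url] = (body, now + ttl)
        return body, "MISS"
```

A first request for a URL is a MISS and hits the origin; repeat requests within the TTL are HITs served entirely from the edge, which is exactly why subsequent users see the faster response.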
About the Author
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40,000 seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.