In addition to the many services covered on the AWS Certified Cloud Practitioner exam, you should be familiar with concepts and best practices designed to help AWS users succeed with cloud computing, and understand how AWS structures its services across the globe.
This course begins with a lecture covering the different components of AWS global infrastructure: regions, availability zones, edge locations, and regional edge caches. What we’re talking about here is AWS data center hardware and how it is organized around the world. Understanding how AWS organizes its infrastructure, and how to use it to your benefit, is essential AWS knowledge.
Next, we discuss the AWS Well-Architected Framework, a set of best practices established by experienced AWS solutions architects. To be clear: knowledge of how to technically configure well-architected solutions is outside the scope of the AWS Certified Cloud Practitioner exam. However, you should be familiar with the fundamental best practices of cloud architecture, which we will introduce in this course.
Finally, we discuss basic techniques for disaster recovery. There are well-established methods for restoring AWS services in the unlikely event of an outage. This course will not cover the step-by-step process of disaster recovery, which is addressed in other courses. Instead, it provides an overview of each method and how each one balances the competing business needs of high availability and cost optimization.
- Understand how the different components of AWS global infrastructure work and how they can impact AWS cloud solutions
- List and describe the five pillars of the AWS Well-Architected Framework
- Summarize the standard disaster recovery methods and how a business would select a method based on its service needs
This course is designed for:
- Anyone preparing for the AWS Certified Cloud Practitioner exam
- Managers, sales professionals, and other non-technical roles
Before taking this course, you should have a general understanding of basic cloud computing concepts.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Hello and welcome to this lecture covering the AWS Global Infrastructure.
Amazon Web Services is a global public cloud provider, and as such, it needs a global network of infrastructure to run and manage its growing cloud services that support customers around the world. This global network comprises a number of key components: availability zones, regions, edge locations, and regional edge caches.
If you're deploying services on AWS, you want to have a clear understanding of each of these components, how they are linked, and how you can use them within your solution to your maximum benefit. Let's take a closer look, starting with availability zones.
Availability zones and regions are closely related. Availability zones, commonly referred to as AZs, are essentially the physical data centers of AWS. This is where the actual compute, storage, network, and database resources that we as consumers provision within our virtual private clouds (VPCs) are hosted.
A common misconception is that a single availability zone equals a single data center. This is not the case; in fact, it's likely that multiple data centers located close together form a single availability zone. Each availability zone will always have at least one other availability zone geographically located within the same area, usually a city, and they are linked by highly resilient, very low latency private fiber optic connections. However, each AZ is isolated from the others with separate power and network connectivity, which minimizes the impact to other AZs should a single AZ fail. Many AWS services use these low latency links between AZs to replicate data for high availability and resiliency purposes.
For example, when RDS, the Relational Database Service, is configured for Multi-AZ deployments, AWS uses synchronous replication between its primary and standby databases, and asynchronous replication for any read replicas that have been created.
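As a minimal sketch of how a Multi-AZ deployment is requested in practice, the snippet below builds the parameters a boto3 `create_db_instance` call would take, with `MultiAZ` enabled. The instance identifier, class, and engine values are hypothetical placeholders, and the actual call (`rds.create_db_instance(**params)`) is commented out because it requires AWS credentials.

```python
# Sketch: requesting a Multi-AZ RDS deployment via boto3 (parameters only).
# All names below are hypothetical placeholders.
params = {
    "DBInstanceIdentifier": "example-db",   # hypothetical identifier
    "Engine": "mysql",
    "DBInstanceClass": "db.t3.micro",
    "AllocatedStorage": 20,
    "MultiAZ": True,  # AWS provisions a synchronous standby in a different AZ
}

# With credentials configured, you would run:
# import boto3
# rds = boto3.client("rds")
# rds.create_db_instance(**params)
print(params["MultiAZ"])
```

Setting `MultiAZ` to `True` is all that is needed to ask AWS to maintain the synchronously replicated standby described above; failover between the primary and standby is then managed by AWS.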
Often, there are three, four, or even five AZs linked together via these low latency connections. This localized geographical grouping of multiple AZs, which could include multiple data centers, is defined as an AWS region. Having multiple AZs within a region allows you to create highly available and resilient applications and services: architecting your solutions to utilize resources across more than one AZ ensures that minimal or no impact will occur to your infrastructure should an AZ experience a failure.
Anyone can deploy resources in the cloud, but architecting them in a way that ensures your infrastructure remains stable, available, and resilient when faced with a disaster is a different matter. Making use of at least two AZs in a region helps you maintain high availability of your infrastructure, and it is always a recommended best practice.
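The multi-AZ placement idea above can be sketched as a simple round-robin: spread your instances evenly across the AZs of a region so that no single AZ failure takes down the whole fleet. The AZ names follow the real eu-west-1 naming convention; the instance names are hypothetical.

```python
# Sketch: distributing instances evenly across the AZs of a region,
# so a single AZ failure affects only part of the fleet.
from itertools import cycle

def distribute(instances, azs):
    """Assign each instance to an AZ in turn (round-robin)."""
    assignment = {}
    az_cycle = cycle(azs)
    for instance in instances:
        assignment[instance] = next(az_cycle)
    return assignment

placement = distribute(
    ["web-1", "web-2", "web-3", "web-4"],       # hypothetical instance names
    ["eu-west-1a", "eu-west-1b"],               # two AZs in the Ireland region
)
print(placement)
# web-1 and web-3 land in eu-west-1a; web-2 and web-4 in eu-west-1b
```

If eu-west-1a were to fail, web-2 and web-4 would keep serving traffic from eu-west-1b, which is exactly the resiliency benefit the lecture describes.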
Regions. As we now know, a region is a collection of availability zones that are geographically located close to one another, generally within the same city. AWS has deployed regions across the globe to allow its worldwide customer base to take advantage of low latency connectivity. Every region acts independently of the others, and each contains at least two availability zones. For example, if an organization based in London was serving customers throughout Europe, there would be no logical sense in deploying services in the Sydney region, simply due to the latency response times for its customers. Instead, the company would select the region most appropriate for it and its customer base, which might be the London, Frankfurt, or Ireland region.
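The region choice described above often comes down to picking the region with the lowest latency to your customer base. A minimal sketch, where the latency figures are entirely hypothetical illustrations (only the region codes are real):

```python
# Sketch: choosing the region with the lowest measured latency to customers.
# The millisecond figures are hypothetical; the region codes are real.
measured_latency_ms = {
    "eu-west-2": 12,       # London
    "eu-west-1": 18,       # Ireland
    "eu-central-1": 25,    # Frankfurt
    "ap-southeast-2": 290, # Sydney
}

best_region = min(measured_latency_ms, key=measured_latency_ms.get)
print(best_region)  # eu-west-2
```

In practice the decision also weighs service availability, pricing, and data residency requirements, not latency alone, as the following paragraphs explain.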
Having global regions also allows for compliance with regulations, laws, and governance relating to data storage at rest and in transit. For example, you may be required to keep all data within a specified location, such as Europe. Having multiple regions within this location allows an organization to meet this requirement. Similar to how utilizing multiple AZs within a region creates a level of high availability, the same can be applied to utilizing multiple regions. Depending on the level of business continuity you require, you may choose to architect your AWS environment to support your applications and services across multiple regions, should an entire region become unavailable, perhaps due to a natural disaster.
You may want to use multiple regions if you are a global organization serving customers in different countries that have specific laws and governance about the use of data. In this case, you could even connect VPCs together across different regions. The number of regions is increasing year after year as AWS works to keep up with the demand for cloud computing services. Interestingly, not all AWS services are available in every region. This is a consideration that must be taken into account when architecting your infrastructure. Some services are classed as global services, such as AWS Identity and Access Management (IAM) or Amazon CloudFront, which means they are not tied to a specific region. However, most services are region specific, and it's down to you to understand which services are available within which region. The link on the screen provides a definitive list of all services and the regions where they operate. This list is constantly being updated as more and more services become available in different regions.
AWS has a specific naming convention for both regions and availability zones. Depending on where you are viewing and using it, the same region can be represented by two different names. Regions have both a friendly name, indicating a location, that can be viewed within the Management Console, and a code name that is used when referencing regions programmatically, for example when using the AWS CLI. As you can see in this example, the friendly name in the first column is easier to associate with a location than the code name.
Availability zones are always referenced by their code name, which is the code name of the region the AZ belongs to, followed by a letter. For example, the AZs within the eu-west-1 region, which is EU (Ireland), are eu-west-1a, eu-west-1b, and eu-west-1c.
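Because of this convention, the region code can be recovered from an AZ code simply by dropping the trailing letter. A tiny helper illustrating the pattern (the helper name is my own, not an AWS API):

```python
# Sketch: deriving a region code from an AZ code per the naming convention
# described above (region code + trailing letter).
def region_of(az_code: str) -> str:
    """Strip the single trailing letter, e.g. 'eu-west-1a' -> 'eu-west-1'."""
    return az_code[:-1]

for az in ["eu-west-1a", "eu-west-1b", "eu-west-1c"]:
    print(az, "->", region_of(az))  # all three map to eu-west-1
```

This is handy when, for example, grouping a list of AZ names returned by the AWS CLI back into their parent regions.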
Edge locations are AWS sites deployed in major cities and highly populated areas across the globe, and they far outnumber availability zones. While edge locations are not used to deploy your main infrastructure, such as EC2 instances, EBS storage, VPCs, or RDS resources, the way availability zones are, they are used by AWS services such as Amazon CloudFront to cache data and reduce latency for end-user access, with the edge locations acting as a global content delivery network (CDN). As a result, edge locations are primarily used by end users who are accessing and using your services.
For example, you may have your website hosted on EC2 instances and S3 as your origin within the Ohio region, associated with a CloudFront distribution. When a user accesses your website from Europe, they would be redirected to their closest edge location within Europe, where cached data for your website could be read, significantly reducing latency. To understand more about how Amazon CloudFront achieves this, you can take a look at our existing courses and labs on this service: Working with Amazon CloudFront, how to Serve your files using the CloudFront CDN, and how to Configure a Static Website with S3 And CloudFront.
In November 2016, AWS announced a new type of edge location called a regional edge cache. These sit between your CloudFront origin servers and the edge locations. A regional edge cache has a larger cache width than an individual edge location, and because data expires from the cache at the edge locations, it is retained for longer at the regional edge cache. Therefore, when data that is no longer available is requested at an edge location, the edge location can retrieve the cached data from the regional edge cache instead of the origin servers, which would incur higher latency. Understanding what each of these components allows you to do will help you architect a resilient, highly available, secure, and low latency solution for you and your customers.
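The lookup order just described can be sketched as a two-tier cache: an edge location checks its own small cache first, then the regional edge cache, and only falls back to the origin on a double miss. The object paths and cache contents below are hypothetical, and this simplified model ignores TTLs and eviction.

```python
# Sketch: two-tier CDN lookup (edge cache -> regional edge cache -> origin).
# Contents are hypothetical; real CloudFront behavior also involves TTLs.
def fetch(key, edge_cache, regional_cache, origin):
    if key in edge_cache:
        return edge_cache[key], "edge"           # fastest: served at the edge
    if key in regional_cache:
        edge_cache[key] = regional_cache[key]    # repopulate the edge cache
        return edge_cache[key], "regional"       # still low latency
    value = origin[key]                          # double miss: highest latency
    regional_cache[key] = value
    edge_cache[key] = value
    return value, "origin"

edge, regional = {}, {"/logo.png": b"png-bytes"}
origin = {"/logo.png": b"png-bytes", "/index.html": b"html"}

print(fetch("/logo.png", edge, regional, origin)[1])  # served from the regional edge cache
print(fetch("/logo.png", edge, regional, origin)[1])  # now cached at the edge
```

The first request misses at the (empty) edge but hits the regional edge cache, avoiding the trip back to the origin; the second request is then served directly from the edge.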
That has now brought me to the end of this lecture. Coming up next is the topic of AWS disaster recovery strategies.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ cloud-related courses reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.