In addition to the many services covered on the AWS Certified Cloud Practitioner exam, you should be familiar with concepts and best practices designed to help AWS users succeed with cloud computing, and understand how AWS structures its services across the globe.
This course begins with a lecture covering the different types of AWS global infrastructure, which include regions, availability zones, edge locations, and regional edge caches. What we’re talking about here is AWS data center hardware and how it is organized around the world. Understanding how AWS organizes its infrastructure, how it works, and how to use it to your benefit is essential AWS knowledge.
Next, we discuss the AWS Well-Architected Framework, a set of best practices established by experienced AWS solutions architects. To be clear, knowledge of how to technically configure well-architected solutions is outside the scope of the AWS Certified Cloud Practitioner exam. However, you should be familiar with the fundamental best practices of cloud architecture, which we will introduce in this course.
Finally, we discuss basic techniques for disaster recovery. There are well-established methods for restoring AWS services in the unlikely event of an outage. This course will not discuss the step-by-step process of disaster recovery, which is addressed in other courses. Instead, it will provide an overview of each method and how each one balances the competing business needs of high availability and cost optimization.
Learning Objectives
- Understand how the different components of AWS global infrastructure work and how they can impact AWS cloud solutions
- List and describe the five pillars of the AWS Well-Architected Framework
- Summarize the standard disaster recovery methods, and how a business would select a method based on its service needs
Intended Audience
This course is designed for:
- Anyone preparing for the AWS Certified Cloud Practitioner exam
- Managers, sales professionals, and other non-technical roles
Prerequisites
Before taking this course, you should have a general understanding of basic cloud computing concepts.
Feedback
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Hello, and welcome to this final lecture where I'll be highlighting the key points from the previous lectures.
I started this course by talking about the AWS Global Infrastructure. And here we learned about availability zones, regions, edge locations, and regional edge caches. And the key points from these four items were as follows.
Firstly, looking at availability zones. These are physical data centers of AWS. And it's where the actual compute, storage, network, and database resources are hosted. It's likely that multiple data centers located close together form a single availability zone. And each availability zone will have at least one other AZ in the same region. They are linked together by highly resilient, very low-latency private fiber-optic connections. And each AZ is isolated from all others. This localized grouping of multiple availability zones is defined as an AWS region. And multiple AZs within a region allow you to create highly available and resilient applications and services.
Next, we moved on to regions, and these are a collection of availability zones that are geographically located close to one another. And regions are deployed all across the globe. Every region acts independently of every other region. And every region will also have at least two availability zones. Using multiple regions helps with compliance with regulations, laws, and governance relating to data storage. Utilizing multiple regions also creates a high level of availability. However, not all AWS services are available in every region.
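As a brief, hedged illustration (not part of the original lecture), the AWS SDK for Python, boto3, can list the regions available to an account and the availability zones within each one; the region name used to create the first client is just an example:

```python
# Minimal sketch using boto3 (the AWS SDK for Python). Assumes AWS credentials
# are already configured, e.g. via environment variables or ~/.aws/credentials.
import boto3

# List every region currently available to the account.
ec2 = boto3.client("ec2", region_name="us-east-1")  # example region
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# Each region exposes at least two availability zones.
for region in regions:
    regional_ec2 = boto3.client("ec2", region_name=region)
    zones = regional_ec2.describe_availability_zones()["AvailabilityZones"]
    print(region, "->", [z["ZoneName"] for z in zones])
```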
Following regions, we looked at edge locations and regional edge caches. Edge locations are AWS sites deployed in major cities and highly populated areas. And they are not used to deploy your main infrastructure. Instead, they are used by AWS services, such as Amazon CloudFront, to cache data and reduce latency for end-user access. Regional edge caches sit between your CloudFront origin servers and the edge locations. And they contain a larger cache than each of the individual edge locations, so data is retained at the regional edge caches for longer than at the edge locations. And when needed, the edge locations can retrieve cached data from the regional edge cache instead of the origin servers to reduce latency.
Next, we focused on a number of different AWS disaster recovery strategies, starting off by defining what is meant by RTO and RPO.
RTO, or recovery time objective, is the time it takes after a disruption to restore a business process to its service level. And RPO, or recovery point objective, is the acceptable amount of data loss measured in time. For example, if backups are taken every four hours and a full restore takes two hours, the worst-case RPO is four hours of lost data and the RTO is two hours.
We then looked at the different disaster recovery strategies, and these included four distinct methods.
Backup and restore. Data is backed up to an AWS storage service, such as Amazon S3. Data can be imported into AWS using a variety of options, such as Storage Gateway, AWS Snowball, Direct Connect, VPN, or the internet. And in the event of a disaster, archives can be recovered from Amazon S3, and the data can then be restored directly to cloud resources.
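As a minimal sketch of this backup and restore flow (the bucket, key, and file paths below are hypothetical placeholders), boto3 can copy a backup archive into Amazon S3 and pull it back down during a recovery:

```python
# Backup-and-restore sketch with boto3 and Amazon S3.
# Bucket, key, and file names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

# Back up: copy a local archive into S3.
s3.upload_file(
    "/backups/db-2024-01-01.tar.gz",   # local file
    "example-dr-backup-bucket",        # bucket
    "db/db-2024-01-01.tar.gz",         # object key
)

# Restore: after a disaster, pull the archive back down so it can be
# restored onto newly launched cloud resources.
s3.download_file(
    "example-dr-backup-bucket",
    "db/db-2024-01-01.tar.gz",
    "/restore/db-2024-01-01.tar.gz",
)
```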
Pilot light. This method incorporates the mirroring of data, such as your database store. And the environment can be scripted as a template using CloudFormation. In the event of a disaster, resources can be scaled up and out as and when needed. Instances can be launched using Amazon Machine Images. And the database can be resized to handle production data as required.
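The launch step of a pilot light might look something like the following sketch; the stack name, template URL, and AMI ID are hypothetical, and the ongoing data mirroring is not shown:

```python
# Pilot-light recovery sketch using boto3. Assumes the environment has been
# scripted ahead of time as a CloudFormation template and a pre-built AMI.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch the pre-scripted environment (networking, load balancing, etc.)
# from a template stored in S3 (hypothetical URL).
cloudformation.create_stack(
    StackName="dr-pilot-light",
    TemplateURL="https://s3.amazonaws.com/example-dr-bucket/pilot-light.yaml",
)

# Scale out application servers from a pre-baked Amazon Machine Image
# (hypothetical AMI ID) as and when they are needed.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    MinCount=2,
    MaxCount=2,
)
```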
Next was Warm Standby. Again, the mirroring of data is used between your on-premises data center and the AWS Cloud. Warm Standby is essentially ready to go, with all key services running in the most minimal way possible; essentially, a smaller version of the production environment. And in the event of a disaster, the standby environment will be scaled up to production mode quickly and easily. The DNS records will also be changed to route all traffic to the AWS environment.
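The scale-up step could be sketched as below, assuming the standby fleet sits behind an Auto Scaling group (the group name and sizes are hypothetical); the DNS switch itself is the same kind of Route 53 update sketched after the Multi-Site summary:

```python
# Warm-standby sketch: grow the minimal standby environment to production
# capacity. The Auto Scaling group name and sizes are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="standby-web-asg",
    MinSize=4,
    DesiredCapacity=8,
    MaxSize=12,
)
```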
Multi-Site. In Multi-Site, the AWS environment is a complete duplicate of your production environment. And in the event of a disaster, traffic is redirected over to the AWS solution by updating the DNS record in Route 53. All traffic and supporting data queries are then handled by your AWS environment. A Multi-Site scenario is usually the preferred one; however, it does come at an increased cost due to the amount of resources required, but it does have the lowest RTO and RPO.
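The DNS redirection can be sketched as a Route 53 record update; the hosted zone ID, record name, and load balancer DNS name below are hypothetical placeholders:

```python
# Redirect traffic to the AWS environment by updating a DNS record in
# Route 53. All identifiers below are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0EXAMPLE12345",
    ChangeBatch={
        "Comment": "Fail over to the AWS environment",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "aws-prod-lb.us-east-1.elb.amazonaws.com"}
                ],
            },
        }],
    },
)
```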
Finally, I addressed the Well-Architected Framework. In this lecture, we learned that it offers a set of guidelines and questions that allow you to consistently follow best practices from a design, reliability, security, cost-effectiveness, and efficiency perspective. And there are five pillars that the framework is built and based upon, each with a number of best practices and design principles. These being operational excellence, security, reliability, performance efficiency, and cost optimization.
The operational excellence pillar is based upon running and monitoring systems to help optimize and deliver value to the business, and to aid in supporting, improving, and maintaining the processes and procedures that support your AWS infrastructure.
The security pillar defines how to manage and secure your infrastructure and protect your data by focusing on confidentiality, data integrity, access management, and other security controls, while ensuring risk assessment and mitigation is built into your solutions.
The reliability pillar looks at how to maintain stability of your environment and recover from outages and failures in addition to automatically and dynamically meeting resourcing demands put upon your infrastructure.
The performance efficiency pillar is dedicated to ensuring you have the correctly specified resources to efficiently meet the demands of your customers, by monitoring performance and adapting your infrastructure as requirements change based on workloads.
And finally, the cost optimization pillar is used to help you reduce your cloud costs by understanding where it's possible to optimize your spend through a variety of means. That has now brought me to the end of this lecture, and to the end of this course.
Hopefully, you now have a greater understanding of the AWS infrastructure, including its components, DR strategies and best practices. If you have any feedback on this course, positive or negative, please do contact us at support@cloudacademy.com. Your feedback is greatly appreciated.
Thank you for your time, and good luck with your continued learning of Cloud computing. Thank you.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.