Summary - Solution Architect Associate Learning Path
A summary of the content we have covered in the Solution Architect Associate for AWS Learning Path
About the Author
Andrew is an AWS certified professional who is passionate about helping others learn how to use and benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside work, his passions are cycling, surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.
- [Andrew] Hello, and welcome back. Let's summarize what we have learned during this learning path. We started by learning to recognize and explain some of the terminology we are expected to know for the Solutions Architect - Associate exam. We then dived right into compute fundamentals on AWS, standing up our first instance in our EC2 lab, before moving on to Storage Fundamentals, where we were introduced to the various storage options provided by AWS: Amazon S3 object storage, Amazon Elastic Block Store, which behaves like a disk drive, and the Amazon CloudFront content delivery network. We learned to deploy and work with Amazon DynamoDB, Amazon Aurora, the Amazon Relational Database Service, Amazon Redshift, and the Amazon Elastic File System in hands-on labs.

Next, we dived in and learned all about AWS management services: AWS CloudTrail, AWS Config, AWS Trusted Advisor, Amazon CloudWatch, and the AWS Personal Health Dashboard. We then took a hands-on tour of the Amazon Virtual Private Cloud, learning all about CIDR blocks, subnetting, routing, security, and how to connect a virtual private network to our VPC.

In Domain One, we introduced the concepts of high availability and fault tolerance, and we learned how to recognize and explain how we go about designing highly available, fault-tolerant solutions on AWS. We introduced and explained the concept of business continuity, the concepts of a recovery point objective and a recovery time objective, and how AWS services can be effective enablers when designing for disaster recovery. We then learned to recognize and explain the core AWS services that, when used together, can reduce single points of failure and improve scalability in a multi-tiered solution. In our hands-on labs, we created and worked with auto scaling groups to improve elasticity and durability.
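As an aside, the recovery point objective mentioned above can be made concrete with a little arithmetic. This is a minimal Python sketch of the reasoning, not part of the course labs, and the backup intervals in it are hypothetical:

```python
# Hypothetical illustration of a recovery point objective (RPO).
# The intervals below are made-up examples, not course material.

def worst_case_data_loss_hours(backup_interval_hours: float) -> float:
    """If backups run every N hours, the worst-case data loss
    (the effective recovery point) is the full interval."""
    return backup_interval_hours

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    """A backup schedule satisfies an RPO when the worst-case
    loss window does not exceed the objective."""
    return worst_case_data_loss_hours(backup_interval_hours) <= rpo_hours

# Nightly backups (every 24 hours) cannot meet a 4-hour RPO,
# but hourly backups can.
print(meets_rpo(24, 4))  # False
print(meets_rpo(1, 4))   # True
```

The recovery time objective works the same way, except the thing being bounded is how long restoration takes rather than how much data is lost.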
We worked with the Amazon Simple Queue Service, a core service, as it can increase resilience by acting as a messaging service between services and applications, thereby enabling us to decouple layers and reduce dependency on state. Next, we had a hands-on introduction to Amazon CloudWatch, the monitoring service. Amazon CloudWatch is the eyes and ears of your environment, if you like, and it's an important component when designing a resilient architecture. Amazon CloudWatch alarms and triggers can increase resilience by allowing you to react to changes and events. You can automate auto scaling based on predefined CloudWatch performance metrics, for example.

In Domain Two, we extended our understanding of how we select and use AWS services together to create performant and scalable solutions. We learned how to recognize and select the best storage options for on-premises backup and disaster recovery scenarios. We learned how to launch an auto scaling group behind an Elastic Load Balancer to increase scalability and elasticity. We learned more about DynamoDB, and configured DynamoDB triggers with AWS Lambda to automate an environment using these managed services. We then practiced processing Simple Notification Service notifications with the AWS Lambda service, another great combination of AWS services that can be automated to increase performance and scalability. We learned how to combine AWS CloudTrail with Amazon CloudWatch to trigger alarms or auto scaling events. Okay, let me hand over to Stuart Scott, who will remind us of what we learned in Domain Three.
- [Stuart] To recap, the knowledge and understanding requirements of this domain were as follows: determine how to secure application tiers, determine how to secure data, and define the networking infrastructure for a single VPC application. Throughout this domain, we had a mix of course content and hands-on labs to help you put into practice some of the mechanisms and methods described.

I started off by introducing you to the Identity & Access Management service, known as IAM. This service is used to manage identities and their permissions to access AWS resources, and so understanding how this service works and what you can do with it will help you maintain a secure AWS environment. IAM is an important step in ensuring your resources are secure. In this course, we learned how to set up and configure users, groups, and roles to control which identities have authorization to access specific AWS resources. We also looked at how to implement multi-factor authentication and how to create and implement IAM policies that allow you to grant and restrict very granular and specific permissions across a range of resources. We looked at how to implement a password policy to align with your internal security controls, and at ways we could use identity federation to control access to your resources. And then finally, we looked at the Key Management Service, or KMS, and how it's used in conjunction with Identity & Access Management.

Following this, you had the opportunity to get hands-on with a lab, which guided you through how to create and manage IAM users using groups and policies to securely control access to AWS services and resources. I then focused more on the authentication, authorization, and accounting side of things within AWS, which provided an understanding of the different security controls and how they can help you design the correct level of security for your infrastructure.
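To give a flavor of the granular permissions discussed above, here is a minimal sketch of an IAM policy document, built as a Python dictionary. The bucket name and statement are hypothetical examples of my own, not taken from the course labs:

```python
import json

# A hypothetical IAM policy granting read-only access to a single
# S3 bucket. "example-bucket" is an illustrative name only.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadOnlyAccessToOneBucket",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

# IAM consumes policies as JSON documents.
policy_json = json.dumps(policy, indent=2)
print(policy_json)
```

The point to notice is how narrow the grant is: two actions, one bucket. Anything not explicitly allowed is implicitly denied, which is the behavior IAM defaults to.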
Once an identity has been authenticated and is authorized to perform specific functions, it's then important that this access can be tracked with regards to usage and resource consumption so that it can be audited, accounted, and billed for. In this course, we learned the differences between authentication, authorization, and access control; the different authentication mechanisms used by AWS; the different methods of granting authorized access to different AWS resources; and how a combination of authentication and authorization mechanisms can be used to create solid security policies. We also looked at how AWS billing can be used to spot security breaches, and finally how to track a user within AWS and monitor their actions through audited API requests.

Following this course, and again to help solidify the theory, you completed another hands-on lab, which guided you through some of the best practices when managing roles and groups within Identity & Access Management. The next course guided you through more security best practices, around some of the most common container and abstract services. Understanding these security implications, and the division of responsibility between you and AWS, enables you to adopt and implement the correct level of security within your infrastructure. In this course, you gained an understanding of the difference between container and abstract services within AWS and how security is managed differently between the two, along with an awareness of how data can be protected at rest and in transit for different services. You gained a comprehension of the importance of network design in increasing the security of abstract or container-based services, and finally the ability to apply the correct level of security to your services depending on their classification, container or abstract, using security features from other AWS services as well as each service's built-in protection.
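On the point about protecting data in transit, one common mechanism is a bucket policy that denies any request not made over TLS, using the `aws:SecureTransport` condition key. The following is an illustrative sketch of my own, not a lab exercise, and "example-bucket" is a made-up name:

```python
import json

# Hypothetical S3 bucket policy: deny every S3 action on the bucket
# when the request is not made over TLS (aws:SecureTransport is false).
transit_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(transit_policy, indent=2))
```

Because an explicit Deny always wins over any Allow, this statement enforces encryption in transit regardless of what other permissions exist.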
Following this course, you then learned about the Key Management Service, or KMS, which is a service that allows you to easily encrypt your data with protected keys, preventing confidential data from being exposed. The service is fully managed and regionally based, making it highly available, with full auditing functions, to encrypt your data within your applications. From this course, you learned how to create a customer master key, or CMK, how to encrypt EBS volumes, S3 objects, and RDS storage, and how to audit the use of encryption.

Next up was another hands-on lab, this time looking at Amazon CloudWatch and how you can use this service to monitor for specific security-related events. Within this lab, you learned how to use CloudWatch to monitor a log stream for specific patterns, in this case invalid SSH attempts, and send a notification via the Simple Notification Service (SNS).
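The pattern-matching idea behind that CloudWatch Logs lab can be illustrated locally. This Python sketch of mine scans made-up sshd log lines for the same kind of "Invalid user" pattern a metric filter would match; in the real lab CloudWatch does the matching and an alarm publishes to SNS:

```python
import re

# Illustrative stand-in for a CloudWatch Logs metric filter pattern.
# The log lines below are fabricated examples of typical sshd output.
INVALID_SSH = re.compile(r"Invalid user (\S+) from (\S+)")

log_lines = [
    "Jan 10 10:01:22 host sshd[123]: Accepted publickey for alice from 10.0.0.5",
    "Jan 10 10:02:03 host sshd[124]: Invalid user admin from 203.0.113.7",
    "Jan 10 10:02:09 host sshd[125]: Invalid user root from 203.0.113.7",
]

# Collect (username, source address) for each invalid attempt.
matches = [m.groups() for line in log_lines if (m := INVALID_SSH.search(line))]
print(matches)  # the suspicious attempts that would increment the metric
```

In CloudWatch terms, each match would increment a custom metric, and an alarm threshold on that metric would trigger the SNS notification.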
- [Andrew] Excellent, thank you Stuart. In Domain Four, we looked at how to design cost-optimized compute services and how to design cost-optimized storage services. So let's review what we covered.

First, in optimized compute, we learned about the various purchasing options available to us. On-Demand is pay-as-you-go compute, where you purchase at a price set per instance type. With Spot Instances, you bid for spare compute capacity, which suits non-time-critical tasks such as number crunching, analysis, or big-data jobs, and applications with flexible start and end times. With Reserved Instances, we get a discounted rate per instance type by making an up-front commitment and spreading the cost over one or three years; Reserved Instances suit applications with steady or predictable usage. Then there were Scheduled Instances, which are like Reserved Instances, except the capacity is reserved on a recurring schedule. We also saw how combining instance types with placement groups can improve performance and cost. Next, we reviewed consolidated billing and how, by consolidating accounts together, we can reduce overall usage costs across an organization.

Then we learned all about how to optimize storage. We learned to recognize when to apply one storage solution over another, and how a combination of storage options, such as Amazon S3, AWS Snowball, and Amazon CloudFront, and features such as transfer acceleration and randomizing our index keys, can help optimize storage performance. We first learned to recognize and explain the appropriate Amazon S3 storage classes: the Standard class, the Infrequent Access class, the Reduced Redundancy class, and Amazon Glacier. So we learned how to determine which storage class would best suit any given storage use case. Standard storage provides the highest level of durability and availability and so suits most workloads.
The Infrequent Access storage class provides lower levels of availability at a lower price, and so has a minimum file size and storage duration threshold. Reduced Redundancy storage offers lower durability at a very low price, thus suiting storage of copies of older archives. Now, Amazon Glacier is our cold store, designed for storing backups and tape archives, anything where recovery time is not a constraint. And keep in mind, all Amazon S3 storage classes support encryption.

We then learned to recognize and explain ways we could optimize the speed of accessing data stored in Amazon S3 object stores. Transfer acceleration improves the transfer of files over the public internet, and so it can speed up regular transfers of objects between regions or when transferring files to a centralized bucket. For GET-heavy workloads, Amazon CloudFront can improve latency and reduce the load on your Amazon S3 buckets. For mixed request workloads, e.g. GET, PUT, or LIST requests made at over a hundred requests per second, creating a hexadecimal hash and adding it to your bucket or object names can introduce randomness into your key name prefixes. The key names, and therefore the I/O load, are then distributed across more than one partition within the region by Amazon S3.

Finally, in Domain Five, we designed a highly available solution on AWS using AWS services. We added auto scaling, elastic load balancing, Amazon CloudWatch monitoring, and the Amazon Aurora database to enable operational excellence in our design. We then learned to recognize and explain some of the AWS deployment services available by completing two hands-on labs. First, we deployed a highly available solution using AWS CloudFormation. Then we ran a controlled deployment using AWS Elastic Beanstalk.

So that brings this section of the Solutions Architect - Associate learning path to a close. However, we still have more fantastic preparation tools to get you ready for your exam. First off, we have a series of review cards in the following chapter.
Now these cards summarize what we have covered and provide you with a quick and effective way to recognize and remember the key concepts, services, and design principles for the exam. We also have a preparation exam which gives you an opportunity to identify any areas you may need to improve on before sitting the certification exam. Okay, ninjas, you are ready to go and sit this exam and ace it. We are here to help, so please feel free to reach out to us if you need any help or guidance in getting ready for this certification exam. And please come back and let us know how you went. We love hearing how you aced the exam.