Solution Architect Professional for AWS - Domain Three - Deployment Management

Implementing The Right Architecture

Overview
Difficulty: Advanced
Duration: 1h 44m
Students: 1,243

Course Description

In this course, you'll gain a solid understanding of the key concepts for Domain Three of the AWS Solutions Architect Professional certification: Deployment Management.

Course Objectives

By the end of this course, you'll have the tools and knowledge you need to meet the following requirements for this domain:

  • Manage the lifecycle of an application on AWS.
  • Implement the right architecture for different environments.
  • Select the most appropriate AWS deployment mechanism for a given scenario.
  • Design a loosely coupled system.

Intended Audience

This course is intended for students seeking to acquire the AWS Solutions Architect Professional certification. It is necessary to have acquired the Associate level of this certification. You should also have at least two years of real-world experience developing AWS architectures.

Prerequisites

As stated previously, you will need to have completed the AWS Solutions Architect Associate certification, and we recommend reviewing the relevant learning path in order to be well-prepared for the material in this course.

This Course Includes

  • Expert-led instruction and exploration of important concepts.
  • 50 minutes of high-definition video. 
  • Complete coverage of critical Domain Three concepts for the AWS Solutions Architect - Professional certification exam.

What You Will Learn

  • Planning a Deployment
  • Meeting Requirements
  • Implementing the Right Architecture
  • Selecting the Appropriate AWS Deployment
  • Project Review
  • AWS Deployment Services

Transcript

The second phase of development requires that we build a website for our customers to manage their accounts and print jobs. We can leverage the solution we built in phase one to do the bulk of the work, so let's get started. In building the web application, we can use almost any language we want, as long as an AWS SDK is available for it. AWS already supports many programming languages, such as Java, C#, and Node.js. Additionally, our choice will be a stack that can run under Linux, since the cost of running an EC2 instance with Linux is less than the other operating systems as of today. So per our constraints, the lower cost is a must. From the web application, we will allow customers to create a new account if they have never used our service before. An interaction like this usually involves sending a confirmation email to verify the customer is who they claim to be. For this, we can use Amazon Simple Email Service. Amazon Simple Email Service requires that we verify our domain first. We need to move our domain over to Amazon Route 53, which is very easy to do with a few steps split across AWS and our domain registrar, if it isn't already with AWS. Once done, we go through the verification process by adding a TXT record with the value provided in the verification step. After verification, we can begin to send emails out of our web application to our users. Now, one of our goals when running a web application is to offload unnecessary traffic whenever possible. With AWS, we can move static resources to Amazon S3. This lets the web client pull images, style sheets, and JavaScript files directly from S3 without hitting the web server for each of those requests. So we will create an Amazon S3 bucket specifically for these resources, and make read access to the Amazon S3 bucket open to the public. Now, could we take this a step further and implement this through CloudFront? Yes, of course we could. 
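The public-read setup described above comes down to attaching a bucket policy. Here's a minimal sketch of what such a policy document could look like; the bucket name is a made-up placeholder, not from the course:

```python
import json

def public_read_policy(bucket_name):
    """Build an S3 bucket policy allowing anonymous read access to every
    object in the bucket, so browsers can fetch static assets directly.
    The bucket name is supplied by the caller; this example uses a
    hypothetical one."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadForStaticAssets",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
            }
        ],
    }

# The resulting JSON is what you would paste into the bucket's
# Permissions tab (or pass to put_bucket_policy via the SDK).
print(json.dumps(public_read_policy("example-print-shop-static"), indent=2))
```

Note the policy grants only `s3:GetObject`: anonymous users can download objects, but cannot list, upload, or delete anything.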
However, we are developing for a local 3D printing company, and the benefits of Amazon CloudFront might not outweigh the costs in this instance. So for now, we'll just use Amazon S3. In the future, we have the option of adding Amazon CloudFront with little impact, since Amazon S3 buckets can serve as an Amazon CloudFront origin. When accessing AWS resources from our web application, we need to have the right policies in place. There are two ways we can proceed: we can create and use an AWS access key ID and secret access key associated with an account, or we can associate our Amazon EC2 instance with a role. The latter option is preferred in our situation, and in just about every situation, because that way we don't have to store any access keys. Imagine a situation where our code and configuration manage to get into the wrong hands. If we use access keys, those with the code will be able to run AWS commands under our account. This could be detrimental, especially if we did not properly lock down the policy to which the access keys belong. With Amazon EC2 roles, no access keys are stored, which limits our exposure if the code and configuration leak out. So in phase one of our solution, we created an Amazon EC2 instance that runs a CRON job that checks for emails. This won't go away just because we are implementing a web application. However, we want to keep costs down. We have several solutions we can choose from to meet our requirements. For starters, we could implement a solution using Amazon Elastic Container Service, or ECS. Another option could be to utilize a second EC2 instance. A simpler solution for our situation could be to use just one EC2 instance. This is where we have to consider the tradeoffs based on the constraints that we've been given. Using Amazon ECS separates the container running the CRON job from the container running our web application, and keeps us running on a single instance. 
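What makes an EC2 role work is its trust policy: a statement allowing the EC2 service to assume the role, so the instance receives temporary credentials and nothing ever sits in code or configuration. A minimal sketch, with an illustrative role name in the comments:

```python
import json

# Trust policy that lets EC2 instances assume a role. With this in place,
# the SDK on the instance picks up temporary credentials automatically,
# so no access key ID or secret access key is ever stored in code.
EC2_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(EC2_TRUST_POLICY, indent=2))

# With boto3 (the AWS SDK for Python), creating the role might look like
# the following; the role name here is hypothetical:
#   iam = boto3.client("iam")
#   iam.create_role(
#       RoleName="web-app-instance-role",
#       AssumeRolePolicyDocument=json.dumps(EC2_TRUST_POLICY))
```

Permissions policies (what the role may actually do) are attached separately, which is where the lock-down to only the needed resources happens.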
We may need to upgrade to a larger instance, which is going to increase our costs, but we could move to a reserved instance, and that would lessen the increase. The latter two options that we discussed violate at least one constraint. A second Amazon EC2 instance increases the cost associated with our solution. Running the CRON job and the web application on a single instance violates a security best practice, since the CRON job now has access to more than it needs, and the web application can execute a Lambda function it has no need to execute. For our situation, we would implement Amazon ECS. Our Amazon ECS cluster can run across multiple Availability Zones if we desire. We would run one container for the CRON job, and another container would run our web application. Each container runs under its own role, specifically designed to access only the resources it needs. Both containers are scheduled to run indefinitely. We only need one CRON container, and assuming we design our session handling properly, we could run multiple containers for our web application. After rolling out Amazon ECS, we need to look at how to securely serve up our web application. We could conceivably create a security group to serve up the web application directly from our container and point a DNS entry at it. This breaks down in the event we have a failover to another Amazon EC2 instance in another AZ. It also breaks down if we have more than one container: we would have to manually point the DNS entry to the new containers. We could go through the hassle of scripting changes to Route 53 on startup, but what happens if, later on, growth of the company requires that we run more than one container? The changes made by the startup scripts would conflict with one another, resulting in a single container doing all the work despite both running. There's a much easier way that also conforms to best practices: we will use an Elastic Load Balancer. 
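The "one role per container" idea above maps to ECS task definitions, each carrying its own task role ARN. Here is a rough sketch of the two task definitions; every family name, image, and ARN is invented for illustration:

```python
def task_definition(family, image, task_role_arn):
    """Build a minimal ECS task definition document. The task role ARN
    scopes what the container can do: the CRON task and the web task
    each get a different, narrowly scoped role."""
    return {
        "family": family,
        "taskRoleArn": task_role_arn,
        "containerDefinitions": [
            {"name": family, "image": image, "memory": 256, "essential": True}
        ],
    }

# Hypothetical names/ARNs -- substitute your own account and images.
cron_task = task_definition(
    "print-cron", "example/print-cron:latest",
    "arn:aws:iam::123456789012:role/cron-task-role")
web_task = task_definition(
    "print-web", "example/print-web:latest",
    "arn:aws:iam::123456789012:role/web-task-role")

# With boto3, each would then be registered with something like:
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**cron_task)
```

Because the roles differ, a leak in the web container can't be used to reach the CRON job's resources, and vice versa, which is the constraint the single-instance option violated.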
Upon startup, our web application container will register itself with the ELB. In Route 53, we will create an entry that points our domain to the load balancer. One of the benefits of using Route 53 is that we can easily set up a failover entry in the event our web application becomes unavailable. To do this, we create an S3 bucket with our domain name and add our static pages. Our static page will include instructions on how the customer can email us their print jobs. This way, customers can get their print jobs executed at all times. Recall our constraints for this application. First, we have to meet high availability. This phase uses Amazon ECS, which schedules the placement of containers to keep our site available. In the event our web application becomes unavailable, Route 53 will fail over to our static website hosted on S3. Second, we keep our application secure with restrictions that limit access to the server, containers, and S3 buckets. Each container runs under its own role, limited to the resources it needs. And third, we keep costs down by making the most out of our EC2 instance by using Amazon ECS. Also, we decided against Amazon CloudFront since we are not yet growing outside of our local area, while having a design that allows us to switch to Amazon CloudFront in the future with limited impact. With that, we have finished phase two. So next, we'll look at implementing a mobile solution to round out our product.
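The failover pair described above is just two alias records that share a name but carry different failover roles: a PRIMARY pointing at the load balancer and a SECONDARY pointing at the S3 website endpoint. A sketch of those record documents, with placeholder domain, zone IDs, and DNS names:

```python
def failover_record(name, role, set_id, alias_zone_id, alias_dns):
    """Build a Route 53 failover alias record. role is "PRIMARY" or
    "SECONDARY"; health is only evaluated on the primary, so Route 53
    knows when to fail over to the static S3 site."""
    return {
        "Name": name,
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,
        "AliasTarget": {
            "HostedZoneId": alias_zone_id,
            "DNSName": alias_dns,
            "EvaluateTargetHealth": role == "PRIMARY",
        },
    }

# All identifiers below are hypothetical examples.
primary = failover_record(
    "www.example-print.com.", "PRIMARY", "web-app",
    "Z00000000ELB0000000", "my-elb-123456.us-east-1.elb.amazonaws.com.")
secondary = failover_record(
    "www.example-print.com.", "SECONDARY", "static-fallback",
    "Z00000000S3W0000000", "s3-website-us-east-1.amazonaws.com.")

# Both records would be submitted together in one ChangeResourceRecordSets
# change batch via the Route 53 API.
```

When the primary's health check fails, Route 53 starts answering queries with the secondary record, sending customers to the static instructions page on S3.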

About the Author

Students: 59,760
Courses: 93
Learning paths: 38

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. His passions outside of work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few startups.