Implementation and Deployment for Solutions Architect Associate on AWS

Implementation

Overview
Difficulty: Intermediate
Duration: 37m
Students: 3158

Description

Course Description

In this course, we apply the design principles, components, and services covered in the previous courses in this learning path to design and build a highly available, scalable application. We then apply our optimization principles to identify ways to increase durability and cost efficiency, ensuring the best possible solution for the end customer.

Course Objective

  • Identify the appropriate techniques and methods using AWS services to implement a highly available, cost-efficient, fault-tolerant cloud solution.

Intended Audience

This course is for anyone preparing for the Solutions Architect–Associate for AWS certification exam. We assume you have some existing knowledge and familiarity with AWS, and are specifically looking to get ready to take the certification exam.

Prerequisites

Basic knowledge of core AWS functionality. If you haven't already completed it, we recommend our Fundamentals of AWS Learning Path. We also assume you have completed all the other courses, labs, and quizzes that precede this course in the Solutions Architect–Associate on AWS learning path.

This Course Includes:

  • 4 Video Lectures
  • Real-world application of concepts covered in the exam

What You'll Learn

Lecture Group: What you'll learn
  Solution Design: How to apply what you've learned about designing solutions to a real-world scenario
  Solution Architecture: Architecting a solution in the real world
  Implementation: Implementing a solution you've designed and architected for the real world
  Optimizing for High Availability: Optimizing your real-world solution for high availability

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Okay, welcome back! Let's get into our implementation. I've created a user called s3_user, with full access to S3, and some access keys, so I can access S3 from my local computer. I also created an IAM role with full permissions for S3 and CloudWatch.

So let's start by creating the database. I will create a MySQL database instance with Multi-AZ to store our data. Now we'll need to configure the information about our database. Let's provision a simple t2.micro instance with five gigabytes of storage. We'll identify this DB and create the credentials for it. We can then select the default Subnet group and assign a Security Group for this database instance. We'll come back to Security Groups when we need to harden our environment; for now, we're going to start with a wide-open policy. The database name will be wordpress, and we will stick with the default settings for the rest of the fields.

Now we are ready to upload our files. I will use the sync command to synchronize the files from this folder to the S3 bucket that we created. Sync checks each file and copies the most recent version from either side of the connection, so it's a quick way to update files between S3 and any local storage. The files have now been copied to the S3 bucket.

Next we can configure our Security Groups to manage access between our resources. First, we need a Security Group to be associated with the web servers. Then we need a Security Group to be associated with the Elastic Load Balancer, with a rule for the Elastic Load Balancer to accept HTTP requests from anywhere; HTTP requests will be on port 80. So let's create one rule allowing access from the Elastic Load Balancer to the web servers, and another rule allowing access from the web servers to the database instances.

Now let's create some instances. We'll use Amazon Linux on t2.micro instances. This is where we can specify the Subnet into which we want to launch the instances, and the role we will use to limit what the instance can do in our VPC. We can also bootstrap the machine by adding some user data. User data is a script that runs once when the instance launches. It's really useful for bootstrapping instances, and we can add this script to our Auto Scaling launch configuration, so every new machine is provisioned with the resources that we want. So let's just pause and walk through what we're doing here. This script will download the WordPress app bundle from S3 and deploy it onto our new instance. The script will also configure the instance to send monitoring data to CloudWatch, so we can report on some custom EC2 metrics. Okay, so let's launch our instance. We'll name this instance wp-web1 and select the Security Group that we previously created. AWS will present a warning, as it should, because we have not allowed SSH access to the instance. Let's launch this instance and two more like it.

Now we can create our Elastic Load Balancer. We'll name it wp-elb and configure it to run on the three Subnets available through our default VPC. Let's select the Security Group that we created especially for our Load Balancer, and configure health checks to check the installation path of WordPress. Let's select the three web instances; their status will change to In Service when that's complete. Okay, we are close to having our proposed design ready to test. Let's see what it looks like when we test out the Load Balancer URL. Great, it works!
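The video performs the database and file-upload steps in the console and with the sync command; for reference, an equivalent AWS CLI sketch might look like the following. The instance identifier, credentials, and bucket name are placeholders, not values from the video.

    # Create a Multi-AZ MySQL instance (identifier and credentials are placeholders)
    aws rds create-db-instance \
        --db-instance-identifier wp-db \
        --db-instance-class db.t2.micro \
        --engine mysql \
        --allocated-storage 5 \
        --multi-az \
        --db-name wordpress \
        --master-username wpadmin \
        --master-user-password 'ChangeMe123!'

    # Sync the local WordPress bundle up to S3 (bucket name is a placeholder)
    aws s3 sync ./wordpress s3://my-wordpress-bucket/wordpress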
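The three security group rules described above could be expressed with the AWS CLI along these lines; the group IDs are placeholders standing in for the load balancer, web server, and database groups.

    # ELB group: accept HTTP from anywhere on port 80
    aws ec2 authorize-security-group-ingress --group-id sg-elb111111 \
        --protocol tcp --port 80 --cidr 0.0.0.0/0

    # Web server group: accept HTTP only from the ELB group
    aws ec2 authorize-security-group-ingress --group-id sg-web222222 \
        --protocol tcp --port 80 --source-group sg-elb111111

    # Database group: accept MySQL only from the web server group
    aws ec2 authorize-security-group-ingress --group-id sg-db333333 \
        --protocol tcp --port 3306 --source-group sg-web222222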
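The user data script itself isn't shown in detail in the video. A bootstrap along these lines would cover the steps described, assuming an Amazon Linux instance with the S3 and CloudWatch role attached; the bucket name is a placeholder and the CloudWatch reporting step is only sketched as a comment.

    #!/bin/bash
    # Runs once at instance launch; relies on the instance role for S3 and CloudWatch access.
    yum update -y
    yum install -y httpd php php-mysql
    # Pull the WordPress bundle from the S3 bucket we synced earlier (placeholder bucket name)
    aws s3 sync s3://my-wordpress-bucket/wordpress /var/www/html
    chown -R apache:apache /var/www/html
    service httpd start
    chkconfig httpd on
    # Custom EC2 metrics (for example memory or disk usage) would be pushed to CloudWatch
    # here, e.g. with Amazon's monitoring scripts or `aws cloudwatch put-metric-data`.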
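Creating the load balancer, its health check, and the instance registrations from the CLI would look roughly like this; the subnet, security group, and instance IDs, and the health-check path, are placeholders.

    # Classic ELB listening on port 80 across the three default subnets (IDs are placeholders)
    aws elb create-load-balancer --load-balancer-name wp-elb \
        --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
        --subnets subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333 \
        --security-groups sg-elb111111

    # Health-check the WordPress installation path on each instance
    aws elb configure-health-check --load-balancer-name wp-elb \
        --health-check Target=HTTP:80/wp-login.php,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

    # Register the three web instances (IDs are placeholders)
    aws elb register-instances-with-load-balancer --load-balancer-name wp-elb \
        --instances i-0aaa1111 i-0bbb2222 i-0ccc3333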
So at this point we have our WordPress application running on three instances in three separate availability zones, behind our Elastic Load Balancer. The Elastic Load Balancer health-checks each instance on the port we specified, and then round-robins traffic to the instances that return an OK status on the health check. Our MySQL database is running Multi-AZ, with its master in one availability zone and a standby ready to fail over in another availability zone.

We now want to configure CloudFront as an additional layer of durability and security. Let's start by creating a new web distribution. This time, we'll use the Load Balancer as our origin. I will make a fairly generic configuration, allowing all HTTP methods and using all edge locations as our price class, which suits us as our customer has a global audience. It will take some time to deploy, which can happen in the background. We need to add two cache behaviors, controlling the login and the admin pages. Both of them will use the same settings; we only need to specify the right path pattern. Static and dynamic content will now be delivered by our CloudFront distribution, meaning we have another layer of durability and security in our application design. At present, CloudFront is forwarding most of the content from the Load Balancer, so we can change that by updating our site URL to the CloudFront distribution. That will be all we need to do on the WordPress side.

So, at this point, we have a CloudFront distribution handling our traffic requests from global locations, and we've got our WordPress servers sitting behind an Elastic Load Balancer. Now let's go into some optimization and see what we can add to improve the durability of this solution, in the next video.
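The video configures the distribution in the console; a minimal AWS CLI sketch is below for reference. The load balancer domain name is a placeholder, and the extra cache behaviors would normally be supplied in a full JSON distribution config rather than on the command line.

    # Create a web distribution with the ELB as the origin (domain name is a placeholder)
    aws cloudfront create-distribution \
        --origin-domain-name wp-elb-1234567890.us-east-1.elb.amazonaws.com

    # The two extra behaviors added in the console map to path patterns such as
    # wp-login.php and wp-admin/*, each forwarding all HTTP methods, cookies, and
    # query strings to the origin; in the CLI these go in a JSON config passed with
    # --distribution-config instead of the shortcut above.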

About the Author

Students: 50775
Courses: 76
Learning paths: 28

Andrew is an AWS certified professional who is passionate about helping others learn how to use and benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything at AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.