Industry standards
Difficulty
Advanced
Duration
1h 5m
Students
2442
Ratings
5/5
Description

Please note: this course has now been removed from our content library. If you want to study for the SysOps Administrator Associate level certification, we recommend you take our dedicated learning path for that certification.

 

The AWS Certified SysOps Administrator (Associate) certification requires candidates to be comfortable deploying and managing full production operations on AWS. The certification demands familiarity with the whole range of Amazon cloud services, and the ability to choose the most appropriate and cost-effective combination of them for a given project.

In this exclusive Cloud Academy course, IT Solutions Specialist Eric Magalhães will guide you through an imaginary but realistic scenario that closely reflects many real-world pressures and demands. You'll learn to leverage Amazon's elasticity to effectively and reliably respond to a quickly changing business environment.

The SysOps certification is built on solid competence in virtually all AWS services. Therefore, before attempting the SysOps exam, you should make sure that, besides this course, you have also worked through the material covered by our three AWS Solutions Architect Associate level courses.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Hello and welcome to our eighth lecture. Today we will bring our environment up to industry standards, which is a must for any portal that Cloud Motors hosts. For that, we need to plan a definitive deployment, which involves high availability among other things. To stay compliant, we need to ensure that our portal will be available at all times, even during an update. For that, we will use two availability zones and change our RDS instance to a Multi-AZ configuration.

We also need to log every access request: every access should be logged and saved. The main point here is security. During our first setup we didn't worry about it, because the portal was only a joke. Now we need to close all the gaps that we have in our environment. This is how our infrastructure will look: we will create two new subnets to separate RDS from the web servers, and also enable the Multi-AZ feature. We need to ensure that nobody can access the instances directly, which means allowing only connections coming from the ELB to the instances, and also closing off SSH access. Our users are going to access the portal through a Route 53 hosted zone, which gives them a friendly name. First, we need to configure our VPC so it looks like the one we planned. For that, I will create two new subnets for RDS in two different AZs.
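If you prefer scripting the console steps, here is a minimal boto3 sketch of creating the two RDS subnets; the region, VPC ID, CIDR blocks, and availability zone names are all placeholder assumptions, not values from the lecture.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
VPC_ID = "vpc-0123456789abcdef0"                    # hypothetical VPC ID

# One private subnet per availability zone, so the Multi-AZ
# standby can live in a second AZ.
for cidr, az in [("10.0.10.0/24", "us-east-1a"),    # placeholder CIDRs/AZs
                 ("10.0.11.0/24", "us-east-1b")]:
    subnet = ec2.create_subnet(VpcId=VPC_ID, CidrBlock=cidr, AvailabilityZone=az)
    print(subnet["Subnet"]["SubnetId"])
```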

So far, we have only the default routing table, so we'll need to configure a new routing table for the new private subnets as well. That part, though, you can do by yourself.
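As a starting point, a sketch of that route table step under the same placeholder assumptions:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
VPC_ID = "vpc-0123456789abcdef0"                    # hypothetical VPC ID
RDS_SUBNETS = ["subnet-0123456789abcdef1",          # hypothetical subnet IDs
               "subnet-0123456789abcdef2"]

# A dedicated route table with no internet gateway route keeps the subnets private.
rt_id = ec2.create_route_table(VpcId=VPC_ID)["RouteTable"]["RouteTableId"]
for subnet_id in RDS_SUBNETS:
    ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```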

What I want to show you is how to create a Route 53 hosted zone inside your VPC, because I don't have a public DNS to play with. First, we need to create a DHCP option set with the information about the zone that we're going to use. Paste this information into the DNS servers field, or the DNS address provided by AWS in the hosted zone if you created the zone before configuring the VPC. Be sure that you have DNS hostnames and DNS resolution enabled on your VPC, and change the DHCP option set. Now, on Route 53, we need to configure the hosted zone. You can create the zone before or after configuring the DHCP options; I always prefer to configure the VPC first. Here, we will create a hosted zone, name it, and change the type of the hosted zone to private zone for Amazon VPC. As you can see, this DNS will only be available inside the VPC. If you have a DNS domain to use, you could instead create a subdomain on your parent domain: create a public zone here and forward all requests for that subdomain to the AWS name servers that you see here. Just to demonstrate this, I will create a test entry pointing to 127.0.0.1. Later, I will show you why this entry works.
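Scripted, the DHCP options and private hosted zone steps might look roughly like this; the zone name cloudmotors.internal is made up for illustration, as are the region and IDs.

```python
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")      # assumed region
route53 = boto3.client("route53")
VPC_ID = "vpc-0123456789abcdef0"                        # hypothetical VPC ID
ZONE = "cloudmotors.internal"                           # made-up private zone name

# DNS resolution and DNS hostnames must be enabled on the VPC
# (each attribute has to be set in its own call).
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=VPC_ID, EnableDnsHostnames={"Value": True})

# DHCP option set naming the zone and using the Amazon-provided DNS.
dhcp_id = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name", "Values": [ZONE]},
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
    ]
)["DhcpOptions"]["DhcpOptionsId"]
ec2.associate_dhcp_options(DhcpOptionsId=dhcp_id, VpcId=VPC_ID)

# Private hosted zone attached to the VPC.
zone_id = route53.create_hosted_zone(
    Name=ZONE,
    VPC={"VPCRegion": "us-east-1", "VPCId": VPC_ID},
    CallerReference=str(time.time()),                   # must be unique per request
    HostedZoneConfig={"Comment": "internal zone", "PrivateZone": True},
)["HostedZone"]["Id"]

# Test entry pointing to 127.0.0.1, as in the lecture.
route53.change_resource_record_sets(
    HostedZoneId=zone_id,
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": f"test.{ZONE}",
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "127.0.0.1"}],
        },
    }]},
)
```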

For now, imagine that we have set up a public domain for our app. On RDS, we need to change our database to a Multi-AZ configuration: we want to enable Multi-AZ and also change the subnets.

So in subnet groups, we need to specify only the subnets that we reserved for the RDS instances. Now we need to change the RDS instance configuration: select the instance and, in instance actions, select modify. Set Multi-AZ deployment to yes, and I will also change the instance identifier. Don't do that if you don't know how to change the database configuration on Rails, because it will change the endpoint of the database.
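The subnet group step as a boto3 sketch, again with hypothetical names and IDs:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

# Subnet group restricted to the two private RDS subnets.
rds.create_db_subnet_group(
    DBSubnetGroupName="cloudmotors-rds",            # hypothetical name
    DBSubnetGroupDescription="Private subnets for the Cloud Motors RDS instance",
    SubnetIds=["subnet-0123456789abcdef1",          # hypothetical subnet IDs
               "subnet-0123456789abcdef2"],
)
```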

Select apply immediately; otherwise, the changes will only happen during the next maintenance window. Here, we can see that the endpoint will change.
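The modification itself might be scripted like this; both instance identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

# Enable Multi-AZ, move the instance into the new subnet group, and rename it.
# ApplyImmediately avoids waiting for the next maintenance window; the rename
# changes the endpoint, so the app's database configuration must be updated too.
rds.modify_db_instance(
    DBInstanceIdentifier="cloudmotors-db",          # hypothetical current name
    NewDBInstanceIdentifier="cloudmotors-db-prod",  # hypothetical new name
    DBSubnetGroupName="cloudmotors-rds",
    MultiAZ=True,
    ApplyImmediately=True,
)
```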

Confirm the modifications, and we need to wait for RDS to process the changes; I will pause the recording until it's done to show you the results. Now our instance is available again, and we can see that the endpoint has changed. I also changed the database configuration on the instances; otherwise, we would have had some problems later. Let me create a new order just to show you that it's working.
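Rather than watching the console, a script can block on a waiter and then read back the new endpoint; this assumes the hypothetical identifier from the previous sketch and that the rename has already taken effect:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

# Block until the modified instance is available, then print its new endpoint.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="cloudmotors-db-prod")
db = rds.describe_db_instances(DBInstanceIdentifier="cloudmotors-db-prod")
print(db["DBInstances"][0]["Endpoint"]["Address"])
```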

Like I said, it's working. The subnets were also changed to the private subnets that we created at the beginning of this lecture. I will also terminate the two t2.micro instances that we created without auto scaling, and leave only the instances with auto scaling running. Now that we have terminated the instances, let me show you the hosted zone that we created inside the working VPC. For that, I need to connect to one of the instances and test the entry that we created on Route 53. Let me show that.
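Terminating the hand-made instances from a script is a single call; the instance IDs below are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Remove the two manually created instances, leaving only the auto-scaled ones.
ec2.terminate_instances(
    InstanceIds=["i-0123456789abcdef1", "i-0123456789abcdef2"]  # hypothetical IDs
)
```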

This is one of the instances hosting the Cloud Motors portal. I will use a simple ping command to show you that Route 53 is working inside the VPC. This can be very useful when you want to define custom DNS entries for your application and manage them on Route 53.
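The same check from code, assuming the made-up test record from the earlier sketch; seeing 127.0.0.1 confirms that the private zone resolves inside the VPC:

```python
import socket

# Resolves only from inside the VPC, because the hosted zone is private.
print(socket.gethostbyname("test.cloudmotors.internal"))  # expect 127.0.0.1
```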

 


About the Author
Students
29301
Labs
7
Courses
1

Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless; thus his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company developing pieces of automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.