
RDS: Advanced Functions

Overview
Difficulty
Advanced
Duration
1h 10m
Students
1617
Description

AWS Solutions Architect Associate Level Certification Course - Part 3 of 3

Having completed parts one and two of our AWS certification series, you should now be familiar with basic AWS services and some of the workings of AWS networking. This final course in our three-part certification exam preparation series focuses on data management and application and services deployment.

Who should take this course?

This is an advanced course that's aimed at people who already have some experience with AWS and a familiarity with the general principles of architecting cloud solutions.

Where will you go from here?

The self-testing quizzes of the AWS Solutions Architect Associate Level prep material are a great follow-up to this series, and a pretty good indicator of your readiness to take the AWS exam. Also, since you're studying for the AWS certification, check out our AWS Certifications Study Guide on our blog.

Transcript

Let's launch a highly available RDS MySQL instance in the VPC. By highly available, I mean a database service that can survive network or hardware disruptions. We're going to use my account's default VPC and VPC subnets, but we'll use them in a way that will spread our data across multiple availability zones, known as Multi-AZ.

Here's a representation of a possible VPC configuration. We needn't concern ourselves too much with specific IP addresses or even route tables and internet gateways. RDS will take care of all that for us. We do, however, need to be conscious of the larger structure of our network. So for instance, within the VPC, there are two availability zones enclosed by dotted line rectangles, each one using a different subnet. Each of our two database instances is hosted in a different availability zone so that even if one should ever go down, the second will still be available to us. Notice also the security group attached to each MySQL instance is connected only to the security groups of our web application servers and not to the routing table that heads out to the internet. This is because, as is true in most cases, we don't want there to be any inbound internet traffic reaching our databases, in order to maintain a higher level of security. Let's get started. The first thing we'll need to do after confirming that we're in the correct AWS region, is to create a DB subnet group.

From the RDS dashboard, click on subnet groups. Then on create subnet group. We'll give our group a name, say MySQL group, and a description. We'll select a VPC, there's only one choice in our case, and then add two subnets, one in each of two availability zones. First, we'll select US East-1A, and click on the drop-down subnet menu. There's only one subnet available in this zone, so we'll use it. We could always add others should we need to host application servers, for instance.

Clicking add will assign this subnet to our group. Now, let's do the same thing with the availability zone US-East-1B and its subnet, clicking add again. If we needed more failovers for our database servers, we could simply click "add all the subnets" to automatically associate all four subnets that happen to exist in our VPC. Now we're ready to create our DB instance. Click on instances and then launch DB instance.
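The console steps above can also be sketched with the AWS CLI. The group name, description, and subnet IDs below are placeholders rather than values from the video; substitute the subnets from your own VPC:

```shell
# Hypothetical sketch: create a DB subnet group that spans two
# availability zones, which RDS requires for Multi-AZ deployments.
# The subnet IDs are placeholders -- use the ones from your VPC.
aws rds create-db-subnet-group \
    --db-subnet-group-name mysql-group \
    --db-subnet-group-description "Subnets for the Multi-AZ MySQL instance" \
    --subnet-ids subnet-aaaa1111 subnet-bbbb2222
```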

We'll select MySQL, but not before wondering where the new AWS Aurora MySQL compatible DB is hiding. Apparently, Aurora is still only available for preview on request, so we'll have to wait a bit longer. We'll say yes to be given the option of multi-AZ deployment and provisioned IOPS storage, and click next step. We're given the option of choosing a release version for our MySQL DB.

Normally, of course, we'd go for the latest release available. But there might be times when we'll want to import existing data that could crash using later releases, so we appreciate the choice.

We'll go with the latest release for now. We'll select a db.t2.micro as our instance, but if we were providing data services for heavy workloads, we might change that to a more robust configuration. In any case, we can always upgrade later if the need arises. We'll say yes to multi AZ and leave the storage options as default.

Obviously, five gigabytes might be pitifully small for most serious deployments. Naturally, you should research how much space you're going to need before starting a project, but one of the nice things about AWS is that you can apply real-world experience on the fly, and scale up or down according to actual conditions. We'll create a DB identifier, which must be unique for this account.

MySQL will do for now. I'll choose admin as a master username and enter a password. Since the password must be at least eight characters, I know the temptation exists to use the easiest eight-letter word we know: password. We should, of course, resist that temptation and use a password that does not appear in a dictionary, contains mixed-case letters, numbers, and non-alphanumeric characters, and isn't a password that you use elsewhere.
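If you'd rather not invent a password by hand, one quick way to generate a reasonably strong one from a terminal (assuming OpenSSL is installed) is:

```shell
# Generate 12 random bytes and base64-encode them into a 16-character
# password containing mixed-case letters, digits, and the symbols + and /.
openssl rand -base64 12
```

Twelve bytes encode to exactly 16 base64 characters with no padding, comfortably above the eight-character minimum.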

Now we'll assign our DB to a VPC. I have only one VPC on this account, so the choice won't be difficult. But normally, you should be very careful to make sure that you pick the right VPC. We'll select the MySQL subnet group and leave our instance as not publicly available, since for security reasons, we don't want external traffic finding its way in. Now we should give some thought to our security group. Obviously our DB instance exists to serve data to a web app or EC2 server. So if we don't provide some kind of access, we aren't going to accomplish that much. So let's create a new security group especially for this DB instance.

From the VPC security groups page, click create security group and give the group a name and a description. Then select the VPC we're working with. Click on inbound rules in the details pane below it, then click edit. We'll select all traffic as the type, and then click once inside the source box. We're presented with a choice of all existing security groups. We'll choose the group that's associated with the server instances that will need our data. Click save. We can now return to our RDS configuration, and our MySQL security group should appear among our choices.
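The same security group can be sketched with the AWS CLI. Every ID here is a placeholder, and where the video selects "all traffic," this sketch narrows the rule to MySQL's port 3306, which is the tighter and more common choice:

```shell
# Hypothetical sketch: a security group that admits MySQL traffic only
# from the web servers' security group, never from the open internet.
# All IDs are placeholders -- substitute your own VPC and group IDs.
aws ec2 create-security-group \
    --group-name mysql-db-sg \
    --description "MySQL access from app servers only" \
    --vpc-id vpc-12345678

# Allow inbound MySQL (TCP 3306), naming the app servers' security
# group as the source instead of a CIDR block.
aws ec2 authorize-security-group-ingress \
    --group-id sg-db111111 \
    --protocol tcp --port 3306 \
    --source-group sg-app22222
```

Using a security group as the source means any instance later launched into the app servers' group automatically gains database access, with no rule changes needed.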

We'll now give the database a name, and leave the port and parameter and option groups as default. Changing the port, by the way, can be a fairly powerful security choice, since hackers will normally anticipate MySQL traffic on Port 3306. The problem is that, unless you're very careful, your MySQL clients will also expect that port, and complain loudly if they don't see it there. You can set the timing for automatic data backups. The more sensitive and valuable your data is, the more often you'll probably want it backed up. You can also select a time window during which backups will take place. This can be useful if you have certain hours each day, in the middle of the night perhaps, when your traffic is traditionally light, so data operations like backups will be less disruptive.
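Pulling the whole walkthrough together, the launch could be sketched as a single AWS CLI call. Every identifier, the password, and the backup window below are placeholder assumptions, not values from the video:

```shell
# Hypothetical sketch of the same Multi-AZ MySQL launch via the CLI.
aws rds create-db-instance \
    --db-instance-identifier mysql \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --allocated-storage 5 \
    --multi-az \
    --master-username admin \
    --master-user-password 'REPLACE_ME' \
    --db-subnet-group-name mysql-group \
    --vpc-security-group-ids sg-db111111 \
    --no-publicly-accessible \
    --port 3306 \
    --backup-retention-period 7 \
    --preferred-backup-window 03:00-04:00
```

The `--preferred-backup-window` time is given in UTC, so a quiet overnight window in your region needs converting before you set it.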

The same idea could apply to minor version updates. Finally, click launch DB, and we're done. After a few minutes, once the instance setup is complete, our all-important endpoint address will be displayed in the RDS dashboard.
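If you'd rather not watch the dashboard, the endpoint can also be fetched from the command line once the instance is available (the identifier below matches the placeholder used earlier, not a real instance):

```shell
# Hypothetical sketch: print just the instance's endpoint address.
aws rds describe-db-instances \
    --db-instance-identifier mysql \
    --query 'DBInstances[0].Endpoint.Address' \
    --output text
```

That address, plus the port, is what your application servers use in their MySQL connection strings.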

About the Author
David Clinton
Linux SysAdmin
Students
12538
Courses
12
Learning Paths
4

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.