CloudAcademy
LPIC-1 102 Linux certification - Linux shells, scripting, and databases (1 of 6)

Databases: the AWS way


Overview

Difficulty: Intermediate
Duration: 37m
Students: 819

Description

This sixth of the eleven courses in our Linux certification series introduces material from the LPIC's second (and final) exam required for your LPIC-1 Server Professional certification. This course covers four rather distinct topics, though properly understanding each is critically important for managing Linux systems:

  • Managing shell session environments
  • Working with Linux scripts
  • Working with Linux databases (MySQL)
  • Migrating a MySQL database to Amazon's Relational Database Service

Explore the entire Linux certification series.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Migrating a MySQL database to the AWS Relational Database Service (RDS)

In this video, we're going to spend some time exploring MySQL the AWS way. We're going to learn how to launch and then log in to an Amazon RDS-hosted MySQL instance, and then lift and drop a locally created database into it.

Hosting your database operations on AWS's Relational Database Service has some enormous advantages. The hardware is fully managed by AWS, so you don't have to worry about server failure, software updates, data replication, backups, or, to a large degree, even security. Once you've set things up, there's really nothing for you to do besides focus on your data.

Part of the stability of AWS databases comes from the way the hardware infrastructure is structured for you. If you choose to create a Multi-AZ deployment, your data is replicated to a standby instance in a second Availability Zone, so that even if something should cause one zone to fail, RDS can fail over to the standby and continue to provide full service.

To make this work, we'll make use of a DB subnet group that we'll specify during configuration. Let's create our DB instance. From the RDS menu, check to make sure we're in the right AWS region, then click on "Launch DB Instance." We'll select "MySQL." We'll say "Yes" to be given the option of Multi-AZ deployment and provisioned IOPS storage, and click "Next Step." We'll choose the most recent MySQL release version, select db.t2.micro as our instance class, say "Yes" to Multi-AZ, and leave the storage options at their defaults. We can change any of these hardware configurations later - something that would be unspeakably complicated in a local data center. We'll use "New DB" as the DB identifier, "Admin" as the master username, and a password that's at least eight characters long.
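For reference, the same launch could be sketched with the AWS CLI rather than the console. This is only a sketch under assumptions: the instance identifier, password placeholder, and storage size below are not values from the video.

```shell
# Rough CLI equivalent of the console steps above (all identifiers hypothetical).
aws rds create-db-instance \
    --db-instance-identifier newdb \
    --engine mysql \
    --db-instance-class db.t2.micro \
    --master-username admin \
    --master-user-password 'REPLACE_WITH_8PLUS_CHARS' \
    --allocated-storage 20 \
    --multi-az
```

As in the console, any of these hardware settings can be changed later with a corresponding modify command.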

Now we'll place our new DB inside our default VPC, select the "MySQL group" DB subnet group that we have available in our particular VPC, and leave the instance set as not publicly accessible. For our security group, which will control all traffic into and out of our RDS instance, we'll select an existing group and come back to it a bit later to make sure it's set up the way we want. We won't create a database right now, and we'll leave the rest of the settings as they are. Note that MySQL will use port 3306 for communications.

Now we'll click on "Launch DB Instance" and head back to the RDS dashboard to see what's happening. Since AWS is still creating our instance, it doesn't yet display an endpoint. While we're waiting, we'll click on the "Configuration Details" tab and then on our security group. We'll click on the "Inbound" tab and then on the "Edit" button. Here we'll add a new rule that allows MySQL traffic into the instance from our IP address. Note how AWS automatically filled in port 3306. You should remove all other rules because, in our case at least, there's no other traffic that should be allowed in. We'll save the rule.
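That inbound rule can also be sketched from the CLI. The security-group ID and source address below are hypothetical placeholders, not values from the video:

```shell
# Allow MySQL (TCP 3306) in from a single source address only;
# the group ID and CIDR are made-up placeholders - substitute your own.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 3306 \
    --cidr 203.0.113.10/32
```

The /32 suffix restricts the rule to exactly one IP address, matching the "my IP" choice in the console.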

Once our DB instance is up and running, we're shown the endpoint. It's important to ignore the :3306 at the end; for what we're doing, it will just get in the way. Now let's start a shell on our local machine and connect directly to our RDS instance. Let's create a new database. That's enough for now; we'll exit the session.
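To illustrate why the :3306 suffix gets in the way: the mysql client takes the port as a separate argument, so we connect with just the host part of the endpoint. The endpoint value here is a made-up example, not the one from the video:

```shell
# Hypothetical endpoint exactly as the console displays it, port suffix included:
ENDPOINT="newdb.abc123.us-east-1.rds.amazonaws.com:3306"

# Keep only the host part by stripping the ":3306" suffix:
HOST="${ENDPOINT%:3306}"
echo "$HOST"    # newdb.abc123.us-east-1.rds.amazonaws.com

# Then connect and create the database (this will prompt for the master password):
# mysql -h "$HOST" -u admin -p -e 'CREATE DATABASE communications;'
```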

Now, back on our local machine, we'll create a full backup of the communications database we were working on in the last video using the mysqldump program, and redirect it to the file communications.sql. It really amazes me just how simple this next step is. We're going to upload the dump of our local database into the communications database on RDS using one simple command: "-h" indicates that the next value is the endpoint address of the database host, "-u" is followed by the master username we chose, and "-p" will prompt for a password. "communications" specifies the remote database we want to work with, and the < redirection character will apply the .sql file to that database. That's it.
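The two commands described above would look something like this. The endpoint is the same hypothetical placeholder as before, and the dump is assumed to run as the local root MySQL user:

```shell
# Dump the local communications database to a file (run on the local machine):
mysqldump -u root -p communications > communications.sql

# Upload the dump into the matching database on the RDS instance:
# -h = host endpoint, -u = master username, -p = prompt for a password,
# and < feeds the .sql file into the named database.
mysql -h newdb.abc123.us-east-1.rds.amazonaws.com \
      -u admin -p communications < communications.sql
```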

Now, to make sure that everything actually worked, we'll log in to our RDS instance again just as we did the first time. We'll load the communications database that we created before, and we'll use SHOW TABLES to display the database tables - which are exactly the ones we created in our local deployment.
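That verification can also be sketched as a single non-interactive command, using the same hypothetical endpoint as above:

```shell
# List the tables in the remote communications database; they should match
# the tables from the local deployment (prompts for the master password).
mysql -h newdb.abc123.us-east-1.rds.amazonaws.com -u admin -p \
      -e 'SHOW TABLES;' communications
```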

About the Author

Students15425
Courses17
Learning paths2

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.

Covered Topics