If you're new to AWS, it can be a little daunting to determine which database service is the right option for your solution. This is the first course in a two-part series on database fundamentals for AWS, which will help you make the right decision when choosing an AWS database service.
This course covers Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, and Amazon Neptune. As well as getting a theoretical understanding of these, you will also watch guided demonstrations from the AWS platform showing you how to use each database service.
If you have any feedback relating to this course, feel free to share your thoughts with us at firstname.lastname@example.org. The second course in this two-part series covers Amazon Redshift, Amazon Quantum Ledger Database, Amazon DocumentDB, and Amazon Keyspaces. You can find that course here.
- Obtain a solid understanding of the following Amazon database services: Amazon RDS, DynamoDB, ElastiCache, and Neptune
- Create an Amazon RDS database
- Create a DynamoDB database
- Create an ElastiCache cluster
- Create an Amazon Neptune database
- Individuals responsible for designing, operating, and optimizing AWS database solutions
- Anyone preparing to take the AWS Certified Database Specialty exam
To get the most out of this course, you should have a basic understanding of database architectures and the AWS global infrastructure. For more information on this, please see our existing blog post here. You should also have a general understanding of the principles behind the different EC2 instance families.
Hello, and welcome to this lecture, which will be a demonstration of how to create an RDS database. So let's get straight into it. As you can see, I'm at the AWS management console, and the first thing I need to do is go to RDS. You can find RDS under the Database category, where it's the first database option. If we select the RDS service, we're taken to the dashboard for Amazon RDS. As you can see, I don't have any database instances running, so let's go ahead and create our first database.
So under the Create database section here, we can either restore an existing database from Amazon S3 or from a backup, or we can create a new database. For this demonstration, I'm going to create a new database, so let's click that option. Now, firstly, we need to choose a database creation method: either Standard create or Easy create. Standard create gives us access to more configuration options, so for this demonstration, that's the option I'm going to select.
Then we have our database engine types. As I explained previously, we have Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. I'm just going to select MySQL for this demonstration. Scrolling down, we can then select whichever version of MySQL we'd like; I'll just leave it as the default option. And then we have something called Templates.
Now, the template we select here will predefine a number of other configurable components. The one that's highlighted is Production, which uses defaults for high availability and fast, consistent performance. The Dev/Test template is intended for development use outside of a production environment. And the Free Tier is simply there to let you get hands-on experience with RDS, so it doesn't use many of the features. But I want to show you the full feature set, so I'm going to select Production.
Now, if we go down to Settings, we can enter a database instance identifier. This is the name of the database instance, not the name of an actual database or table. I'm happy to call it database-1, so I'll just leave that as the default. Next we have our credentials: a master username to connect to the database instance, which we can leave as admin. We can either auto-generate a password or select our own, so I'll just go ahead and enter my own.
Next, we have the option to select the database instance size. We have our Standard classes, Memory Optimized classes, and Burstable classes. I'm going to select the Standard class, and using this dropdown, we can select the size of the instance that we want. As you can see, each size comes with a different number of vCPUs and amount of RAM. I'm just going to select the smallest instance size.
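For reference, the settings chosen so far map directly onto parameters of the RDS CreateDBInstance API. Here's a minimal sketch of those parameters as you'd pass them to boto3 or the AWS CLI; the identifier and username match the demo, while the instance class and password are illustrative placeholders:

```python
# Identity, credentials, and sizing choices from the console, expressed as
# CreateDBInstance parameters (boto3 and the AWS CLI use these same names).
instance_settings = {
    "DBInstanceIdentifier": "database-1",  # the instance name, not a table name
    "Engine": "mysql",
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_WITH_A_STRONG_PASSWORD",  # placeholder only
    "DBInstanceClass": "db.m5.large",  # a Standard class; choose per workload
}
```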
Now, if we go down to Storage, we can select our storage type: General Purpose or Provisioned IOPS. If we look at Provisioned IOPS, we can define the allocated storage and also the number of IOPS, which is the input/output operations per second. For this demonstration, I'm just going to leave it as General Purpose and accept the storage default of 20 GiB. Here we have Storage autoscaling, which is just a simple tick box to enable or not. RDS will automatically scale up to whatever value we put in here, for example, 100 GiB. That gives us the flexibility of starting at 20 GiB and scaling all the way up to 100 automatically.
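The same storage choices can be expressed through CreateDBInstance parameters. This sketch shows both the General Purpose configuration used in the demo (autoscaling from 20 GiB up to 100 GiB) and a Provisioned IOPS alternative; the io1 figures are illustrative:

```python
# General Purpose SSD with storage autoscaling, as configured in the demo.
gp_storage = {
    "StorageType": "gp2",
    "AllocatedStorage": 20,      # starting size in GiB
    "MaxAllocatedStorage": 100,  # setting this enables autoscaling up to 100 GiB
}

# Provisioned IOPS alternative: you allocate storage and set IOPS explicitly.
piops_storage = {
    "StorageType": "io1",
    "AllocatedStorage": 100,  # GiB
    "Iops": 3000,             # provisioned input/output operations per second
}
```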
Now, if we go down to Availability and durability, we have the Multi-AZ deployment option. Enabling this will create another standby instance in a different availability zone, providing high availability and data redundancy. For this demonstration, I'm just going to leave it as a single-AZ deployment, so I'll select Do not create a standby instance. If we go down to Connectivity, we can select the VPC that we'd like this RDS instance to reside in. And as you can see here, after a database is created, you can't change the VPC selection. If we expand this option here for additional connectivity configuration, we can see a few additional options.
So we can select a database subnet group, which simply defines which subnets and IP ranges the database instance can use in the VPC. We also have an option here for whether the database should be publicly accessible or not. If you select yes, then it will be issued a public IP address, and devices and instances outside of your VPC will be able to try to connect to your database, if the VPC security groups allow it. For this demonstration, I'm just going to keep it as a private RDS database, so it won't be assigned any kind of public IP address, and only instances inside the VPC will be able to connect. Here, we can choose our security group, which essentially defines which resources can talk to our RDS instance.
Now, if we choose an existing one, we can use this dropdown box here to select which security group we'd like to use. I'm just going to select the default; I haven't set up any security groups for this, as it's just a demonstration, but that's where you would apply your security groups for your RDS instance. And as you can see, it's been added in there. If you'd like to deploy your RDS instance in a particular availability zone, then you can select one; if you have no preference, simply select no preference. And finally, we can choose which port the database will use.
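In API terms, the connectivity choices above look roughly like this; the security group ID and availability zone are hypothetical values, shown only for illustration:

```python
# Connectivity parameters for CreateDBInstance, mirroring the demo's choices.
connectivity = {
    "DBSubnetGroupName": "default",    # which subnets/IP ranges the instance may use
    "PubliclyAccessible": False,       # private: no public IP, VPC-only access
    "VpcSecurityGroupIds": ["sg-0123456789abcdef0"],  # hypothetical security group
    "AvailabilityZone": "us-east-1a",  # omit this key for "no preference"
    "Port": 3306,                      # the default MySQL port
}
```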
If we go down to Database authentication, we have two options here for MySQL. The first is password authentication, which allows anyone with the database password to connect to the instance. If you want it to be more secure, then you can use the database password in addition to verifying that the user has permissions to access the RDS database, through permissions assigned directly to the user or via a group or role. That offers an additional level of security. If we go down further to Additional configuration, we can configure additional options.
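That second option corresponds to the EnableIAMDatabaseAuthentication flag on CreateDBInstance. A minimal sketch follows, with the token-generation call shown commented out since it needs AWS credentials and a real endpoint (the hostname below is hypothetical):

```python
# Password-only authentication is the default; this flag layers IAM
# permission checks on top of the database password.
auth = {"EnableIAMDatabaseAuthentication": True}

# With IAM auth enabled, a client can connect with a short-lived token
# instead of a static password:
#
#   import boto3
#   rds = boto3.client("rds", region_name="us-east-1")
#   token = rds.generate_db_auth_token(
#       DBHostname="database-1.abc123.us-east-1.rds.amazonaws.com",  # hypothetical
#       Port=3306,
#       DBUsername="admin",
#   )
```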
So here we have our database options. You can enter an initial database name that will be created on your database instance; let's just say mydatabase. You can also select a parameter group. A parameter group is essentially a grouping of configurable parameters that operate at the database engine level. You're able to create different parameter groups that contain different settings for the same database engine, depending on your use case and how you'd like those parameters configured. The parameter group itself sits outside of the database, which means the same parameter group can be applied to multiple databases; if you update the values within the parameter group, this will update all the databases that use it. Depending on which database engine you select, you're also able to select an option group. Option groups allow you to configure additional features to help you manage and secure your databases, and like parameter groups, they sit outside of the database itself.
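Parameter groups can also be created and modified through the API. Here's a hedged sketch, with the boto3 calls commented out since they require AWS credentials; the group name, family, and parameter values are hypothetical:

```python
# A single parameter change we might push into a shared parameter group.
# Some MySQL parameters are static and only take effect after a reboot.
parameter_update = {
    "ParameterName": "max_connections",
    "ParameterValue": "200",
    "ApplyMethod": "pending-reboot",
}

# With credentials configured, the equivalent boto3 calls would be:
#
#   import boto3
#   rds = boto3.client("rds")
#   rds.create_db_parameter_group(
#       DBParameterGroupName="my-mysql-params",  # hypothetical name
#       DBParameterGroupFamily="mysql8.0",
#       Description="Shared settings for our MySQL instances",
#   )
#   rds.modify_db_parameter_group(
#       DBParameterGroupName="my-mysql-params",
#       Parameters=[parameter_update],
#   )
```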
Here we have our Backup section, where we can enable automatic backups. If you don't want this, you can simply un-tick it, but it's pretty useful, so I tend to leave it enabled. Then we have our backup retention period, where you can select the number of days, up to 35; I'll just leave that at seven. We also have a backup window: we can select a window by specifying the start time and the duration. So if there's a particular time you'd like your backups to run, you can simply add in the hours and minutes, and how long the window should last. If you don't have a specific window, you can simply select no preference. And if you have any tags on your database, you can copy those to your backup snapshots as well.
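These backup settings map onto three CreateDBInstance parameters; the window shown is an illustrative UTC range:

```python
# Backup options from the console, as CreateDBInstance parameters.
backups = {
    "BackupRetentionPeriod": 7,              # days of automatic backups, 0-35 (0 disables)
    "PreferredBackupWindow": "03:00-04:00",  # hh24:mi-hh24:mi in UTC; omit for no preference
    "CopyTagsToSnapshot": True,              # carry the instance's tags onto its snapshots
}
```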
When it comes to encryption, you can choose whether or not to encrypt your database. The default is to have it encrypted, and then you can select your key here. This is the default AWS managed key for RDS, which is used by KMS, the Key Management Service. Or you can select your own CMK, your own customer master key, if you have one. I'm just going to leave it as the default AWS managed RDS key.
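Encryption comes down to a pair of CreateDBInstance parameters. Omitting KmsKeyId uses the default AWS managed key for RDS; the alias in the comment is a hypothetical example of supplying your own CMK:

```python
# Encryption at rest for the RDS instance.
encryption = {
    "StorageEncrypted": True,
    # Omit KmsKeyId to use the default AWS managed key (aws/rds).
    # To use your own customer master key, supply its ARN or alias, e.g.:
    # "KmsKeyId": "alias/my-rds-key",  # hypothetical alias
}
```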
Down here, we have Performance Insights. Performance Insights allows you to implement a level of performance tuning and monitoring, enabling you to see and review the load on your database and determine whether any actions should be taken. Here we can make some additional monitoring changes: we can enable enhanced monitoring and set its granularity to anywhere from 60 seconds down to one second. I'll leave that at the default of 60 seconds, and I'm just going to let RDS create the default role for that enhanced monitoring.
We can also export our logs to Amazon CloudWatch Logs: the error log, the general log, the slow query log, or any combination of them. So if you want to export any of your logs to CloudWatch Logs for additional monitoring and querying, you can do so.
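The monitoring and log-export settings from the last two sections translate to these parameters; the monitoring role ARN is a hypothetical placeholder (RDS can create the role for you, as in the demo):

```python
# Enhanced monitoring, Performance Insights, and CloudWatch Logs exports.
monitoring = {
    "MonitoringInterval": 60,  # granularity in seconds: 1, 5, 10, 15, 30, or 60 (0 disables)
    "MonitoringRoleArn": "arn:aws:iam::123456789012:role/rds-monitoring-role",  # hypothetical
    "EnablePerformanceInsights": True,
    "EnableCloudwatchLogsExports": ["error", "general", "slowquery"],  # any combination
}
```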
Then we have maintenance. We can enable or disable auto minor version upgrade; enabling it means the instance will automatically upgrade to new minor versions as they're released. These automatic upgrades occur during the maintenance window we've scheduled for the database.
Speaking of the maintenance window, we can select one here. We can pick a particular day and time that's good for us, say Saturday at four o'clock in the morning for two hours; that could be our maintenance window. So if there are any auto minor version upgrades to take place, they'll be scheduled during that window. And finally, we have deletion protection, which simply prevents the database from being deleted accidentally.
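The maintenance settings follow the same pattern; the window uses the ddd:hh24:mi format in UTC, and the Saturday example below mirrors the demo:

```python
# Maintenance and protection options for CreateDBInstance.
maintenance = {
    "AutoMinorVersionUpgrade": True,  # apply new minor engine versions automatically
    "PreferredMaintenanceWindow": "sat:04:00-sat:06:00",  # Saturday 04:00-06:00 UTC
    "DeletionProtection": True,       # prevents accidental deletion of the database
}
```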
Now at the very bottom, we have the estimated monthly costs, showing the cost of the database instance and of the storage. Once you're happy with all your options, simply click Create database. We can now see our database instance with a status of Creating, and a message up here saying that the database might take a few minutes to launch. Okay, we now have a message that says the database has been successfully created. And it's as simple as that.
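Pulling the whole walkthrough together, here's a minimal sketch of the same build as a single CreateDBInstance call via boto3. The kwargs mirror the demo's console choices; the password is a placeholder, and the call itself is commented out since running it requires AWS credentials and creates billable resources:

```python
# All of the demo's console choices as one set of CreateDBInstance kwargs.
create_kwargs = {
    "DBInstanceIdentifier": "database-1",
    "Engine": "mysql",
    "DBInstanceClass": "db.m5.large",  # illustrative class
    "MasterUsername": "admin",
    "MasterUserPassword": "REPLACE_WITH_A_STRONG_PASSWORD",  # placeholder
    "AllocatedStorage": 20,            # GiB
    "MaxAllocatedStorage": 100,        # storage autoscaling ceiling
    "MultiAZ": False,                  # single-AZ, as in the demo
    "PubliclyAccessible": False,
    "BackupRetentionPeriod": 7,
    "StorageEncrypted": True,
    "DeletionProtection": True,
}

# With credentials configured, the actual call would be:
#
#   import boto3
#   rds = boto3.client("rds", region_name="us-east-1")
#   response = rds.create_db_instance(**create_kwargs)
#   print(response["DBInstance"]["DBInstanceStatus"])  # initially "creating"
```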
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ cloud courses, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.