Database Fundamentals for AWS - Part 1 of 2

DEMO: Creating an Amazon Neptune Database

The course is part of these learning paths:

  • DevOps Engineer – Professional Certification Preparation for AWS
  • Solutions Architect – Professional Certification Preparation for AWS
  • Working with AWS Databases
  • Certified Developer – Associate Certification Preparation for AWS
  • Fundamentals of AWS
Overview

Difficulty: Beginner
Duration: 1h 8m
Students: 2,000
Rating: 4.4/5

Description

If you're new to AWS, it can be a little daunting to determine which database service is the right option for your solution. This is the first course in a two-part series on database fundamentals for AWS, which will help you make the right decision when choosing an AWS database service.

This course covers Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, and Amazon Neptune. As well as getting a theoretical understanding of these, you will also watch guided demonstrations from the AWS platform showing you how to use each database service.

If you have any feedback relating to this course, feel free to share your thoughts with us at support@cloudacademy.com. The second course in this two-part series covers Amazon Redshift, Amazon Quantum Ledger Database, Amazon DocumentDB, and Amazon Keyspaces. You can find that course here.

Learning Objectives

  • Obtain a solid understanding of the following Amazon database services: Amazon RDS, DynamoDB, ElastiCache, and Neptune
  • Create an Amazon RDS database
  • Create a DynamoDB database
  • Create an ElastiCache cluster
  • Create an Amazon Neptune database

Intended Audience

  • Individuals responsible for designing, operating, and optimizing AWS database solutions
  • Anyone preparing to take the AWS Certified Database Specialty exam

Prerequisites

To get the most out of this course, you should have a basic understanding of database architectures and the AWS global infrastructure. For more information on this, please see our existing blog post here. You should also have a general understanding of the principles behind different EC2 Instance families.

 

Transcript

Hello, and welcome to this lecture, which is going to be a demonstration on how to create an Amazon Neptune database. So, I'm at the AWS Management Console, and we need to go down to Databases and then select Amazon Neptune. If you don't have any Neptune databases created as yet, then you'll be presented with this screen. Simply launch Amazon Neptune. And now we're down to the configuration screen of our database.

Now, the first option we have is the engine options. So this is essentially the version of Neptune that you can use. I'm just going to select the latest one, which is the default. Then down to Settings, this is the database cluster identifier. So this will be the name of our database cluster. I'm just going to leave it as the default, database-1, for this demonstration. But just be aware that the name must be unique within the region across all the database clusters that you own.

Next, we have templates. So we have either production or development and testing. The production one uses defaults for high availability and fast, consistent performance, whereas the dev and test won't have as many high availability features. So, let's go with the production template.

We can then select our database instance size. So we have a number of different instances here, all offering different quantities of VCPUs and memory, etc. So I'm just going to select the smallest one for this demonstration.
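Behind the console, these first steps map onto two Neptune API calls: one to create the cluster and one to create the database instance inside it. A minimal sketch of the parameters, assuming the values used in this demonstration (the instance identifier is a hypothetical name, and in practice each dict would be unpacked into a `boto3.client("neptune")` call):

```python
# Sketch of the API parameters behind the console steps so far.
# In real use these would be passed to boto3, e.g.:
#   boto3.client("neptune").create_db_cluster(**cluster_params)
# boto3 is deliberately not imported so the sketch runs stand-alone.

cluster_params = {
    "DBClusterIdentifier": "database-1",  # must be unique within the region
    "Engine": "neptune",                  # engine version defaults to latest
}

instance_params = {
    "DBInstanceIdentifier": "database-1-instance-1",  # hypothetical name
    "DBInstanceClass": "db.t3.medium",    # a small class, as in the demo
    "Engine": "neptune",
    "DBClusterIdentifier": cluster_params["DBClusterIdentifier"],
}

print(instance_params["DBInstanceClass"])
```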

Down to Availability & durability, we can either create a multi-AZ deployment of Amazon Neptune, or have a single availability zone deployment. If you want high availability, then you'd create read replicas in different Availability Zones. For this demonstration, I'm just going to select No for multi-AZ.
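In Neptune, a read replica is simply another instance created against the same cluster, placed in a different Availability Zone for high availability. A hedged sketch of how those extra instance parameters might be built (the AZ names and identifier pattern are illustrative, not prescribed by the service):

```python
# A Neptune read replica is an additional instance in the same cluster,
# ideally placed in a different Availability Zone than the primary.
def replica_params(cluster_id: str, index: int, az: str) -> dict:
    """Build create_db_instance-style parameters for one replica (sketch)."""
    return {
        "DBInstanceIdentifier": f"{cluster_id}-replica-{index}",
        "DBInstanceClass": "db.t3.medium",
        "Engine": "neptune",
        "DBClusterIdentifier": cluster_id,
        "AvailabilityZone": az,  # spread replicas across AZs
    }

replicas = [replica_params("database-1", i, az)
            for i, az in enumerate(["eu-west-1a", "eu-west-1c"], start=1)]
print([r["AvailabilityZone"] for r in replicas])
```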

When we come down to connectivity, we can select the VPC that we want to deploy the Neptune database in, so select your VPC. If we expand the additional connectivity configuration, we can then select a subnet group, which defines which subnets within your VPC Amazon Neptune can use. In this VPC, I only have the default subnet group. Then we have our VPC security group, which acts as a virtual firewall defining what traffic can talk to your database and over which ports. So you can select an existing security group, or you can create a new one. I'm just going to select the default security group for this demonstration.
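The connectivity choices above correspond to a handful of cluster parameters: a subnet group name, a list of security group IDs, and the port (Neptune's default is 8182). A small sketch, where the security group ID is a placeholder for your own VPC resource:

```python
# Connectivity settings from the console, as cluster parameters.
# The security group ID below is a placeholder, not a real resource.
connectivity = {
    "DBSubnetGroupName": "default",                    # as chosen in the demo
    "VpcSecurityGroupIds": ["sg-0123456789abcdef0"],   # hypothetical SG ID
    "Port": 8182,                                      # Neptune's default port
}

# Sanity-check the port before handing the dict to an API call.
assert 1 <= connectivity["Port"] <= 65535
print(connectivity["Port"])
```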

As we can see, it's added it in there. You can then decide which zone you'd like to place your database in, if you have a preference, that is, or you can select no preference. And also the port that it will use for application connections. If you'd like to, you can add a tag for your database.

And then finally, if we look at additional configuration, we have a number of database options, so we could provide a name for the actual instance of your database. Again, we have parameter groups that we've discussed in the previous lectures, and we have a parameter group for the actual cluster itself, and also for the individual database.

For authentication, you can enable IAM database authentication as well, which will manage access through users and roles. For security purposes, I recommend enabling that. You can also define a failover priority, which allows you to specify which of your Neptune replicas should be promoted to be the primary instance should your primary instance fail. The priorities range from zero for the highest priority to 15 for the lowest, and you can configure each replica with a different priority. I'll just leave that as the default, no preference.
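The failover priority is a per-instance setting (the API calls it a promotion tier), constrained to the 0-15 range described above. A sketch that validates the range before building the settings, assuming the parameter names `EnableIAMDatabaseAuthentication` and `PromotionTier` from the Neptune API:

```python
# Failover priority ("promotion tier") runs from 0 (highest) to 15 (lowest).
def promotion_tier(priority: int) -> int:
    """Validate a Neptune promotion tier value before use (sketch)."""
    if not 0 <= priority <= 15:
        raise ValueError("promotion tier must be between 0 and 15")
    return priority

auth_and_failover = {
    "EnableIAMDatabaseAuthentication": True,  # recommended in the demo
    "PromotionTier": promotion_tier(1),       # set per instance, not per cluster
}
print(auth_and_failover["PromotionTier"])
```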

Down to Backup, these are our automatic backups. And again, we can choose our retention period, which is essentially the number of days that it should keep its automated backups. Encryption is enabled by default through KMS, and here it's using the AWS-managed RDS KMS key as its default key, which is fine. But if you'd like to select a different one, then you can select any CMKs that you have created. As we can see here, we have one I've created called MyCMK. But I'll just leave it as the default AWS-managed KMS key. If you'd like to export your logs, then you can tick the audit log and have that exported to CloudWatch Logs for further analysis.
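Backups, encryption, and log export are also cluster-level parameters. A sketch under two assumptions: retention follows the 1-35 day window Neptune shares with other Aurora-based services, and leaving `KmsKeyId` out falls back to the AWS-managed key, as in the demonstration:

```python
# Backup, encryption, and log-export settings as cluster parameters.
# Omitting KmsKeyId is assumed to select the AWS-managed RDS key,
# matching the default taken in the demo.
def backup_params(retention_days: int) -> dict:
    """Build backup/encryption parameters, validating retention (sketch)."""
    if not 1 <= retention_days <= 35:
        raise ValueError("retention period must be between 1 and 35 days")
    return {
        "BackupRetentionPeriod": retention_days,
        "StorageEncrypted": True,                  # encryption on by default
        "EnableCloudwatchLogsExports": ["audit"],  # export the audit log
    }

print(backup_params(7)["BackupRetentionPeriod"])
```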

When it comes to maintenance, you can enable auto minor version upgrade, and this will automatically upgrade to any new minor versions as they are released. And again, we have a maintenance window. So you can select a predefined window where you'd like any maintenance to be scheduled, or select no preference. And finally, you can enable deletion protection. And this simply stops anyone from going ahead and accidentally deleting your database. If you want to delete the database, then you have to modify the settings, uncheck that check box, and then you can delete the database. So like I say, it prevents any accidental deletion, you have to do it with intent. Just gonna leave that unchecked. And then once you've set all your settings, simply select Create database.
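Deletion protection turns a delete into a deliberate two-step operation: modify the cluster to switch the protection off, then delete it. This sketch just records the call order rather than calling AWS, so the intent of the feature is visible in code:

```python
# With deletion protection on, a delete requires a modify call first.
# This records the sequence of API calls instead of making them.
def delete_cluster_calls(cluster_id: str, protected: bool) -> list:
    """Return the ordered (api_name, params) calls needed to delete (sketch)."""
    calls = []
    if protected:
        calls.append(("modify_db_cluster",
                      {"DBClusterIdentifier": cluster_id,
                       "DeletionProtection": False}))
    calls.append(("delete_db_cluster", {"DBClusterIdentifier": cluster_id}))
    return calls

print([name for name, _ in delete_cluster_calls("database-1", protected=True)])
```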

As we can see, it's creating our database. If we went for the multi-AZ option, then we'd have another instance under this cluster as well. Now, that will just take a few minutes to start up and create. We can now see that the cluster is available, but it's still creating our database instance. We can now see the availability zone that it's placed that instance in, so it's eu-west-1b. Okay, we can now see that the database instance is also available.

So let's just take a quick look at these. So firstly, the cluster. As we can see here, we can see the connectivity of the cluster, so we have the cluster endpoint that I mentioned in the previous lecture, and also the reader endpoint as well.
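The split between those two endpoints is the key design point: the cluster endpoint always resolves to the primary instance and should receive writes, while the reader endpoint load-balances read traffic across the replicas. A sketch of routing by operation type, with illustrative hostnames rather than real ones:

```python
# Route writes to the cluster endpoint (primary) and reads to the
# reader endpoint (replicas). Hostnames below are illustrative only.
ENDPOINTS = {
    "cluster": "database-1.cluster-abc123.eu-west-1.neptune.amazonaws.com",
    "reader": "database-1.cluster-ro-abc123.eu-west-1.neptune.amazonaws.com",
}

def endpoint_for(operation: str) -> str:
    """Pick the endpoint appropriate to a read or write operation (sketch)."""
    return ENDPOINTS["cluster"] if operation == "write" else ENDPOINTS["reader"]

print(endpoint_for("read"))
```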

For the monitoring, we can see some of the CloudWatch metrics that have been used. And then if we look at logs and events, we can see any logs that are being generated. Configuration is essentially the different options that we selected during the creation of the cluster. Similarly with the maintenance and backups, any maintenance or backups that are scheduled. And also our tags. And it's a similar story for the actual instance itself. So let's take a look at that.

We have the same options, connectivity and security, so we can see which VPC it resides in, the subnet group, and the subnets associated with that subnet group. We have the endpoint. Again, different CloudWatch metrics that are being captured. Any logs and events. Configuration, again, this shows a lot of the configuration options that we defined during its creation, such as the KMS key and the instance size, etc., and any maintenance windows that have been scheduled.

So it's as simple as that, it's very easy to set up, and many of the configuration options are similar to the previous databases that we've also set up within this course.

Lectures

Course Introduction - Amazon Relational Database Service - DEMO: Creating an Amazon RDS Database - Amazon DynamoDB - DEMO: Creating a DynamoDB Database - Amazon ElastiCache - DEMO: Creating an ElastiCache Cluster - Amazon Neptune

About the Author
Students: 108,056
Labs: 1
Courses: 90
Learning paths: 61

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 80+ courses relating to cloud computing, reaching over 100,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for sharing his knowledge of cloud services with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.