DEMO: Creating an Amazon Neptune Database
Difficulty: Beginner
Duration: 1h 55m
Students: 5462
Ratings: 4.9/5
Description

Please note that this course has been replaced with a new version that can be found here: https://cloudacademy.com/course/databases-saa-c03/databases-saa-c03-introduction/

This section of the Solution Architect Associate learning path introduces you to the AWS database services relevant to the SAA-C02 exam. It then explains the service options available and how to select and apply AWS database services to meet specific design scenarios relevant to the Solution Architect Associate exam.


Learning Objectives

  • Understand the various database services that can be used when building cloud solutions on AWS
  • Learn how to build databases using Amazon RDS, DynamoDB, and Redshift
  • Learn how to create ElastiCache and Neptune clusters
  • Understand AWS database costs 
Transcript

Hello, and welcome to this lecture, which is going to be a demonstration on how to create an Amazon Neptune database. So, I'm at the AWS Management Console, and we need to go down to Databases and then select Amazon Neptune. If you don't have any Neptune databases created yet, then you'll be presented with this screen. Simply select Launch Amazon Neptune. And now we're at the configuration screen for our database.

Now, the first option we have is the engine option. So this is essentially the version of Neptune that you can use. I'm just going to select the latest one, which is the default. Then down to Settings, this is the database cluster identifier. So this will be the name of our database cluster. I'm just going to leave it as the default, database-1, for this demonstration. But just be aware that the name must be unique within the region across all the database clusters that you own.
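If you'd rather script this step than click through the console, here's a minimal boto3 sketch of the same cluster creation. The region, identifier, and engine version are placeholder/assumed values for illustration; omitting EngineVersion takes the current default.

```python
import boto3

# Neptune's management API mirrors RDS. The region, identifier, and
# engine version below are placeholder values for this demonstration.
neptune = boto3.client("neptune", region_name="eu-west-1")

response = neptune.create_db_cluster(
    DBClusterIdentifier="database-1",   # must be unique within the region for your account
    Engine="neptune",
    EngineVersion="1.2.1.0",            # assumed version; omit to take the current default
)
print(response["DBCluster"]["Status"])  # typically "creating"
```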

Next, we have templates. So we have either production or development and testing. The production one uses defaults for high availability and fast, consistent performance, whereas the dev and test won't have as many high availability features. So, let's go with the production template.

We can then select our database instance size. So we have a number of different instances here, all offering different quantities of vCPUs and memory, etc. So I'm just going to select the smallest one for this demonstration.
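In the API, the instance size is set when you add an instance to the cluster. A minimal sketch, assuming the db.t3.medium class and the placeholder identifiers used above:

```python
# Add the primary (writer) instance to the cluster created above.
# "db.t3.medium" is an assumed small instance class for a demo.
neptune.create_db_instance(
    DBInstanceIdentifier="database-1-instance-1",
    DBInstanceClass="db.t3.medium",
    Engine="neptune",
    DBClusterIdentifier="database-1",
)
```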

Down to Availability & durability, we can either create a multi-AZ deployment of Amazon Neptune, or have a single Availability Zone deployment. If you want high availability, then you'd create read replicas in different Availability Zones. For this demonstration, I'm just going to select no to multi-AZ.
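If you did want the multi-AZ style of deployment, the scripted equivalent is simply to add a replica instance in a different Availability Zone. A sketch, with the zone name as an assumption:

```python
# A replica in a different Availability Zone gives the cluster a
# failover target. The zone name here is a placeholder for this demo.
neptune.create_db_instance(
    DBInstanceIdentifier="database-1-replica-1",
    DBInstanceClass="db.t3.medium",
    Engine="neptune",
    DBClusterIdentifier="database-1",
    AvailabilityZone="eu-west-1a",   # a different zone from the primary
)
```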

When we come down to connectivity, we can select the VPC that we want to deploy the Neptune database in, so select your VPC. If we expand this additional connectivity configuration, we can then select a subnet group, which will define which subnets within your VPC Amazon Neptune can use. In this VPC, I only have the default subnet group. Then we have our VPC security group, which acts as a virtual firewall defining what traffic can talk to your database and over which ports. So you can select an existing security group, or you can create a new security group. I'm just going to select the default security group for this demonstration.
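Scripted, the connectivity settings are a subnet group you create once, plus parameters on the same create_db_cluster call shown earlier, expanded here just to show where they sit (this also covers the port and tags mentioned next). The subnet and security group IDs are placeholders, and 8182 is Neptune's default port.

```python
# Subnet group: the subnets Neptune is allowed to place instances in.
# The subnet and security group IDs are placeholders for this demo.
neptune.create_db_subnet_group(
    DBSubnetGroupName="neptune-demo-subnets",
    DBSubnetGroupDescription="Subnets Neptune may use in this VPC",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

neptune.create_db_cluster(
    DBClusterIdentifier="database-1",
    Engine="neptune",
    DBSubnetGroupName="neptune-demo-subnets",
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],  # the virtual firewall
    Port=8182,                                     # Neptune's default port
    Tags=[{"Key": "environment", "Value": "demo"}],
)
```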

As we can see, it's added it in there. You can then decide which zone you'd like to place your database in, if you have a preference, that is, or you can select no preference. And also the port that it will use for application connections. If you'd like to, you can add a tag for your database.

And then finally, if we look at additional configuration, we have a number of database options, so we could provide a name for the actual instance of the database. Again, we have parameter groups, which we've discussed in the previous lectures, and we have a parameter group for the actual cluster itself, and also one for the individual database instance.
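A quick sketch of creating those two levels of parameter group via the API; the group names and the neptune1.2 family are assumptions for illustration:

```python
# Cluster-level and instance-level parameter groups, mirroring the two
# levels shown in the console. Names and family are assumed values.
neptune.create_db_cluster_parameter_group(
    DBClusterParameterGroupName="neptune-demo-cluster-params",
    DBParameterGroupFamily="neptune1.2",
    Description="Cluster-level parameters for the demo",
)
neptune.create_db_parameter_group(
    DBParameterGroupName="neptune-demo-instance-params",
    DBParameterGroupFamily="neptune1.2",
    Description="Instance-level parameters for the demo",
)
```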

For authentication, you can enable IAM database authentication as well, which will manage access through IAM users and roles. For security purposes, I recommend enabling that. You can define a failover priority, and the failover priority allows you to define, for your Neptune replicas, which one should be promoted to be the primary instance should your primary instance fail. And the priorities range from 0 for the highest priority to 15 for the lowest priority. And you can configure each replica with a different priority. I'll just leave that as the default, no preference.
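In the API, IAM authentication is a cluster-level flag, and the failover priority maps to PromotionTier on each instance. A sketch using the placeholder identifiers from earlier:

```python
# Enable IAM database authentication on the cluster, and set the failover
# priority (PromotionTier: 0 = highest, 15 = lowest) on a replica.
neptune.modify_db_cluster(
    DBClusterIdentifier="database-1",
    EnableIAMDatabaseAuthentication=True,
    ApplyImmediately=True,
)
neptune.modify_db_instance(
    DBInstanceIdentifier="database-1-replica-1",
    PromotionTier=1,
)
```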

Down to Backup, these are our automatic backups. And again, we can choose our retention period, which is essentially the number of days that it should keep its automated backups. Encryption is enabled by default through KMS, and here it's using the AWS RDS KMS key as its default key, which is fine. But if you'd like to select a different one, then you can select any CMKs that you have created. As we can see here, we have one I've created called MyCMK. But I'll just leave it as the default AWS managed KMS key. If you'd like to export your logs, then you can tick the audit log and have that exported to CloudWatch Logs for further analysis.
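Scripted, those backup, encryption, and log-export choices are parameters on the create call. The retention period and the commented-out KMS key ARN below are assumptions; leaving KmsKeyId out falls back to the AWS managed key.

```python
# Backup retention, encryption at rest, and audit-log export expressed
# as create_db_cluster parameters (shown alongside the earlier settings).
neptune.create_db_cluster(
    DBClusterIdentifier="database-1",
    Engine="neptune",
    BackupRetentionPeriod=7,                 # days to keep automated backups (assumed)
    StorageEncrypted=True,                   # KMS encryption at rest
    # KmsKeyId="arn:aws:kms:eu-west-1:111122223333:key/<key-id>",  # optional CMK
    EnableCloudwatchLogsExports=["audit"],   # ship the audit log to CloudWatch Logs
)
```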

When it comes to maintenance, you can enable auto minor version upgrade, and this will automatically upgrade to any new minor versions as they are released. And again, we have a maintenance window. So you can select a predefined window where you'd like any maintenance to be scheduled, or select no preference. And finally, you can enable deletion protection. And this simply stops anyone from going ahead and accidentally deleting your database. If you want to delete the database, then you have to modify the settings, uncheck that checkbox, and then you can delete the database. So, like I say, it prevents any accidental deletion; you have to do it with intent. I'm just going to leave that unchecked. And then once you've set all your settings, simply select Create database.
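The maintenance and protection settings can also be applied after the fact. A sketch, with the maintenance window value (UTC, ddd:hh24:mi format) as an assumption:

```python
# Maintenance window and deletion protection at the cluster level,
# auto minor version upgrade at the instance level.
neptune.modify_db_cluster(
    DBClusterIdentifier="database-1",
    PreferredMaintenanceWindow="sun:02:00-sun:03:00",  # assumed window
    DeletionProtection=True,     # must be turned off again before a delete succeeds
    ApplyImmediately=True,
)
neptune.modify_db_instance(
    DBInstanceIdentifier="database-1-instance-1",
    AutoMinorVersionUpgrade=True,
)
```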

As we can see, it's creating our database. If we went for the multi-AZ option, then we'd have another instance under this cluster as well. Now, that will just take a few minutes to start up and create. We can now see that the cluster is available, but it's still creating our database instance. We can now see the availability zone that it's placed that instance in, so it's eu-west-1b. Okay, we can now see that the database instance is also available.
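If you're scripting this, you can watch for that same progression, cluster available first and then the instance, by polling the describe calls. A simple sketch:

```python
import time

# Poll until the cluster and all of its instances report "available".
while True:
    cluster = neptune.describe_db_clusters(
        DBClusterIdentifier="database-1"
    )["DBClusters"][0]
    instances = neptune.describe_db_instances(
        Filters=[{"Name": "db-cluster-id", "Values": ["database-1"]}]
    )["DBInstances"]
    states = [cluster["Status"]] + [i["DBInstanceStatus"] for i in instances]
    print(states)
    if all(state == "available" for state in states):
        break
    time.sleep(30)
```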

So let's just take a quick look at these. So firstly, the cluster. As we can see here, we can see the connectivity of the cluster, so we have the cluster endpoint that I mentioned in the previous lecture, and also the reader endpoint as well.
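Those two endpoints also come back on the cluster description, which is handy if you want to feed them into an application. A minimal sketch:

```python
# Fetch the writer (cluster) endpoint, the reader endpoint, and the port
# from the cluster description.
cluster = neptune.describe_db_clusters(
    DBClusterIdentifier="database-1"
)["DBClusters"][0]
print("Writer endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])
print("Port:           ", cluster["Port"])
```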

For the monitoring, we can see some of the CloudWatch metrics that are being captured. And then if we look at logs and events, we can see any logs that are being generated. Configuration is essentially the different options that we selected during the creation of the cluster. Similarly with maintenance and backups, any maintenance or backups that are scheduled. And also our tags. And it's a similar story for the actual instance itself. So let's take a look at that.
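The same metrics are available programmatically from the AWS/Neptune CloudWatch namespace. A sketch pulling one hour of CPU utilisation for the cluster (the metric choice and period are assumptions for illustration):

```python
from datetime import datetime, timedelta, timezone

# Pull one of the metrics shown on the monitoring tab from CloudWatch.
cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Neptune",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBClusterIdentifier", "Value": "database-1"}],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,                  # 5-minute datapoints
    Statistics=["Average"],
)
print(stats["Datapoints"])
```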

We have the same options, connectivity and security, so we can see which VPC it resides in, the subnet group, and the subnets associated with that subnet group. We have the endpoint. Again, different CloudWatch metrics that are being captured. Any logs and events. Configuration, again, this is a lot of the configuration options that we defined during its creation, such as the KMS key and the instance size, etc., and any maintenance windows that have been scheduled.

So it's as simple as that, it's very easy to set up, and many of the configuration options are similar to the previous databases that we've also set up within this course.

About the Author
Students: 229933
Labs: 1
Courses: 216
Learning Paths: 178

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ by Experts Exchange for his knowledge sharing on cloud services with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.