This section of the AWS Certified Solutions Architect - Professional learning path introduces the AWS database services relevant to the SAP-C02 exam. It then examines the service options available and explains how to select and apply AWS database services to meet the specific design scenarios you'll encounter on the exam.
- Understand the various database services that can be used when building cloud solutions on AWS
- Learn how to build databases using Amazon RDS, DynamoDB, Redshift, DocumentDB, Keyspaces, and QLDB
- Learn how to create ElastiCache and Neptune clusters
- Understand which AWS database service to choose based on your requirements
- Discover how to use automation to deploy databases in AWS
- Learn about data lakes and how to build a data lake in AWS
So let’s say you have a DynamoDB table - how do you then interact with it? There are four main ways you can interact with DynamoDB.
You can use the AWS console, which provides a graphical interface for managing your DynamoDB tables. With the console, you can look at the data in your tables, and add and modify your data.
You can also use the AWS Command Line Interface (CLI) to script API calls from a terminal window.
You can write code that interacts with DynamoDB programmatically using the AWS Software Development Kits, or SDKs. These SDKs are available for most major languages, such as Java, .NET, PHP, Python, Ruby, Go, and C++.
Lastly, you can use the NoSQL Workbench for DynamoDB, which provides a visual IDE where you can create, manage, and query your DynamoDB tables.
You can use any of these methods to access the DynamoDB application programming interface or API. The DynamoDB API is organized as a set of operations that you can execute on your DynamoDB tables.
Each operation has a name, a set of parameters required to complete the request, and a set of outputs that are sent back in the response.
For example, take the CreateTable API call. As the name suggests, this API call creates a DynamoDB table. However, to run this command successfully, you have to specify certain parameters, such as the table name and how many read capacity units (RCUs) and write capacity units (WCUs) you'd like to provision for that table. If you form the command correctly, you get a response acknowledging that the operation was successful.
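As a sketch, a CreateTable request might carry parameters like the following. This uses boto3-style parameter names; the table name, attributes, and capacity values are made up for illustration:

```python
# Hypothetical CreateTable request parameters (boto3-style names).
# Table name, attributes, and capacity values are illustrative only.
create_table_params = {
    "TableName": "Music",                                    # required: name of the new table
    "AttributeDefinitions": [
        {"AttributeName": "Artist", "AttributeType": "S"},   # "S" = string type
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "Artist", "KeyType": "HASH"},      # partition key
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},  # sort key
    ],
    "ProvisionedThroughput": {                               # the RCUs and WCUs to provision
        "ReadCapacityUnits": 5,
        "WriteCapacityUnits": 5,
    },
}

# With the boto3 SDK installed and AWS credentials configured, you would run:
# import boto3
# client = boto3.client("dynamodb")
# response = client.create_table(**create_table_params)
```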
In this lecture, we’ll be talking about the three main categories of operations using the DynamoDB API.
The first set of operations are called control plane operations. These operations let you manage the DynamoDB tables in your account. So, if you want to list out the tables you have in your account, you could use the ListTables API call. If you want information about a particular table, you can use the DescribeTable API call. And if you want to modify or make changes to your tables, you can use the CreateTable, UpdateTable, and DeleteTable API calls.
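These control plane calls can be sketched with the boto3 SDK as follows. The calls themselves are commented out since they need AWS credentials; the response shape shown below them is illustrative:

```python
# Sketch of control plane calls using the boto3 SDK (assumed available);
# commented out because they require live AWS credentials.
# import boto3
# client = boto3.client("dynamodb")
# client.list_tables()                       # list tables in the account
# client.describe_table(TableName="Music")   # details for one table
# client.delete_table(TableName="Music")     # remove a table

# Illustrative shape of a ListTables response:
list_tables_response = {"TableNames": ["Music", "Orders"]}
```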
Then you have the next category of operations, which are data plane operations. These enable you to perform create, read, update, and delete (or CRUD operations) on your DynamoDB tables. For example, if you want to read data from your DynamoDB table, you have six operations you can use:
You can use the GetItem API call to read a single item. To make this call, you have to specify the exact primary key of the item you're looking for, meaning you need to specify the partition key, and the sort key if it's used. This call returns either 0 or 1 items, depending on whether the item exists.
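A minimal sketch of a GetItem request, assuming the illustrative Music table with an Artist partition key and SongTitle sort key:

```python
# Hypothetical GetItem request: the full primary key (partition key plus
# sort key, since this example table uses one) must be specified.
get_item_params = {
    "TableName": "Music",
    "Key": {
        "Artist": {"S": "No One You Know"},   # partition key
        "SongTitle": {"S": "Call Me Today"},  # sort key
    },
}

# client.get_item(**get_item_params) returns {"Item": {...}} if the item
# exists, or a response with no "Item" key at all if it does not.
```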
Then you have the BatchGetItem API call. This is equivalent to making multiple GetItem calls to read items, except the requests are batched, which makes the call more efficient than issuing several GetItem calls separately. It can return up to 100 items.
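A BatchGetItem request batches several full primary keys into one call. The table and key values here are illustrative:

```python
# Hypothetical BatchGetItem request: multiple full primary keys batched
# into a single call (up to 100 items per request).
batch_get_params = {
    "RequestItems": {
        "Music": {  # keys are grouped under the table name
            "Keys": [
                {"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}},
                {"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}},
            ]
        }
    }
}

# client.batch_get_item(**batch_get_params) returns the matched items
# grouped by table name under a "Responses" key.
```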
The next option is a Query. With a query, you only have to specify the partition key, and you can optionally add a sort key condition. You can also use filter expressions: DynamoDB reads the items that match your key conditions, applies the filter, and returns only the items that match what you're looking for, as a single result.
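A Query request might look like this sketch, where the key condition is required and the filter expression is optional. The attribute names and values are made up for illustration:

```python
# Hypothetical Query request: the partition key condition is required,
# the sort key condition and filter expression are optional.
query_params = {
    "TableName": "Music",
    "KeyConditionExpression": "Artist = :a AND begins_with(SongTitle, :t)",
    "FilterExpression": "AlbumTitle = :album",  # applied after the key match
    "ExpressionAttributeValues": {
        ":a": {"S": "Acme Band"},
        ":t": {"S": "Happy"},
        ":album": {"S": "Songs About Life"},
    },
}

# client.query(**query_params) returns the matching items under "Items".
```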
Then you have a scan. With a scan, you don’t have to specify any keys. You are reading the entire table. Technically, with scans you can have filters too - however, it will still scan the whole table to see if the item matches the filter expression or not. This is the most expensive call out of all of them.
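A Scan request needs no keys at all, which is what makes it expensive; a sketch with an illustrative filter:

```python
# Hypothetical Scan request: no keys required. DynamoDB reads every item
# in the table, then applies the optional filter before returning results,
# so you are charged for the full table read either way.
scan_params = {
    "TableName": "Music",
    "FilterExpression": "AlbumTitle = :album",
    "ExpressionAttributeValues": {":album": {"S": "Songs About Life"}},
}

# client.scan(**scan_params) returns the filtered items under "Items".
```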
Finally, you can also use PartiQL to perform gets on your data, by using the API calls ExecuteStatement and BatchExecuteStatement.
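An ExecuteStatement request expresses the same read as a PartiQL statement, DynamoDB's SQL-compatible query language. The table and value here are illustrative:

```python
# Hypothetical ExecuteStatement request using PartiQL. The "?" placeholder
# is bound to the value in "Parameters".
execute_statement_params = {
    "Statement": "SELECT * FROM Music WHERE Artist = ?",
    "Parameters": [{"S": "Acme Band"}],
}

# client.execute_statement(**execute_statement_params) returns the
# matching items under "Items".
```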
For modifying data, you use the PutItem API call to store a single new record, the UpdateItem API call to modify an existing record, and the DeleteItem API call to delete a single record. If you need to make many writes at once, you can use the BatchWriteItem API call. You can also use PartiQL to perform these operations, using the same API calls you use to read: ExecuteStatement and BatchExecuteStatement.
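The three single-item write calls can be sketched like this, with illustrative attribute names and values:

```python
# Hypothetical write requests: PutItem stores a new item, UpdateItem
# modifies attributes of an existing item, DeleteItem removes one.
put_item_params = {
    "TableName": "Music",
    "Item": {
        "Artist": {"S": "Acme Band"},
        "SongTitle": {"S": "Happy Day"},
        "AlbumTitle": {"S": "Songs About Life"},
    },
}

update_item_params = {
    "TableName": "Music",
    "Key": {"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}},
    "UpdateExpression": "SET AlbumTitle = :album",  # change one attribute
    "ExpressionAttributeValues": {":album": {"S": "Greatest Hits"}},
}

delete_item_params = {
    "TableName": "Music",
    "Key": {"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}},
}

# client.put_item(**put_item_params)
# client.update_item(**update_item_params)
# client.delete_item(**delete_item_params)
```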
Finally, you have transaction operations. For ACID compliance, you can use the built-in API calls to read and write called TransactGetItems and TransactWriteItems, respectively. Or you can use PartiQL via the ExecuteTransaction API call.
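A TransactWriteItems request groups several writes that succeed or fail as a unit; a sketch with illustrative items:

```python
# Hypothetical TransactWriteItems request: every write in "TransactItems"
# either succeeds together or fails together (ACID).
transact_write_params = {
    "TransactItems": [
        {"Put": {
            "TableName": "Music",
            "Item": {"Artist": {"S": "Acme Band"}, "SongTitle": {"S": "Happy Day"}},
        }},
        {"Delete": {
            "TableName": "Music",
            "Key": {"Artist": {"S": "Old Band"}, "SongTitle": {"S": "Old Song"}},
        }},
    ]
}

# client.transact_write_items(**transact_write_params)
```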
While I hit most of the major DynamoDB API calls in this lecture, I did not hit all of them. If you’d like more information for any of these DynamoDB operations or you’d like to see all the DynamoDB operations, you can find all the details you need in the AWS API reference. That’s it for this one - see you next time!
Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.