DynamoDB and CloudWatch monitoring: Amazon Web Services recently introduced a feature that integrates its DynamoDB and CloudWatch components. This integration allows you to collect and analyze performance metrics. In this post, we'll cover everything you need to know to get started using the two services together to monitor AWS resources.
Amazon DynamoDB is a fully managed NoSQL database that allows you to store and retrieve any amount of data and serve any level of request traffic. With DynamoDB, you can create tables that scale up or down with no loss in performance.
Amazon CloudWatch is designed to monitor and manage Amazon Web Services (AWS) resources.
CloudWatch allows you to collect and track AWS metrics. You can define rules and set threshold values for your metrics, and you can create alarms in CloudWatch that notify you when a threshold has been reached (we'll show you how later in this post). CloudWatch gathers information about application performance, resource utilization, and operational health.
Used together, CloudWatch takes the raw data emitted by DynamoDB and processes it into readable metrics. Follow these steps to retrieve CloudWatch data for a table created in DynamoDB from the AWS Management Console:
All of the available DynamoDB metrics will appear in the 'viewing list.' Use the checkboxes beside the resource names and metrics to select or deselect metrics in the results window. Graphs of the selected metrics are displayed at the bottom of the console.
You can also retrieve metrics for a DynamoDB table through the Command Line Interface or the AWS SDKs.
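As a sketch of what such a query looks like programmatically, here is a hedged example using the AWS SDK for Python (boto3). The table name "Music" and the one-hour window are illustrative assumptions, not values from this article; the equivalent CLI command is `aws cloudwatch get-metric-statistics`.

```python
# Hedged sketch: pulling ConsumedReadCapacityUnits for a DynamoDB table
# from CloudWatch. Table name and time window are assumptions.
from datetime import datetime, timedelta, timezone

def read_capacity_query(table_name, hours=1, period=300):
    """Build the parameters for CloudWatch's GetMetricStatistics call."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/DynamoDB",
        "MetricName": "ConsumedReadCapacityUnits",
        "Dimensions": [{"Name": "TableName", "Value": table_name}],
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Period": period,          # seconds; 300 matches the 5-minute interval
        "Statistics": ["Sum"],
    }

# With AWS credentials configured, the actual call would be:
#   import boto3
#   cloudwatch = boto3.client("cloudwatch")
#   stats = cloudwatch.get_metric_statistics(**read_capacity_query("Music"))
```

The parameter builder is separated from the network call so you can inspect (or test) the request before sending it.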
How to Set Up CloudWatch Monitoring and Alarms
CloudWatch alarms provide near-real-time notification of events in your AWS resources. You will need to use the DynamoDB console to set these alarms. Then, follow these steps:
Once the alarm has been created, you can add the trigger condition in the 'whenever' text box. To set limits, use the next text box, which takes the average per second. You can also set a specific time period for the alarm.
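The same alarm that the console steps above describe can also be expressed through CloudWatch's PutMetricAlarm API. The sketch below is a hedged illustration: the alarm name, the SNS topic ARN, and the threshold of 240 consumed units are assumptions, not values from this article.

```python
# Hedged sketch of a CloudWatch alarm on a DynamoDB table, expressed as
# parameters for the PutMetricAlarm API. Names and threshold are
# illustrative assumptions.
def capacity_alarm(table_name, threshold, sns_topic_arn):
    """Build PutMetricAlarm parameters that fire when consumed write
    capacity for a table exceeds `threshold` over a 5-minute period."""
    return {
        "AlarmName": f"{table_name}-consumed-write-capacity",
        "Namespace": "AWS/DynamoDB",
        "MetricName": "ConsumedWriteCapacityUnits",
        "Dimensions": [{"Name": "TableName", "Value": table_name}],
        "Statistic": "Sum",
        "Period": 300,                 # the alarm's time period, in seconds
        "EvaluationPeriods": 1,
        "Threshold": threshold,        # the 'whenever' trigger condition
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],
    }

# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(
#       **capacity_alarm("Music", 240, "arn:aws:sns:us-east-1:123456789012:ops"))
```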
DynamoDB sends metrics to CloudWatch only when they have a non-zero value. For example, when a request generates an HTTP 400 status code, the UserErrors metric is incremented. If there are no HTTP 400 responses during a given period, no data point is reported for UserErrors. Also, Amazon CloudWatch uses different time intervals for DynamoDB metrics: some metrics are reported at one-minute intervals, while all others are reported every five minutes. The following metrics are available from Amazon DynamoDB:
ConditionalCheckFailedRequests: A conditional write specifies a logical condition that must evaluate to true before the operation proceeds. Each time the condition evaluates to false, ConditionalCheckFailedRequests is incremented by one.
ConsumedReadCapacityUnits: The total read capacity consumed for a table and its global secondary indexes.
ConsumedWriteCapacityUnits: The write capacity units consumed during a period of time, which lets you track usage against your provisioned throughput.
OnlineIndexConsumedWriteCapacity: The number of write capacity units consumed while adding a new global secondary index to a table.
OnlineIndexPercentageProgress: The percentage of completion of a new global secondary index being added to a table.
OnlineIndexThrottleEvents: The number of write throttle events recorded while a new global secondary index is being added to a table.
ProvisionedReadCapacityUnits: The number of read capacity units provisioned for a table or global secondary index.
ProvisionedWriteCapacityUnits: The number of write capacity units provisioned for a table or global secondary index.
ReadThrottleEvents: Incremented by one each time a read request is throttled.
ReturnedBytes: The number of bytes returned by GetRecords operations during the specified period.
ReturnedItemCount: The number of items returned by Query or Scan operations during the specified period.
ReturnedRecordsCount: The number of stream records returned by GetRecords operations during the specified period.
SuccessfulRequestLatency: The elapsed time for successful requests, along with a count of successful requests.
SystemErrors: Incremented each time a request to DynamoDB generates an HTTP 500 status code during the specified period.
ThrottledRequests: Incremented by one each time a request exceeds the provisioned throughput limit.
UserErrors: Incremented each time a request to DynamoDB generates an HTTP 400 status code during the specified period.
WriteThrottleEvents: Incremented when writes to a table or global secondary index exceed the provisioned write capacity.
Successful monitoring requires solid metrics. With these two services integrated, you can use CloudWatch to conveniently monitor the tables you create in DynamoDB.
DynamoDB tables are distributed across many partitions. To get the best results, design your tables and applications so that read and write operations are spread evenly across those partitions. Avoid I/O hotspots, as they can degrade performance. Individual DynamoDB items are limited in size, but you can add a limitless number of items to a table.
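One common technique for spreading writes evenly, as the paragraph above recommends, is "write sharding": appending a suffix to a hot partition key so items land on several partitions instead of one. The sketch below is a minimal illustration under assumed names (`orders`, 10 shards); it is not from this article.

```python
# Hedged sketch of write sharding: distribute writes for a hot partition
# key across several logical shards. Shard count is an assumption.
import random

SHARD_COUNT = 10  # illustrative; tune to your write volume

def sharded_key(base_key, shard_count=SHARD_COUNT):
    """Return a partition key like 'orders#7', chosen uniformly at random,
    so writes to `base_key` spread across `shard_count` partitions."""
    return f"{base_key}#{random.randrange(shard_count)}"

def all_shards(base_key, shard_count=SHARD_COUNT):
    """Readers must query every shard and merge the results."""
    return [f"{base_key}#{i}" for i in range(shard_count)]
```

The trade-off is that reads now fan out across all shards, so this suits write-heavy keys whose reads can tolerate a scatter-gather query.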
CloudWatch can monitor AWS products at a basic level or in detail. With basic monitoring, CloudWatch reports data points at five-minute intervals; with detailed monitoring, data points are available every minute. You will get the most out of this integration by applying a thorough understanding of the DynamoDB metrics explained above. Good luck!