
Adding More CloudWatch Operations

Overview

Difficulty: Intermediate
Duration: 1h 44m
Students: 1034

Description

In this group of lectures, we will introduce you to the Pizza Time business and system requirements. Pizza Time requires a simple ordering solution that can be implemented quickly and with minimal cost, so we will do a hands-on deployment of version 1.0 of our solution. To achieve that, we are going to use a single region and deploy the application using AWS Elastic Beanstalk. We will implement a simple database layer that will be capable of delivering on these initial requirements for Pizza Time.

It turns out that the Pizza Time business is a success, and the business now wants to target a global audience. We discuss and design how we can increase the availability of our initial Pizza Time application, then begin to deploy V2 of Pizza Time: a highly available, fault-tolerant business application.

Transcript

Hi and welcome to this lecture.

In this lecture, we're going to pick up where we left off in the last lecture. We are going to define units, and we are also going to talk about aggregation: how CloudWatch aggregates its data. Then we are going to have a demo on how to create a custom metric: we are going to send a custom metric to CloudWatch, we are also going to install the custom monitoring scripts for Linux instances, and we are going to use the AWS CLI to change the state of the alarm that we created in the last lecture.

So, this is more or less the data that we need to send every time we send a new custom metric to CloudWatch. We need to define a metric name; this is mandatory, so we always need to define that.

We can define some dimensions for our metrics, but this is not mandatory. If we don't specify a timestamp, AWS will take the current time, so this is not a mandatory parameter either.

We need to define a value for our metric, or the statistic values; we are going to talk about those later on. The thing that we haven't defined yet is the unit, so we also need to define the unit of our metric. Let's take a look at units.
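
Putting these parameters together, a single put-metric-data call from the AWS CLI would look something like this (the dimension, timestamp, and value are illustrative; the namespace and metric name match the demo later in this lecture):

    aws cloudwatch put-metric-data \
      --namespace "Pizza Time" \
      --metric-name LoadTime \
      --dimensions Environment=Prod \
      --timestamp 2016-05-01T12:00:00Z \
      --value 0.42 \
      --unit Seconds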

A unit is the classification of the value inside a metric. These are the most common units that we can have: Seconds, Bytes, Bits, Percent, Count, Bytes/Second, Bits/Second, Count/Second, or None. None is the default value, so if we don't specify anything, AWS will use None as our unit. The complete list of units can be seen here.

So, we can also specify other values following more or less the same pattern. If we are thinking about bytes, we can specify Kilobytes, Megabytes, Terabytes, and so on. And we can specify Bytes/Second, Kilobytes/Second, and so on.

I said that the timestamp is not a mandatory parameter and that, if we don't specify one, AWS will use the current time. However, the timestamp can be up to two weeks in the past and up to two hours into the future. So, although we can specify a timestamp, it must fall within this window.

Talking now about aggregation: CloudWatch aggregates data to a minimum granularity of one minute. This means that AWS uses one-minute boundaries when aggregating data points. So if we are sending data each second to CloudWatch, for example, CloudWatch will aggregate all of that data within one minute, and when we try to access it, we will receive a statistic for the whole minute. In short, everything that we send to CloudWatch within a minute will live inside a single statistic. That's the most common way to send data to CloudWatch: you specify a metric name and a namespace, a value, and a timestamp (and, again, in this case, we are publishing single data points). All four metrics that we are sending in this example are going to live inside a single minute, because AWS will aggregate that data for us.
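
For example, these four calls, made within the same minute, all end up inside one one-minute statistic (the values are made up for illustration):

    # Four single data points published within the same minute...
    aws cloudwatch put-metric-data --namespace "Pizza Time" --metric-name LoadTime --value 0.40 --unit Seconds
    aws cloudwatch put-metric-data --namespace "Pizza Time" --metric-name LoadTime --value 0.45 --unit Seconds
    aws cloudwatch put-metric-data --namespace "Pizza Time" --metric-name LoadTime --value 0.38 --unit Seconds
    aws cloudwatch put-metric-data --namespace "Pizza Time" --metric-name LoadTime --value 0.51 --unit Seconds
    # ...CloudWatch returns them as a single statistic (SampleCount=4) for that minute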

But imagine that you have a monitoring system of some kind: if you were to publish single data points, you could be sending data each millisecond to CloudWatch, and that would be a problem because you would have to make a lot of API calls per second. What you can do instead is publish statistic sets.

Instead of having CloudWatch calculate these statistics for us, we calculate the statistics on our own and send the statistic values to CloudWatch. By doing this, we reduce the number of times that we call the put-metric-data API: instead of sending metrics each millisecond, we could send metrics every minute, or every five minutes, as long as we send the statistic values to CloudWatch.
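
A minimal sketch of what that looks like, assuming we pre-aggregated 120 load-time samples on the instance before publishing (the numbers are invented for illustration; the fields are named in the next paragraph):

    aws cloudwatch put-metric-data \
      --namespace "Pizza Time" \
      --metric-name LoadTime \
      --unit Seconds \
      --statistic-values SampleCount=120,Sum=54.0,Minimum=0.31,Maximum=0.97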

And for statistic values, as I showed a few slides ago, we need to specify a SampleCount, a Sum, and the Minimum and Maximum values. Let's now go to the AWS console and have a hands-on demonstration.

I will use the same instance that we created to launch the Pizza Time application to send some custom metrics to CloudWatch. Let's go to the EC2 console. And here we can see the instance that is running our application. In order to send data from the instance to CloudWatch, you would either have to configure an IAM user with access to CloudWatch, use its access keys, and configure the AWS CLI with those credentials as we did on our workstation machine, or, the best practice in this case, use roles. And since we used Elastic Beanstalk to launch this instance, Elastic Beanstalk already created a role for us. Let's take a look at this role.

If we take a look at the last managed policy, we can see that we already have permission to send metrics to CloudWatch. This is the API call that we want to make: we want to use put-metric-data.
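
For reference, a minimal standalone policy granting just this call would look something like the following (this is an illustrative policy, not the exact one Elastic Beanstalk created):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "cloudwatch:PutMetricData",
          "Resource": "*"
        }
      ]
    }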

We really don't need to do anything for this particular instance; we only need to access it and send some data to CloudWatch. So let's open up the terminal. What I'll do here is connect to this instance, so I need to select my key pair. And I need to take the Elastic IP address. I'm expecting to receive an error, which is not related to CloudWatch; it's just something that is good for you to see for troubleshooting.

OK, AWS says: unprotected private key file. Every time you create a new key pair, as we did, you need to protect that file. The file permissions are too open, so we need to fix that. To fix it, we change the permissions of the file, and then we can try again; this time I hope to receive a success message. So, everything worked as expected. Remember: every time you receive this warning about an unprotected private key file, you need to change the permissions on your key pair file.
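
The fix, for reference, looks like this (the key file name and Elastic IP are placeholders; ec2-user is the default user on Amazon Linux):

    chmod 400 my-key-pair.pem                 # owner read-only; SSH refuses keys that are too open
    ssh -i my-key-pair.pem ec2-user@<elastic-ip>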

I created a few monitoring scripts for us to use in this course, and they are available in the GitHub repository of this learning path. And to access GitHub, we need to install Git.
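
On an Amazon Linux instance, that would look something like this (the repository URL is a placeholder for the learning path's repo):

    sudo yum install -y git                   # Amazon Linux uses yum
    git clone <repository-url> pizza-time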

Now, let's enter the Pizza Time folder. And in here, we have a folder called scripts. The file that we are looking for is called send-custom-metrics.

First, let's give execute permission to this file. The file is very simple: we are defining a metric, and in this case I'm using the domain that I created. If you want to test using this same script and you haven't created a domain, you can put the Elastic Beanstalk environment URL here, and that will work just the same.

And here we are going to load the page and send the load time to CloudWatch. We are using the put-metric-data command; we are specifying a metric name, which is LoadTime; we are specifying a new namespace called Pizza Time; the value will come from the command that we are specifying here; the unit for this particular metric is Seconds; and I'm specifying a dimension here, which is Environment, with the value Prod. I will run the script. And now, if we go to the CloudWatch console, we should be able to see something different, so let's go there.
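
The script isn't reproduced verbatim here, but based on that description, a sketch of it would look something like this (replace the URL placeholder with your domain or Elastic Beanstalk environment URL):

    #!/bin/bash
    # Measure how long the page takes to load, then publish it as a custom metric
    URL="http://<your-environment>.elasticbeanstalk.com"
    LOAD_TIME=$(curl -o /dev/null -s -w '%{time_total}' "$URL")

    aws cloudwatch put-metric-data \
      --namespace "Pizza Time" \
      --metric-name LoadTime \
      --value "$LOAD_TIME" \
      --unit Seconds \
      --dimensions Environment=Prod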

Here, as you can see, we now have a new namespace selector called Custom Metrics, and if we select that, we can see the Pizza Time related metrics; we created a new namespace. This is a new dimension, and here is our metric. We have one data point at this time. And as I showed in the slides, if we keep sending data to CloudWatch within one minute, CloudWatch will aggregate all of it into a single minute, so you need to adjust your scripts if you want the LoadTime metric to be sent more than once per minute.
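
To see that aggregation from the command line, we could query the metric with a 60-second period, something like this (the time window is illustrative; note that get-metric-statistics uses a longer dimension syntax than put-metric-data):

    aws cloudwatch get-metric-statistics \
      --namespace "Pizza Time" \
      --metric-name LoadTime \
      --dimensions Name=Environment,Value=Prod \
      --start-time 2016-05-01T00:00:00Z \
      --end-time 2016-05-01T01:00:00Z \
      --period 60 \
      --statistics Average Maximum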

It's a best practice to send the statistics instead.

Also, if you look at the scripts folder, we have a file called install-custom-monitoring, and if we look at this file, what we are doing, basically, is going to the root folder, creating a new folder in there, entering that folder, and downloading the CloudWatch monitoring scripts from a scripts bucket provided by AWS.

We are going to unzip that file and then insert a new cron job that will run every five minutes on our machine, and this cron job will send information about our memory utilization and other custom metrics. These metrics are not monitored by default by CloudWatch. If you want this information, if you want to know about your memory utilization or your free disk space, you'll need to either create your own scripts or use the ones provided by AWS.
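
A sketch of what that install script does, assuming the Perl monitoring scripts that AWS documented for Linux instances at the time (the download URL and folder layout come from the AWS documentation and may differ slightly from the course's script):

    #!/bin/bash
    # Download and unpack the AWS CloudWatch monitoring scripts for Linux
    cd ~
    curl -O https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.1.zip
    unzip CloudWatchMonitoringScripts-1.2.1.zip && rm CloudWatchMonitoringScripts-1.2.1.zip

    # Add a cron job: every five minutes, report memory and disk space utilization
    (crontab -l 2>/dev/null; echo '*/5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --disk-space-util --disk-path=/ --from-cron') | crontab -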

So, again, I will change the permissions for this file. And now I will run it, but I need to run it as sudo. Now we have run our script, the monitoring scripts are installed, and we have a new cron job, so we just need to wait a few minutes until the instance starts sending some data to CloudWatch. I will stop the video and come back once we have a few new metrics in our CloudWatch console.

OK, our monitoring scripts are already running, and if we take a look here, we have a new namespace called Linux System, and inside this namespace, we can see a few new metrics. We can see the disk space used by our instance, and we can also see the memory utilization, among others.

Remember, memory utilization and disk space utilization are not monitored by default by CloudWatch. If you need this information, you need to either create your own monitoring scripts or use the monitoring scripts provided by AWS.

Let's now play a little bit with our CloudWatch alarm. I will change the state of the alarm that we created in the last lecture, and I will use the AWS CLI to do it. To use the AWS CLI to change the state of the alarm, we need to have it configured with proper credentials, and we need to have permission to change the alarm state. Once we have that, we can simply type aws cloudwatch set-alarm-state, and we need to provide the name of our alarm, so it will be --alarm-name "High CPU Alarm".

We need to provide the state value, and this value must be one of the states that the alarm can have; we covered those in the last lecture. In this case, I want to change it to ALARM.

And we need to provide a reason, so the state reason will be "for testing". That often happens too fast for us to see it in the AWS console, but we can see that I already received a notification, because we set that up in our alarm, so it's working. And if you want to see the status changes of this alarm, you can take a look at the history tab; if we select it here, we can see that we changed the state of the alarm for testing, the reason that we specified. And since everything is actually OK with this metric, CloudWatch will return the state of this alarm to OK after a little while, just because everything is OK and we changed the state of this alarm only for testing.
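
Putting the whole command together (using the alarm name from the last lecture):

    aws cloudwatch set-alarm-state \
      --alarm-name "High CPU Alarm" \
      --state-value ALARM \
      --state-reason "for testing"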

About the Author

Students: 13399
Labs: 11
Courses: 6

Eric Magalhães has a strong background as a systems engineer for both Windows and Linux systems and currently works as a DevOps consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless; thus, his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company, developing pieces of automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.