CloudWatch is a monitoring service for the cloud resources and applications you run on Amazon Web Services. CloudWatch can collect metrics, set and manage alarms, and automatically react to changes in your AWS resources. Amazon CloudWatch can monitor AWS resources such as Amazon EC2 instances, DynamoDB tables, and Amazon RDS DB instances, as well as custom metrics generated by your applications and services and any log files your applications produce. You'll see how to use Amazon CloudWatch to gain system-wide visibility into resource utilization and application performance, and how to use those insights to keep applications running smoothly. This course includes a high-level overview of how to monitor EC2, monitor other Amazon resources, monitor custom metrics, monitor and store logs, set alarms, graph and view statistics, and monitor and react to resource changes.
Intended Audience
- Systems administrators
- Operational Support
- Solution Architects working on AWS Certification
- Anyone concerned about monitoring data or AWS recurring billing
Prerequisites
- AWS Console login
- General knowledge of how to launch an Elastic Compute Cloud (EC2) instance on either Linux or Windows
- View CloudWatch Documentation at https://aws.amazon.com/cloudwatch/
- An operational EC2 (Windows/Linux)
Learning Objectives
- Monitor EC2 and other AWS resources
- Build custom metrics
- Monitor and store log information from Linux instances
- Set alarms for metrics to take action on an instance or auto-scaling group
- Create a dashboard to monitor EC2 instances
- React to load by triggering horizontal auto scaling within AWS
This Course Includes
- Over 90 minutes of high-definition video
- Console demos
What You'll Learn
- Course Intro: What to expect from this course
- Getting Started: How to launch an EC2 instance
- Building a Dashboard: How to take the metrics from the instance and create a dashboard
- Monitoring EC2 Instances: How and why you should be monitoring the environment in Amazon Web Services
- Sending Log Files to CloudWatch: A lesson on the importance of sending log files to CloudWatch
- Alarms: How to specify alarms
- Course Conclusion: Course summary
Welcome to the conclusion of this CloudWatch course. My name is Michael Bryant, and thank you for taking this course; I hope you found it informative. Let's review some of the things we've gone over.
CloudWatch is used for tracking metrics, trending, and setting alarms, and it provides us with consolidated resource logging. There are a couple of key points I'd like to make as we bring this course to a conclusion. It's very important to always review the current AWS whitepapers. They will provide you the most up-to-date information, tips, and techniques for completing the actions you want to perform.
Also, remember to set expirations on logs pushed to CloudWatch. This is particularly important because CloudWatch Logs stores that data for you, and you will be charged for the storage of those logs. Keep in mind that the expirations we set in CloudWatch apply only to the copies pushed to CloudWatch; the original log files stored locally on your server still have to be managed.
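As a sketch, the expiration can be set from the AWS CLI with a retention policy on the log group; the log group name below is a placeholder standing in for whichever group you pushed your logs to:

```shell
# Keep the pushed Apache access logs for 30 days, then let CloudWatch expire them.
# "/var/log/httpd/access_log" is a placeholder log group name from this course's example.
aws logs put-retention-policy \
    --log-group-name /var/log/httpd/access_log \
    --retention-in-days 30
```

Retention is per log group, so repeat the command for each group you create.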
In the example we used earlier, our t2.micro instance is collecting log files and no management tools were implemented; there's no log culling, if you will, on the actual server to trim those log files. Accordingly, if we allow Apache to keep writing to the access log and we get a lot of traffic on our site, it could very well run the server out of disk space. In a production environment you should account for this, and determine how often you want to cull these logs.
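One common way to handle this on the server itself is a logrotate rule; a minimal sketch, assuming a default Apache install on Amazon Linux (paths and the service name may differ on your distribution):

```
# /etc/logrotate.d/httpd -- rotate Apache logs daily, keep 7 compressed copies
/var/log/httpd/*log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>&1 || true
    endscript
}
```

With a rule like this in place, the local copies trim themselves while CloudWatch retains the pushed history.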
Remember that it's equally important to check the price class and service offerings per region, and whether you need basic or detailed monitoring. It's also important to remember to set up logging and point it to the correct region in the awslogs.conf file.
We looked at that earlier, and I purposely used the example of sending logs to the us-west-1 region, Northern California. We modified the awslogs.conf configuration, which defaults to us-east-1, to point to us-west-1 to demonstrate that. If you install based on the whitepaper and use the default AWS scripts provided, it will set up logging to us-east-1. If that's where you choose to log, that's great, but if you enable logging and can't find your log files in the region, I suggest you take a look at awslogs.conf, and then also remember to delete any logs you've sent to the incorrect region, as you will pay for their storage.
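For reference, with the legacy CloudWatch Logs agent the region typically lives in /etc/awslogs/awscli.conf, while the files to push are declared in /etc/awslogs/awslogs.conf. A sketch of both fragments, assuming the Apache example from this course (your agent version and paths may differ):

```
# /etc/awslogs/awscli.conf -- the agent defaults to us-east-1; change the region here
[plugins]
cwlogs = cwlogs
[default]
region = us-west-1

# /etc/awslogs/awslogs.conf -- which local file to push, and to which log group
[/var/log/httpd/access_log]
file = /var/log/httpd/access_log
log_group_name = /var/log/httpd/access_log
log_stream_name = {instance_id}
datetime_format = %d/%b/%Y:%H:%M:%S
```

After editing either file, restart the awslogs service so the agent picks up the new region.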
In the preceding lessons, I believe we've answered our key questions, and we now know why we're monitoring. We're monitoring to make sure that our customers can always get to our website, with our Route 53 DNS monitoring. We know what our performance and costs are, because we evaluated our regional costs for monitoring and logging, and we're keeping an eye on how our servers are holding up, including disk, network, and memory utilization.
We set up a dashboard that shows us trends. We'll be able to see over time whether we need to scale our environment, and we also have the ability to handle troubleshooting and remediation. We can see where a problem occurred, because we'll get notifications if there is an anomaly via the Simple Notification Service, or SNS. We also now have the ability to detect or prevent this problem in the future, because we have our trends, our notifications, and we know that our servers are online.
To eliminate any unnecessary charges we may have created in this course, I recommend that you delete the SNS topic server_managers that we created. You should terminate the instance we created, along with any unnecessary instances you created for this course. You should delete the log files we sent to CloudWatch, and it's also good practice to delete the dashboard. I recommend that you delete the SNS topic first; otherwise, you're going to get a bunch of emails, assuming you subscribed to the notifications Amazon sent you.
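The cleanup steps above can be sketched with the AWS CLI; every identifier below (topic ARN, instance ID, log group name, dashboard name) is a placeholder you would replace with your own values:

```shell
# Delete the SNS topic first, so you stop receiving alarm emails
aws sns delete-topic \
    --topic-arn arn:aws:sns:us-east-1:123456789012:server_managers

# Terminate the course EC2 instance
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0

# Delete the log group that was receiving the pushed logs
aws logs delete-log-group --log-group-name /var/log/httpd/access_log

# Delete the dashboard built earlier in the course
aws cloudwatch delete-dashboards --dashboard-names EC2-Monitoring
```

Double-check the region each command runs against, since the course deliberately logged to more than one.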
Amazon creates and pushes several hundred updates, changes, and new features to its production environment per year, which often equates to a new feature or features coming out every day. While it's a challenge to stay current on all the AWS documentation, before implementing CloudWatch again I recommend that you read the current whitepapers on CloudWatch and its subsequent documentation. You can certainly use this course as a blueprint for how to deploy CloudWatch in your environment.
It's impossible for this course to cover every iteration and all of the automated features that can be kicked off by CloudWatch. Several times in this course I've mentioned Auto Scaling. Auto Scaling is a great opportunity for CloudWatch, as you can use the alarms we created to cause Auto Scaling to add more servers to your environment. Rather than getting a message at three a.m. that you need to launch a new server and configure it, CloudWatch will tell the Auto Scaling group to use the launch configuration you specified to launch your server, and even add it to load balancers and begin balancing traffic. This is an excellent use of CloudWatch alarms and something I do frequently in real production environments.
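A sketch of wiring an alarm to an Auto Scaling group with the CLI, assuming the group already exists; the group name, policy name, and thresholds here are illustrative, and `put-scaling-policy` returns the policy ARN you then pass to the alarm:

```shell
# Create a scale-out policy on an existing Auto Scaling group ("web-asg" is a placeholder)
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name scale-out-on-cpu \
    --adjustment-type ChangeInCapacity \
    --scaling-adjustment 1

# Alarm: average CPU above 70% for two consecutive 5-minute periods triggers the policy.
# Replace the --alarm-actions ARN with the one returned by put-scaling-policy above.
aws cloudwatch put-metric-alarm \
    --alarm-name web-asg-high-cpu \
    --namespace AWS/EC2 \
    --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=web-asg \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 70 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions <scaling-policy-arn>
```

A matching scale-in policy and low-CPU alarm are usually added the same way, so the group shrinks when traffic drops.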
I hope that you've enjoyed this course and found it informative. Be sure to check out the other offerings from CloudAcademy and thanks for joining me for this lesson on CloudWatch.
Network engineer and program analyst.