Course Introduction
Cost Management
Improve Planning and Cost Control with AWS Budgets
AWS Cost Management: Tagging
Understanding & Optimizing Storage Costs with AWS Storage Services
Monitoring for Underutilized Services in AWS
Using Instance Scheduler to Optimize Resource Cost
This section of the AWS Certified Solutions Architect - Professional learning path introduces you to cost management concepts and services relevant to the SAP-C02 exam. By the end of this section, you will know how to select and apply AWS services to optimize cost in scenarios relevant to the AWS Certified Solutions Architect - Professional exam.
Learning Objectives
- Learn how to improve planning and cost control with AWS Budgets
- Understand how to optimize storage costs
- Discover AWS services that allow you to monitor for underutilized resources
- Learn how the AWS Instance Scheduler may be used to optimize resource costs
Hello and welcome to this lecture, where I’ll be launching the AWS Instance Scheduler. To do this, I’ll be using the AWS-created CloudFormation template.
To find this template, go to the Instance Scheduler solution page and click “Launch in the AWS Console”. This redirects me to the CloudFormation console in my AWS account and prepopulates the template URL for me. The only thing I have to do is click Next.
Here, I’ll choose a stack name, such as InstanceScheduler. Then I can scroll down to the Parameters section, which is where I can customize this template for my environment. Let’s go through these one by one.
The first parameter is arguably the most important one: the tag name, which defaults to “Schedule” (notice the capital S). This is the tag key that identifies which instances will be stopped and started by this solution. You can change this default value to whatever you want to tag your instances with; just make sure you remember that you changed it. For now, I’ll leave it as the default.
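As a quick aside, you don’t have to apply that tag through the console. Here’s a minimal boto3 sketch, assuming the default “Schedule” tag key; the instance ID and the schedule name “office-hours” are placeholders, and the schedule itself would still need to exist in the solution’s configuration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag an instance so the Instance Scheduler will manage it.
# "Schedule" is the default tag key from the template parameters;
# the value ("office-hours" is a placeholder) must match a schedule
# defined in the solution's configuration.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)
```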
Then I can choose whether I want to schedule EC2 instances, RDS instances, or both. I’ll stick with EC2 for this demo, but if you choose to stop RDS instances, you have a few more options. For example, if you’re using the Aurora engine, you’ll need to select that option to stop your Aurora instances. Additionally, you can create a snapshot of your RDS instances before they are stopped, which gives you a backup of your data just in case. However, note that you cannot snapshot Aurora databases with this tool.
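For context, that snapshot-before-stop option maps to a capability of the RDS stop API itself. Here’s a rough sketch of the kind of call involved, with placeholder identifiers; the solution makes this sort of call for you, so this is only to show what the option means.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Stop an RDS instance and create a snapshot first, which is what the
# "create snapshot before stop" option handles automatically.
# Both identifiers below are placeholders.
rds.stop_db_instance(
    DBInstanceIdentifier="my-database",
    DBSnapshotIdentifier="my-database-before-stop",
)
```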
You can disable scheduling temporarily if you don’t want the Lambda function to be invoked until you’re ready. You can always update the stack and change this parameter later if you’d like. I want scheduling enabled, so I’ll leave this on.
Then you list all Regions you’d like the tool to run in. For example, if you had instances in us-west-2 and us-east-1 that you’d like to stop or start, you could type in us-west-2 and us-east-1. In my case, I’ll leave it blank, since I’m only stopping and starting instances in my current selected Region, which is us-east-1.
You can select which Time Zone you’re in. The default is UTC. I’m in the US/Pacific time zone, so I’ll scroll down and select US/Pacific.
If you’re setting up the Instance Scheduler to manage instances across multiple accounts, this is where you input the ARNs of the IAM roles that have access to those accounts. You can list multiple roles by separating the ARNs with commas. I’m not using cross-account scheduling, so I can leave this parameter blank.
You can additionally prevent the scheduler from stopping and starting instances in the current account if you’re using a multi-account approach. This would work well if you have separation of duties for your accounts. This way, the instance scheduler is launched in one account, while it stops and starts instances in another account.
Then, you choose a frequency for how often the Instance Scheduler function runs, in minutes. This is the number of minutes that pass before CloudWatch invokes the function again, and the function checks to see if it needs to perform work, such as stopping and starting your instances. For example, if I choose 15 minutes, the function performs stop and start actions every 15 minutes. To avoid throttling, it’s recommended to use a higher number of minutes if you have a lot of instances. However, I don’t, so I’m going to stick with the default of 5.
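You don’t create this trigger yourself (the template does it for you), but conceptually the frequency becomes a CloudWatch Events rate expression that invokes the function. A sketch with a hypothetical rule name, just to illustrate what a 5-minute frequency means:

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Illustration only: a rule that fires every 5 minutes, which is the
# kind of trigger a 5-minute frequency sets up. The rule name is
# hypothetical, not the one the stack actually creates.
events.put_rule(
    Name="instance-scheduler-every-5-minutes",
    ScheduleExpression="rate(5 minutes)",
    State="ENABLED",
)
```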
From there, you can choose how much power to provide to your Lambda function, by specifying the memory. If you have a lot of instances, it’s recommended to increase the amount of memory for your function.
Then, you specify whether you want CloudWatch metrics and logging. Enabling metrics does increase the cost of this solution, so keep that in mind.
You can also use AWS Systems Manager Maintenance Windows with this tool. Maintenance windows are used to set a schedule for tasks to be performed on your instances. If you want to use these schedules to stop and start your instances, you can do that, but you will need to select Yes to enable SSM maintenance windows for the stack.
I can choose how long I want to retain logs, in days.
And I can additionally automate adding extra tags to the instances that I start and stop. For example, if I wanted to tag an instance with its state information, I could automatically add those tags in these fields. When I start the instance, I can add the tag “state=started”, and when I stop the instance, I can update this tag to be “state=stopped”.
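Once those tags are in place, you can use them to find instances by their last scheduler action. A small sketch, assuming the “state” tag key and values shown above:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find instances the scheduler has tagged as stopped, assuming the
# "state=stopped" tag configured in the template parameters above.
response = ec2.describe_instances(
    Filters=[{"Name": "tag:state", "Values": ["stopped"]}]
)
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
```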
From there, I’ll click Next, then Next again. Then I’ll acknowledge that my CloudFormation template will be creating IAM resources, and click Create stack.
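If you’d rather script this step than click through the console, the same launch can be done with boto3. This is only a sketch under some assumptions: the template URL below is a placeholder, and the parameter keys are illustrative and may not match the template’s exact names.

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Launch the Instance Scheduler stack from a template URL (placeholder).
# CAPABILITY_IAM is the programmatic equivalent of acknowledging that
# the template creates IAM resources.
cfn.create_stack(
    StackName="InstanceScheduler",
    TemplateURL="https://example.s3.amazonaws.com/instance-scheduler.template",  # placeholder
    Parameters=[
        # Illustrative parameter keys; check the template for exact names.
        {"ParameterKey": "TagName", "ParameterValue": "Schedule"},
        {"ParameterKey": "DefaultTimezone", "ParameterValue": "US/Pacific"},
    ],
    Capabilities=["CAPABILITY_IAM"],
)
```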
From there, I can monitor the creation of the stack and ensure that my resources are created properly.
Once all of my resources have been successfully created, I can view them in the console by clicking the resources tab. This will provide a link to all the resources that my CloudFormation template created, such as the main Lambda function, the CloudWatch event and log group, the IAM roles and policies, and DynamoDB tables.
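You can pull that same resource list programmatically as well; a minimal sketch, assuming the stack name we chose earlier:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# List every resource the stack created: the Lambda function, IAM roles
# and policies, DynamoDB tables, CloudWatch resources, and so on.
resources = cfn.describe_stack_resources(StackName="InstanceScheduler")
for resource in resources["StackResources"]:
    print(resource["LogicalResourceId"], resource["ResourceType"], resource["ResourceStatus"])
```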
That’s it for this one - I’ll see you next time!
Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.