AWS Compute Fundamentals
AWS Elastic Beanstalk
This course provides details on the AWS Compute services relevant to the Developer - Associate exam. We shall be looking at Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda.
- Understand when to use Amazon EC2
- Learn about the components of Amazon EC2
- Learn how to create and deploy EC2 instances
- Understand what EC2 Auto Scaling is
- Be able to configure Auto Scaling launch configurations, launch templates, and Auto Scaling groups
- Explain what AWS Elastic Beanstalk is and what it is used for
- Describe the different environments that Elastic Beanstalk provides, allowing you to select the most appropriate option for your needs
- Explain how to configure the service and some of the parameters that you can alter to meet your application requirements
- Understand the different monitoring options available for assessing the health of your environment and resources
- Be able to explain what AWS Lambda is and what its uses are
- Define the components used within Lambda
- Explain the different elements of a Lambda function through its creation
- Understand the key differences between policies used within Lambda
- Recognize how event sources and event mappings are managed for both synchronous and asynchronous invocations
- Discover how Amazon CloudWatch can monitor metrics and logs to isolate issues with your functions
- Learn how to check for common errors that might be causing your functions to fail
Hello and welcome to this short lecture, which will discuss a number of options that are available for your Elastic Beanstalk application deployments. Understanding the different options that are available enables you to manage your infrastructure efficiently. Now Elastic Beanstalk provides the following deployment options: all at once, rolling, rolling with additional batch, and immutable. Let's assume you already have an environment created with an application deployed across a number of instances that are being managed by Elastic Beanstalk. With that in mind, let me explain how each of these deployments works, starting with all at once.
All at once is the default deployment option if you don't specify any other. If you need to update your application within your Elastic Beanstalk environment, the all at once option will simply roll out the update to all of your resources at the same time. This would, of course, cause a disruption to your application while the update was in progress, which would, in turn, affect your end users.
Rolling. With a rolling deployment, you are able to minimize the disruption caused by the all at once approach. When performing a rolling deployment, Elastic Beanstalk deploys your application in batches, updating just a portion of your resources at a time. Once a batch is complete, it then performs the update on the next batch. This means that you have two different versions of the application running at the same time for a short period. However, it also means that you can still serve requests and process information through your application while the deployment is gradually rolled out across your infrastructure.
Rolling with additional batch. Rolling with additional batch follows much the same principle as rolling: your environment is updated in batches until all your resources have the new update. However, with rolling deployments there is an impact on your available resources while the update is being applied. For example, let's say that during a rolling deployment Elastic Beanstalk creates four batches. While the update is being applied, your application availability takes a 25% hit while one batch is being updated. With rolling with additional batch, Elastic Beanstalk adds another batch of instances to your environment's resource pool to ensure application availability is not impacted. So in this case, your deployment would have five batches, and while the update was being applied to one of your existing batches, you would still have four batches of instances maintaining operation. On completion of the update to all batches, Elastic Beanstalk would then terminate this additional batch of instances.
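The capacity arithmetic above can be sketched in a few lines of code. This is purely an illustration of in-service capacity during a batch update, not anything Elastic Beanstalk itself exposes; the function names are hypothetical.

```python
# Hypothetical sketch: fraction of capacity still serving traffic while
# one batch at a time is updated, for plain rolling vs. rolling with
# additional batch.

def rolling_capacity(total_batches: int) -> float:
    """Plain rolling: one of the original batches is out of service."""
    return (total_batches - 1) / total_batches

def rolling_with_additional_batch_capacity(total_batches: int) -> float:
    """One extra batch is added first, so the updating batch is covered."""
    in_service = (total_batches + 1) - 1  # extra batch added, one updating
    return min(in_service / total_batches, 1.0)

# With four batches, plain rolling drops to 75% capacity during each
# batch update, while rolling with additional batch stays at 100%.
print(rolling_capacity(4))                        # 0.75
print(rolling_with_additional_batch_capacity(4))  # 1.0
```

The trade-off is that the additional batch means temporarily running extra instances, which carries a small cost for the duration of the deployment.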
Immutable. Immutable deployments create an entirely new set of instances in parallel with your existing resources. These new instances are launched into a temporary Auto Scaling group behind your Elastic Load Balancer, which means that for a short period of time your environment would essentially double in size. Once your new instances are deployed and have passed all the health checks, the old instances are removed and the Auto Scaling group updated. If the health checks fail for the new instances, then Elastic Beanstalk terminates them and deletes the temporary Auto Scaling group, and traffic continues to be served by your original environment.
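Each of these policies can also be set declaratively. As a sketch, an `.ebextensions` configuration file along the following lines (the filename is hypothetical; the `aws:elasticbeanstalk:command` namespace and its options are part of Elastic Beanstalk) selects the deployment policy and batch size:

```yaml
# .ebextensions/deploy.config (hypothetical filename)
option_settings:
  aws:elasticbeanstalk:command:
    # DeploymentPolicy accepts AllAtOnce, Rolling,
    # RollingWithAdditionalBatch, or Immutable.
    DeploymentPolicy: Rolling
    # BatchSizeType may be Percentage or Fixed; with Percentage,
    # BatchSize: 25 updates a quarter of the instances at a time.
    BatchSizeType: Percentage
    BatchSize: 25
```

The same options can alternatively be changed through the Elastic Beanstalk console under the environment's configuration settings.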
That has now brought me to the end of this lecture. Coming up next I will provide a demonstration on how to configure Elastic Beanstalk.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.