Environment Tiers

Difficulty: Beginner
Duration: 45m
Students: 8796
Ratings: 4.7/5
Description

AWS Elastic Beanstalk can help you deploy and scale your applications and services with ease, without you having to worry about provisioning components or implementing high availability features such as elastic load balancing and auto scaling. All of this and more is managed and handled by Elastic Beanstalk, and this course is designed to take you through those features.

Learning Objectives

The objectives of this course are to provide you with:

  • The ability to explain what AWS Elastic Beanstalk is and what it is used for
  • The knowledge of the different environments that Elastic Beanstalk provides, allowing you to select the most appropriate option for your needs
  • An explanation of how to configure the service and some of the parameters that you can alter to meet your application requirements
  • The knowledge of the different monitoring options available for assessing the health of your environment and its resources

Intended Audience

This course would be beneficial to those who are responsible for the development and deployment of web applications within an AWS environment, to those who would like to gain a greater understanding of deployment options in AWS, and to anyone looking to take the AWS Developer certifications.

Prerequisites

Familiarity with the following AWS services would be beneficial to get the most out of this course, but it is not essential for a thorough understanding of AWS Elastic Beanstalk:

  • Amazon Route 53
  • Elastic Load Balancing
  • Auto Scaling
  • EC2

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Hello and welcome to this lecture that will cover the two different environment tiers that AWS Elastic Beanstalk uses to provision and build your application within. 

As a quick recap, the environment tier reflects how Elastic Beanstalk provisions resources based on what the application is designed to do. So if the application manages and handles HTTP requests, then the app will be run in a web server environment. If the application pulls data from an SQS queue, then it'll be run in a worker environment. When you come to set up your configuration template for your application, you will decide which environment to select based on its use case. Let me now run through the details of both of these so you can establish the differences between the two. 
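To make that choice a little more concrete, here is a minimal sketch of how the tier can be specified when creating an environment with the AWS CLI. The application, environment, and solution stack names below are purely illustrative placeholders, and the solution stack names available to you will vary over time.

    # Create a web server tier environment (illustrative names; the solution stack
    # string must match one returned by list-available-solution-stacks).
    aws elasticbeanstalk create-environment \
        --application-name my-app \
        --environment-name my-web-env \
        --solution-stack-name "64bit Amazon Linux 2 v3.5.9 running Python 3.8" \
        --tier Name=WebServer,Type=Standard

    # Create a worker tier environment that processes messages from an SQS queue.
    aws elasticbeanstalk create-environment \
        --application-name my-app \
        --environment-name my-worker-env \
        --solution-stack-name "64bit Amazon Linux 2 v3.5.9 running Python 3.8" \
        --tier Name=Worker,Type=SQS/HTTP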

The web server tier. The web server environment is typically used for standard web applications that operate and serve requests over HTTP port 80. This tier will typically use the following AWS resources in the environment. Route 53. When an environment is created by Elastic Beanstalk, it has an associated URL such as the one shown on the screen. Using a CNAME record in Route 53, this URL is aliased to an Elastic Load Balancer, an ELB. If you need more information on Route 53, please see our existing course here. Elastic Load Balancer. For every environment, you should have at least one ELB sitting in front of your EC2 instances that is referenced by Route 53 as just explained. This ELB will also integrate with an auto scaling group. Auto scaling. Again, for every environment, you will also have an auto scaling group that will manage the capacity planning of your application based on the load received. As and when required, it will both add and remove EC2 instances to ensure your application meets the demands of its users. EC2 instances. Within the environment, AWS Elastic Beanstalk will create a minimum of one EC2 instance to run your application. This EC2 instance will be a part of the auto scaling group. Security groups. Your EC2 instances will be governed by a security group, which by default will have port 80 open to everyone. 
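As a rough illustration of that resource set, once an environment is up you can list what Elastic Beanstalk has provisioned for it with the AWS CLI; the environment name here is a placeholder.

    # List the load balancers, auto scaling groups, EC2 instances, and other
    # resources that Elastic Beanstalk created for the environment.
    aws elasticbeanstalk describe-environment-resources \
        --environment-name my-web-env

An important component within the environment is actually installed on every EC2 instance provisioned. This component is the Host Manager.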

The Host Manager has a number of key and fundamental responsibilities. It aids in the deployment of your application; it collates metrics and events from the EC2 instance, which can then be reviewed from within the console or via the AWS CLI or API; it generates instance-level events; and it monitors both the application log files and the application server itself. It will also patch instance components, and finally it will manage the log files, allowing them to be published to S3. 
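Because those events and logs are exposed through the AWS CLI and API, a hedged sketch of pulling them back might look like the following; the environment name is again a placeholder.

    # Review recent environment events collated by Elastic Beanstalk.
    aws elasticbeanstalk describe-events \
        --environment-name my-web-env --max-records 10

    # Ask the host manager on each instance to bundle the tail of its logs,
    # then retrieve the S3 locations of those log bundles.
    aws elasticbeanstalk request-environment-info \
        --environment-name my-web-env --info-type tail
    aws elasticbeanstalk retrieve-environment-info \
        --environment-name my-web-env --info-type tail

Let's now take a look at the worker tier.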

The worker environment is slightly different and is used by applications that have a backend processing task, which will interact with AWS SQS, the Simple Queue Service. This tier typically uses the following AWS resources within the environment. An SQS queue. If you don't already have an SQS queue operational and configured, then as a part of the creation of the worker environment AWS Elastic Beanstalk will create one for you. An IAM service role. To allow your EC2 instances to monitor queue activity in the SQS queue, each EC2 instance will have an associated instance profile role which contains a section within the policy as shown (a sketch of that section follows this paragraph). Auto scaling. An auto scaling group is created to ensure that performance isn't impacted based on load. EC2 instances. A minimum of one EC2 instance is used and is attached to the auto scaling group, and each EC2 instance in the environment will read from the same SQS queue. Whereas the web tier uses the Host Manager to perform some key tasks for Elastic Beanstalk, within a worker tier a daemon is instead installed on every EC2 instance to pull messages from your SQS queue. It will then send that data to the application, allowing it to process the message. This is why the instance profile role is required, to ensure permissions are given to read from the queue. As you can see, there are clear differences between the two tiers, and it's likely that you will use the two tiers in conjunction with each other, decoupled by the use of the Simple Queue Service, allowing each environment to scale independently of one another depending on demand through auto scaling and Elastic Load Balancing. 
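The policy section shown on screen isn't reproduced in this transcript, but a minimal sketch of the SQS permissions a worker tier instance profile needs might look like the following. The queue ARN is a placeholder, and the managed Elastic Beanstalk worker policy grants a broader set of permissions than this.

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowWorkerQueueAccess",
          "Effect": "Allow",
          "Action": [
            "sqs:ReceiveMessage",
            "sqs:DeleteMessage",
            "sqs:ChangeMessageVisibility",
            "sqs:GetQueueAttributes",
            "sqs:GetQueueUrl"
          ],
          "Resource": "arn:aws:sqs:us-east-1:111122223333:my-worker-queue"
        }
      ]
    }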

One final point before I finish this lecture: I want to make you aware that should you need additional customization over what is provisioned by the service itself within these environments, you can develop and add your own Elastic Beanstalk configuration files within your application source code. These are written in either a YAML or JSON based format. These customization files need to be saved with the .config file extension and then stored within the .ebextensions folder of your source code. 
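As a brief, hypothetical example, a YAML-based file such as .ebextensions/options.config could look like the following; the namespaces are standard Elastic Beanstalk option namespaces, but the values and the environment variable name are illustrative only.

    # .ebextensions/options.config  (illustrative values)
    option_settings:
      aws:autoscaling:asg:
        MinSize: 2
        MaxSize: 6
      aws:elasticbeanstalk:application:environment:
        LOG_LEVEL: info

    # Optionally install operating system packages on each instance.
    packages:
      yum:
        git: []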

That now brings me to the end of this lecture covering the environment tiers used by Elastic Beanstalk. Next, I want to provide you with an understanding of the different deployment options that are available.

About the Author
Students: 236921
Labs: 1
Courses: 232
Learning Paths: 187

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to the cloud, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ by Experts Exchange for his knowledge sharing within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.