
Environment Tiers

Overview

Difficulty: Beginner
Duration: 2h 7m

Description

This course provides detail on the AWS Compute services relevant to the Developer - Associate exam. We shall be looking at Amazon EC2, AWS Elastic Beanstalk, and AWS Lambda.

Want more? Try a lab playground or do a Lab Challenge!

Learning Objectives

  • Understand when to use Amazon EC2
  • Learn about the components of Amazon EC2
  • Learn how to create and deploy EC2 instances
  • Understand what EC2 Auto Scaling is
  • Be able to configure Auto Scaling launch configurations, launch templates, and Auto Scaling groups
  • Be able to explain what AWS Elastic Beanstalk is and what it is used for
  • Understand the different environments that Elastic Beanstalk provides, allowing you to select the most appropriate option for your needs
  • Understand how to configure the service and some of the parameters that you can alter to meet your application requirements
  • Understand the different monitoring options available for assessing the health of your environment and resources
  • Be able to explain what AWS Lambda is and what its uses are
  • Define the components used within Lambda
  • Explain the different elements of a Lambda function through its creation
  • Understand the key differences between the policies used within Lambda
  • Recognize how event sources and event source mappings are managed for both synchronous and asynchronous invocations
  • Discover how Amazon CloudWatch can monitor metrics and logs to isolate issues with your functions
  • Learn how to check for common errors that might be causing your functions to fail
Transcript

Hello and welcome to this lecture, which will cover the two different environment tiers that AWS Elastic Beanstalk uses to provision and build your application.

As a quick recap, the environment tier reflects how Elastic Beanstalk provisions resources based on what the application is designed to do. So if the application manages and handles HTTP requests, then the app will be run in a web server environment. If the application pulls data from an SQS Queue, then it'll be run in a worker environment. When you come to set up the configuration template for your application, you will decide which environment tier to select based on its use case. Let me now run through the details of both of these so you can establish the differences between the two.

The web server tier. The web server environment is typically used for standard web applications that operate and serve requests over HTTP port 80. This tier will typically use the following AWS resources in the environment.

Route 53. When an environment is created by Elastic Beanstalk, it has an associated URL, such as the one shown on the screen. Using a CNAME record in Route 53, this URL is aliased to an Elastic Load Balancer, an ELB. If you need more information on Route 53, please see our existing course here.

Elastic Load Balancer. For every environment, you should have at least one ELB sitting in front of your EC2 instances that is referenced by Route 53, as just explained. This ELB will also integrate with an auto scaling group.

Auto scaling. Again, for every environment, you will also have an auto scaling group that will manage the capacity planning of your applications based on the load received. As and when required, it will both add and remove EC2 instances to ensure your application meets the demands of its users.

EC2 instances. Within the environment, AWS Elastic Beanstalk will create a minimum of one EC2 instance to run your application. This EC2 instance will be a part of the auto scaling group.

Security groups. Your EC2 instances will be governed by a security group, which by default will leave port 80 open to everyone.

An important component within the environment is actually installed on every EC2 instance provisioned. This component is the Host Manager.
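To make that more concrete, here is a minimal sketch, not taken from the course itself, of how a web server tier environment could be created with the AWS CLI. The application name, environment name, and solution stack name are illustrative placeholders; once the command runs, Elastic Beanstalk provisions the ELB, auto scaling group, EC2 instances, and security group described above.

    # Illustrative only: names and solution stack are placeholders
    aws elasticbeanstalk create-application --application-name my-web-app
    aws elasticbeanstalk create-environment \
        --application-name my-web-app \
        --environment-name my-web-env \
        --solution-stack-name "64bit Amazon Linux 2 v3.5.0 running Python 3.8" \
        --tier Name=WebServer,Type=Standard

The solution stacks currently available can be listed with aws elasticbeanstalk list-available-solution-stacks.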

The Host Manager has a number of key and fundamental responsibilities. It aids in the deployment of your application, it collates metrics and events from the EC2 instance which can then be reviewed from within the console or via the AWS CLI or API, it generates instance-level events, and it monitors both the application log files and the application server itself. It will also patch instance components, and finally it will manage the log files, allowing them to be published to S3.
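As a rough illustration of that last point, the log data the Host Manager collates can be pulled back through the Elastic Beanstalk CLI. This is a sketch rather than part of the lecture, and the environment name is a placeholder:

    # Ask the Host Manager on each instance to gather recent log lines
    aws elasticbeanstalk request-environment-info --environment-name my-web-env --info-type tail
    # Retrieve the collected log snippets once they are ready
    aws elasticbeanstalk retrieve-environment-info --environment-name my-web-env --info-type tail

Using --info-type bundle instead requests full log bundles, which are published to S3.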

Let's now take a look at the worker tier. The worker environment is slightly different and is used by applications that have a backend processing task which interacts with AWS SQS, the Simple Queue Service. This tier typically uses the following AWS resources within the environment.

An SQS Queue. If you don't already have an SQS Queue operational and configured, then as a part of the creation of the worker environment, AWS Elastic Beanstalk will create one for you.

An IAM service role. To allow your EC2 instances to monitor queue activity in the SQS Queue, each EC2 instance will have an associated instance profile role which contains a section within the policy as shown (a representative example is sketched at the end of this section).

Auto scaling. An auto scaling group is created to ensure that performance isn't impacted based on load.

EC2 instances. A minimum of one EC2 instance is used and is attached to the auto scaling group, and each EC2 instance in the environment will read from the same SQS Queue.

Whereas the web tier used the Host Manager to perform some key tasks for Elastic Beanstalk, within a worker tier a daemon is instead installed on every EC2 instance to pull requests from your SQS Queue. It will then send the data to the application, allowing it to process the message. This is why the instance profile role is required, to ensure permissions are given to read from the queue.

As you can see, there are clear differences between the two tiers, and it's likely that you will use the two tiers in conjunction with each other, decoupled by the use of the Simple Queue Service, allowing each environment to scale independently of one another depending on demand through auto scaling and Elastic Load Balancing.
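The policy slide itself isn't reproduced in this transcript, but as a hedged illustration, the section of the instance profile policy that lets the daemon read from the queue might look something like the following. The actions listed and the queue ARN (including the account number) are representative placeholders rather than the exact policy Elastic Beanstalk generates:

    {
        "Effect": "Allow",
        "Action": [
            "sqs:ReceiveMessage",
            "sqs:DeleteMessage",
            "sqs:ChangeMessageVisibility",
            "sqs:GetQueueAttributes",
            "sqs:GetQueueUrl"
        ],
        "Resource": "arn:aws:sqs:us-east-1:111122223333:my-worker-queue"
    }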

One final point before I finish this lecture. I want to make you aware that should you need additional customization over what is provisioned by the service itself within these environments, you can develop and add your own Elastic Beanstalk configuration files within your application source code. These are written in either a YAML or JSON based format. These customization files need to be saved with the .config file extension and then stored within the .ebextensions folder of your source code.
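As a rough sketch of what one of these files could look like, rather than an example taken from the course, a file such as .ebextensions/options.config might use the option_settings key to adjust the resources Elastic Beanstalk provisions and to set environment variables for the application. The namespaces below are standard Elastic Beanstalk configuration namespaces, but the file name and values are placeholders:

    # .ebextensions/options.config -- illustrative values only
    option_settings:
      aws:autoscaling:asg:
        MinSize: 2
        MaxSize: 6
      aws:elasticbeanstalk:application:environment:
        APP_LOG_LEVEL: info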

That now brings me to the end of this lecture covering the environment tiers used by Elastic Beanstalk. Next, I want to provide you with an understanding of the different deployment options that are available.

About the Author
Stuart Scott
AWS Content Director
Students: 187,352
Labs: 1
Courses: 158
Learning Paths: 115

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge sharing within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.