This course covers microservices and how a microservices-based architecture differs from a monolithic architecture. It also explains what containers are and how you can create your own and then run them in AWS using the Elastic Container Service (ECS).
Learning Objectives
- Learn about microservices and the benefits that a microservice-based architecture or application has over a monolithic application
- Understand the difference between containers and virtual machines
- Learn how to create your own containers and run them within ECS, deploying them either serverlessly (with AWS Fargate) or on EC2 instances
Intended Audience
- Solutions architects, developers, or anyone interested in learning about containerized workloads within the AWS cloud
Prerequisites
To get the most out of this course, you should have a decent understanding of cloud computing and cloud architectures, specifically with Amazon Web Services.
Hello, my name is Will Meadows and today we will be talking about Amazon ECS, the Elastic Container Service. We will also touch on microservices, as well as an introduction to containers themselves.
If you have any questions about anything I cover in this series please let me know at will.meadows@cloudacademy.com
Alternatively, you can always get in touch with us here at Cloud Academy by sending an email to support@cloudacademy.com and one of our cloud experts will reply to your question, concern, or comment.
I would recommend this course for any solutions architects, developers, or generally curious people who are interested in learning about containerized workloads within the AWS cloud.
In this course, you will learn about microservices and will be able to explain the benefits that a microservice-based architecture or application has over a monolithic application.
You will also learn the difference between containers and virtual machines.
Additionally, you will be able to create your own containers, run them within ECS, and deploy them either serverlessly or on EC2 instances.
You should have a decent understanding of cloud computing and cloud architectures, specifically with Amazon Web Services.
It would be helpful to know about Docker and other container technologies, but that is not required.
Feedback on our courses here at Cloud Academy is valuable to both us as trainers and any students looking to take the same course in the future. If you have any feedback, positive or negative, it would be greatly appreciated if you could send an email to support@cloudacademy.com.
Please note that, at the time of writing this content, all course information was accurate. AWS implements hundreds of updates every month as part of its ongoing drive to innovate and enhance its services.
As a result, minor discrepancies may appear in the course content over time. Here at Cloud Academy, we strive to keep our content up to date in order to provide the best training available.
So, if you notice any information that is outdated, please contact support@cloudacademy.com. This will allow us to update the course during its next release cycle.
The problems of a monolithic application.
To start off this lecture, I want to speak briefly about the problems of the monolithic application and what we can do about them.
For many years, the software industry had been creating large, monolithic applications that were built with every feature tightly coupled together. It was natural to create these systems all at once and build them as a single unit instead of as individual components.
This idea worked fairly well for a time, but as the pace of business increased, and the size of the applications began to bloat, large problems started to arise.
Imagine we had an online forum website where users could create accounts, post new updates, comment on those posts, and talk to each other through an internal chat feature. It is entirely conceivable that this website might have been written with all of these components as one single service.
Now imagine that many people start posting new content all at once. We would be forced to scale the entire application instead of just the one critical piece. This causes waste, as we would be adding more power everywhere instead of just where it was needed.
These days, we would want to separate those features into individual services. We would want to decouple our monolithic application into smaller applications, commonly referred to as microservices.
Each microservice does one thing well and doesn't need to know too much about the rest of the services involved. Microservices are meant to be accessed through a clearly defined API, so they can be mixed and remixed into various applications as needed. They're also managed and updated independently.
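To make that concrete, here is a minimal sketch of what "one thing well, behind a clearly defined API" could look like. Everything in it is hypothetical (the service, its route, its in-memory storage), and it uses nothing but Python's standard library:

```python
# A hypothetical "comments" microservice: it does one thing (store and
# list comments) behind a small, clearly defined HTTP API, and knows
# nothing about accounts, posts, or chat.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

COMMENTS = []  # in-memory store; a real service would use a database


class CommentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GET /comments -> return every stored comment as JSON
        body = json.dumps(COMMENTS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # POST /comments -> store one JSON comment from the request body
        length = int(self.headers.get("Content-Length", 0))
        COMMENTS.append(json.loads(self.rfile.read(length)))
        self.send_response(201)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CommentHandler).serve_forever()
```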
With these thoughts in mind, we can start to introduce the idea of containers.
What is a container?
A container is a fully encapsulated software package that contains all the code and its dependencies (frameworks, binaries, configurations) required to run an application.
These containers are hosted on a server running a container orchestration service. The container orchestration service can create new containers by referencing an initial image that holds all of the dependency information referred to earlier.
By containerizing your software, you provide a way to repeatedly deploy working code, across many platforms, that does not depend on the underlying system for its needs.
This allows your application to be platform agnostic, meaning that if you develop something that works in your environment, it will work in any environment that supports your container engine of choice.
A container engine is what actually deals with the orchestration of your containers, helping to spin them up and shut them down when needed.
Now, this might sound a lot like what a virtual machine does, but there are a few important differences.
A virtual machine, or VM, contains a guest operating system (Windows or Linux) within its package. Your application sits on top of that operating system and can use its features as necessary.
The VM can contain all the dependencies your software needs, and in general, is very self-contained (similar in a way to a container). However, since we are storing the guest operating system within this package, the final product can be gigabytes in size.
With the virtual machine being so large, loading a VM can take quite some time. When a virtual machine is starting up, it must wait for the guest operating system to initialize before it can start running your application.
These VMs sit on top of a hypervisor, which is in charge of orchestrating all the VMs within its care (similar to the container engine). The hypervisor ensures that all the VMs get roughly equal time to interact with the underlying hardware; that is, it helps them share the CPU, memory, and disk storage.
As an aside: when talking about EC2 instances and their relative sizes (medium, large, 2xlarge), what is effectively happening under the hood is that the instance (that VM) is given more, or preferential, CPU, RAM, and networking time.
Now, getting back to the crux of the issue. The key difference between a VM and a container is the location of the operating system. Since a container does not include the operating system its application runs on, containers are incredibly small.
A container can initialize and start working within seconds of deployment. In scenarios where speed and agility are paramount, containers start to look like a very attractive option.
Think back to our example from earlier: the forum website with users, posts, comments, and a live chat. Each of those components could be broken apart from the monolithic program and placed into its own container.
These individual pieces of the whole application, our microservices, could then be scaled up and down based on demand. And since they spin up so rapidly, we gain great speed and agility when dealing with scaling-related issues.
How do we use containers within AWS?
Containerized workloads offer a great number of benefits for those working in the cloud. One of the more difficult parts of using containers, however, is getting everything set up in the first place. You could build and deploy everything from scratch yourself, but that takes a lot of time and know-how to get working. This is where AWS steps in, having created a service just for that purpose.
The Amazon Elastic Container Service is a fully managed container orchestration platform that helps with deploying, managing, and scaling your containerized workloads. It does all of this using Docker containers, which are an industry standard and offer a lot of great quality-of-life features.
For a more in-depth discussion of Docker, including how to install the development tools, Docker architecture, networking, and how to actually create the Docker containers themselves, please check out this lecture.
Anyway, Amazon ECS has two ways of deploying your containers. The first and most traditional way is to have ECS create a cluster of EC2 instances that all have the Docker Engine installed on them.
ECS will help you manage the provisioning of the cluster and deal with its autoscaling needs. It will increase and decrease the number of instances within the cluster based on the number of containers required.
In general, you will have multiple containers deployed onto each EC2 instance. Each container requires a certain amount of memory and CPU overhead to cover its needs, and it will do its best to stay within that footprint. The cluster scales when there is no more available space within the instances to deploy another container onto them, which can happen when the instances run out of memory or CPU overhead.
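As a rough illustration of that packing logic (the instance size and per-task reservations below are made-up numbers, not recommendations):

```python
# Back-of-the-envelope packing math for an ECS cluster instance.
# All numbers are made-up examples, not recommendations.
instance_cpu_units = 4096    # a 4-vCPU instance (ECS uses 1024 units per vCPU)
instance_memory_mib = 8192   # with 8 GiB of memory

task_cpu_units = 512         # each task reserves half a vCPU...
task_memory_mib = 1024       # ...and 1 GiB of memory

# A task fits only if BOTH reservations fit, so whichever resource runs
# out first determines how many tasks one instance can hold.
tasks_per_instance = min(instance_cpu_units // task_cpu_units,
                         instance_memory_mib // task_memory_mib)

print(tasks_per_instance)    # 8: a ninth task forces the cluster to scale out
```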
The second way that ECS allows you to deploy containers is to do so in a serverless manner. There are no instances for you to worry about; you just need to set up the container itself.
ECS uses a sister service called AWS Fargate to help manage your serverless containers. These containers have all the same power and functionality as their servered alternatives. The main difference is that these containers are not always on, per se; they only exist when you invoke them and shut down when they are done with their task.
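To give you a feel for that, here is a sketch of invoking a one-off Fargate task using the boto3 Python SDK. The cluster name, task definition, subnet, and security group IDs are all placeholders:

```python
# A sketch of invoking a one-off Fargate task with boto3. The cluster,
# task definition, subnet, and security group below are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

ecs.run_task(
    cluster="forum-cluster",
    launchType="FARGATE",                # no instances for you to manage
    taskDefinition="comment-service:1",  # family:revision
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {         # Fargate requires awsvpc networking
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
# The task spins up, does its work, and stops; you pay only while it runs.
```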
There are many things you need to consider when determining which method (serverless or servered) is best for you, and I will cover those later. For now, let's look at the workflow required to use our containers with ECS.
Amazon Elastic Container Registry
First things first, we need a way to create and store our container images for use in ECS. Amazon has created the Elastic Container Registry (ECR), which is a fully managed Docker container registry that is built into Amazon ECS.
The registry helps with hosting your images in a highly available and scalable way. This service is fully featured and provides robust security for your Docker images. With ECR, you also have the ability to share your container images and artifacts privately or publicly as you see fit. Developers can even provide their containers to the public for worldwide discovery and download through the Amazon ECR Public Gallery.
Creating your registry is very simple and, once completed, will provide you with a basic URI that looks something like this:
278915031458.dkr.ecr.us-west-2.amazonaws.com/ca-container-registry
This is the location to which we can begin to push container images for use with the Elastic Container Service.
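As an illustration, creating such a repository programmatically might look like the following boto3 sketch; the region is an assumption, and the repository name simply mirrors the example URI above:

```python
# A sketch of creating an ECR repository with boto3. The region is an
# assumption, and the name simply mirrors the example URI above.
import boto3

ecr = boto3.client("ecr", region_name="us-west-2")

response = ecr.create_repository(repositoryName="ca-container-registry")

# The response contains the URI you push images to and reference from ECS.
print(response["repository"]["repositoryUri"])
```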
And as you go about developing containers for your applications, Amazon ECR gives you the ability to configure lifecycle policies for those container images.
You probably won't need to keep more than a handful of your past images at any one time, so you can remove older versions once a certain count has been exceeded. This is configurable and totally up to you.
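For example, a lifecycle policy that keeps only the five most recent images might look like this boto3 sketch; the repository name and the count of five are arbitrary examples:

```python
# A sketch of an ECR lifecycle policy that expires all but the five most
# recent images. The repository name and the count are arbitrary examples.
import json
import boto3

ecr = boto3.client("ecr", region_name="us-west-2")

policy = {
    "rules": [{
        "rulePriority": 1,
        "description": "Keep only the five most recent images",
        "selection": {
            "tagStatus": "any",
            "countType": "imageCountMoreThan",
            "countNumber": 5,
        },
        "action": {"type": "expire"},
    }]
}

ecr.put_lifecycle_policy(
    repositoryName="ca-container-registry",
    lifecyclePolicyText=json.dumps(policy),
)
```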
In order to run our containers on ECS, we also need to define task definitions so that ECS understands their basic requirements. Task definitions tell ECS which Docker images to use for the containers, what kind of resources to allocate, network specifics, and other details.
Examples include configuring memory limits and port mappings, and setting the names and images for the containers being deployed.
When building a task definition, there are slight differences between creating one for the serverless workflow (using AWS Fargate) and creating one for the servered workflow (ECS clusters), but otherwise they are very similar.
We define which container image we want this task to use by referencing the URI from our Amazon Elastic Container Registry repository from before:
278915031458.dkr.ecr.us-west-2.amazonaws.com/ca-container-registry:testblue
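As an illustration, registering a task definition that points at that image might look something like the following boto3 sketch. The family name, role ARN, resource sizes, and port are illustrative placeholders, not prescriptions:

```python
# A sketch of registering a task definition that points at the image
# above. The family, role ARN, sizes, and port are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

ecs.register_task_definition(
    family="comment-service",
    networkMode="awsvpc",                 # required when targeting Fargate
    requiresCompatibilities=["FARGATE"],  # or ["EC2"] for the servered route
    cpu="256",                            # 0.25 vCPU, a valid Fargate task size
    memory="512",                         # 512 MiB memory limit
    # Hypothetical role that lets ECS pull the image from ECR for you:
    executionRoleArn="arn:aws:iam::278915031458:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "comment-service",
        "image": "278915031458.dkr.ecr.us-west-2.amazonaws.com/"
                 "ca-container-registry:testblue",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)
```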
With that all set up, you will be able to start deploying your tasks either to an EC2 cluster or to AWS Fargate, the serverless container service. Unfortunately, you would have to do that deployment by hand unless you create a service.
ECS Services
An ECS service allows you to run and maintain a specified number of tasks (instantiations of a task definition) within an Amazon ECS cluster. When you create an ECS service, it helps re-deploy your tasks should any fail during their life cycle. ECS services are an integral part of having self-healing containers.
Another key benefit of creating and using ECS services is that they allow you to place your containers behind an Application Load Balancer. The Application Load Balancer operates at layer 7 (HTTP/HTTPS) of the OSI model. We can place our ECS service within a target group and have our traffic load-balanced between our containers.
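As a sketch of how those pieces fit together, creating a service that keeps two copies of a task running behind an ALB target group might look like this; every ARN, name, and ID below is a placeholder:

```python
# A sketch of creating an ECS service that keeps two copies of a task
# running and registers them with an ALB target group. Every ARN, name,
# and ID below is a placeholder.
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

ecs.create_service(
    cluster="forum-cluster",
    serviceName="comment-service",
    taskDefinition="comment-service:1",
    desiredCount=2,            # the service replaces any task that fails
    launchType="FARGATE",
    loadBalancers=[{
        "targetGroupArn": "arn:aws:elasticloadbalancing:us-west-2:"
                          "278915031458:targetgroup/comments/0123456789abcdef",
        "containerName": "comment-service",
        "containerPort": 8080,
    }],
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
        }
    },
)
```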
Since we are using the Application Load Balancer, this also means we can route to different containers through paths in the URL of our webpage.
Remember the monolithic forum website from earlier, the one we wanted to break into microservices? In that example, users could create accounts, post new updates, comment on those posts, and use an internal chat feature. Each of those could be broken out into its own container, placed within its own service, and accessed through the Application Load Balancer as follows:
Example.com/api/CreateAccount
Example.com/api/PostNewUpdates
Example.com/api/Comment
Example.com/api/Chat
Pretty neat...
If you want a hands-on example of all the stuff we just covered, please take a look at these two labs.
The first one shows you how to use ECS for blue-green deployments and goes over creating the registry for your containers, building the Docker images, creating a cluster, creating tasks, and deploying everything.
The second lab is very similar to the first, but does so using an AWS Fargate deployment (the serverless side of Amazon ECS).
At this point, you are probably still wondering when you would choose to deploy onto an EC2 cluster (servered) or Fargate (serverless). Well, let's hop into that now.
When should I use servered vs serverless containers?
This is a conversation that is important to have early on when designing a container-based architecture. There are a few things that can help to guide you in making a decision about which avenue is best for you.
Cost
In general, serverless compute is more expensive than servered compute in a one-to-one comparison. Five minutes of raw serverless power will cost more than five minutes of raw servered power (if everything else is equal).
So you would assume that it is always better to pick a servered option... Well, it's complicated.
What I said earlier holds only when you maximize the total used resources on the machines running your containers. Let's say that the containers running on your EC2 instances only used up 50% of the CPU and memory of the underlying instances. In this scenario, you would probably be better off running a serverless workload; in fact, it would be almost 40% more cost-effective to use Fargate (the serverless option). This is because serverless is always running at 100% effective CPU and memory utilization.
Optimizing your instance-based containers can take a lot of fiddling and trial and error to really maximize your efficiency. But if you were able to bring that average CPU and memory reservation up to 100%, you would be 20% more cost-efficient using a pure EC2-based approach (the servered option).
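To see the shape of that argument in numbers, here is a purely illustrative calculation; the hourly prices are made up, and the real figures come from the AWS data referenced below:

```python
# Purely illustrative cost arithmetic: the hourly prices are made up,
# but the shape of the comparison matches the utilization argument above.
ec2_cost_per_hour = 1.00      # hypothetical cost of the cluster instance
fargate_cost_per_hour = 1.20  # hypothetical price for the same resources

utilization = 0.50            # only half the instance's CPU/memory is used

# EC2 bills for the whole instance no matter how full it is, so the
# effective cost per unit of USED capacity grows as utilization drops.
ec2_cost_per_used_unit = ec2_cost_per_hour / utilization  # 2.00
fargate_cost_per_used_unit = fargate_cost_per_hour        # always fully used

print(ec2_cost_per_used_unit > fargate_cost_per_used_unit)  # True at 50% usage
print(ec2_cost_per_hour < fargate_cost_per_hour)            # True at 100% usage
```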
For an in-depth look at where I'm pulling this data from, please take a look over here, where Amazon has done all the hard work of providing data for the cost side of this equation.
Let's look at another scenario. If your traffic is very spiky, quickly going up and down or coming in at unpredictable times, with long periods of low or dead time, this is another situation where serverless wins.
When dealing with this kind of traffic on an EC2 cluster, you still have to pay for all of that dead time. It costs money to keep the cluster running when nothing is happening, even if you do autoscale down, because you should always have a minimum number of instances ready with containers available for when that traffic does strike.
So, to summarize the cost side of things: if you have a workload that supports long-running operations and fully utilizes the CPU and memory of the underlying instances, a servered deployment is probably the best course of action cost-wise.
Maintenance
One of the great upsides to running ECS on EC2 is that you are in full control of the environment your containers reside in. You also have the ability to control the types of instances your containers run on in greater detail (like choosing network-optimized or GPU-optimized instances). However, the downside is that you are also responsible for the upkeep of all these items. This includes patching the instances, dealing with networking, and the overall security of the entire cluster.
Fargate, in general, allows you to focus on the development and maintenance of just the application itself instead of all the above. AWS takes care of everything else for you.
Networking Options
Another consideration for someone trying to determine which option to choose is the availability of networking options between EC2-backed ECS and Fargate. With EC2-backed ECS, you can choose between three networking modes:
- Host mode: ties the networking of the container directly to the underlying host that's running the container.
- Bridge mode: uses a virtual network bridge to create a layer between the host and the networking of the container. This way, you can create port mappings that remap a host port to a container port. The mappings can be either static or dynamic.
- awsvpc mode: Amazon ECS creates and manages an Elastic Network Interface (ENI) for each task, and each task receives its own private IP address within the VPC. This ENI is separate from the underlying host's ENI. If an Amazon EC2 instance is running multiple tasks, then each task's ENI is separate as well.
However, with AWS Fargate, you are forced to use awsvpc mode. The good news is that awsvpc mode covers a wide range of solutions, but if you needed a more specialized networking mode for your containers, you might be stuck using EC2-backed ECS over Fargate.
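For comparison with the awsvpc examples earlier, here is a sketch of a bridge-mode task definition for the EC2 launch type, using a dynamic host port mapping; names and sizes are placeholders:

```python
# A sketch of a bridge-mode task definition for the EC2 launch type,
# using a dynamic host port mapping. Names and sizes are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-west-2")

ecs.register_task_definition(
    family="comment-service-bridge",
    networkMode="bridge",             # host and bridge modes are EC2-only;
    requiresCompatibilities=["EC2"],  # Fargate is locked to awsvpc mode
    containerDefinitions=[{
        "name": "comment-service",
        "image": "278915031458.dkr.ecr.us-west-2.amazonaws.com/"
                 "ca-container-registry:testblue",
        "memory": 512,                # hard memory limit in MiB
        "portMappings": [{
            "containerPort": 8080,
            "hostPort": 0,            # 0 = let Docker pick a free host port
            "protocol": "tcp",
        }],
        "essential": True,
    }],
)
```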
Wrap up
Working with containers and Amazon ECS can help you create a more robust application. We do this by separating our monolithic programs into separate microservices, each of which deals with one specific task well.
These microservices can be placed within containers (which house all of the code, the dependencies, and everything they need to operate) and are designed to be quickly deployed and repeatable in all environments.
Our container images can be housed within the Amazon Elastic Container Registry, where they can be maintained with lifecycle policies. Each image has a unique URI that allows you to use it for deployment within ECS. Additionally, you have the option to share your containers with the world, or to discover other people's containers, through the Amazon ECR Public Gallery.
Containers that are managed with ECS can be placed on EC2 instances, which allows you to define the underlying infrastructure. This is good for when you have complicated networking setups to manage. However, you also have the option to place them within a serverless framework (using AWS Fargate), losing some direct control but gaining managed oversight from AWS.
Choosing between serverless and servered containers is not an easy task, but there are some distinct cost advantages associated with the serverless framework. Servered containers are only more cost-effective than serverless ones when you are able to utilize the entire capacity of the underlying instances.
Well, that's all I have for you in this lecture. I highly recommend you check out our labs relating to ECS - both the serverless and the servered ones. They will really help to drive all of these points home. My name is Will Meadows and I'd like to thank you for spending your time here learning about Amazon ECS, Containers, and that little bit about microservices. If you have any feedback, positive or negative, please contact us at support@cloudacademy.com, your feedback is greatly appreciated, thank you!
William Meadows is a passionately curious human currently living in the Bay Area in California. His career has included working with lasers, teaching teenagers how to code, and creating classes about cloud technology that are taught all over the world. His dedication to completing goals and helping others is what brings meaning to his life. In his free time, he enjoys reading Reddit, playing video games, and writing books.