Amazon EC2 Container Service (ECS)
This course is an introduction to the Amazon EC2 Container Service (ECS). ECS is a highly scalable, high-performance container management service that supports Docker. This course will provide a detailed introduction to what ECS is and how to get started using it. ECS has built-in integration with many other AWS services and also allows you to customize parts of the infrastructure to meet your application-specific needs. This course will also provide a brief overview of the rich ecosystem that is developing around ECS, including continuous integration, scheduling, and monitoring.
This course is for developers or operations engineers looking to deploy containerized applications on Amazon EC2. Experience with container technology (e.g. Docker) or Amazon EC2 would be helpful but is not required.
- Describe the concepts around running Docker containers on Amazon EC2
- Run and configure containers with ECS
- Understand the ecosystem around the EC2 Container Service (ECS) to help guide next steps
This Course Includes
- Over 45 minutes of high-definition video
- Hands-on demo
What You'll Learn
- Course Intro: An introduction to what will be covered in this course.
- ECS Overview: In this detailed overview we’ll cover task definitions, resource allocation, service definitions, capacity, load balancing, scheduling, cluster configuration, and security.
- ECS Demo: A hands-on demo of the ECS service.
- AWS Related Services: In this lesson, we’ll go through ELB, EBS, and IAM.
- Ecosystem: In this lesson, you’ll learn about the ecosystem of third-party applications and services.
- Summary: A wrap-up and summary of what we’ve learned in this course.
Hello, and welcome back to Introduction to Amazon EC2 Container Service (ECS). In this lecture, I'll cover ECS in a hands-on way to demonstrate how you can easily get started working with this managed service provided by Amazon. This lecture will be broken up into two topics. The first, short topic will be Docker on EC2 variations, and the second topic will be a detailed walkthrough of ECS.
Briefly, before we get started with the walkthrough, it's important to mention that there are several ways one could run Docker on Amazon EC2 without using ECS. One option would be to use Docker out of the box by installing it on a regular compute instance on EC2. In this case, you would be using all the existing Docker tools. For example, you would use docker-machine, docker-swarm, and docker-compose, alongside the standard docker command. This could be an option if you're already familiar with Docker and don't want to learn anything about the ECS way of doing things. In other words, you would be using Amazon for its Infrastructure as a Service (IaaS) capability, while letting the existing Docker tools do all the work.
Another option would be to use Amazon's Elastic Beanstalk service to deploy Docker applications onto Amazon EC2. This is a great option if you're already familiar with Elastic Beanstalk and want to use its application delivery features to deploy your existing Docker applications. Beanstalk has official support for Docker, and it manages the load balancing and scaling of the Docker-packaged application. The Beanstalk option gives you a lot of flexibility in terms of the application software stack, for example, Java with GlassFish or Python with uWSGI. In this use case, you would be using Amazon for its Platform as a Service capability, with Docker as the application packaging. Yet another option would be to run Docker Datacenter for AWS.
This option was created by Amazon Web Services and Docker teaming up to create a Containers as a Service offering. This is an option for those who want to get into the Docker or container space with a Docker-supported enterprise product deployed on Amazon's cloud. This may be one of the most out-of-the-box Docker on AWS solutions available. Docker Datacenter (DDC) would handle the container orchestration in this case. Believe it or not, there's yet another variation. Docker has created a project called Docker for AWS, which provides what Docker refers to as a fantastic, out-of-the-box solution on AWS.
The target audience for this project is engineers who like what Docker has done in terms of usability and want a similar experience on EC2 without having to learn the details of EC2. Docker bootstraps that functionality and integrates with the native AWS services, for example, CloudFormation for software stack management and CloudWatch for logging. Docker even goes so far as to optimize the Linux distribution to run well on AWS. This is a more recent option, but certainly one to keep an eye on. Now, for the hands-on portion of this course, we'll focus specifically on the Amazon EC2 Container Service. Notice that ECS is a Docker on EC2 solution that is fully baked into AWS.
If you happen to be familiar with AWS and EC2, you are likely aware that the AWS suite of services can be accessed in several ways: for example, through the AWS web console, with the AWS command line tools, and with APIs and SDKs. Amazon EC2 Container Service meets that expectation as well. For this course, we'll focus exclusively on the web console and the AWS command line tools. Without further ado, let's dig into ECS. First, pause the video, and log in to your AWS console.
I'll be here when you've done that and start playing the video again. From the AWS console, go ahead and click on Services in the top left corner. Under Compute, click EC2 Container Service. When you open up the ECS service for the first time, Amazon has a walkthrough on how to set up a sample application to get you hands-on with the service. You can also find a link in the transcript that'll let you get back to the ECS first run if you would like to run it again. So let's go ahead and go through this together. Click the Get started button. We'll leave the following boxes checked: Deploy a sample application onto an Amazon ECS Cluster, and also Store container images securely with Amazon ECR. ECR is the EC2 Container Registry, essentially your private Docker Hub hosted on Amazon.
The first step is to configure your private EC2 Container Registry. Enter a name for your registry. For this example, I'm going to call it cloud_academy_demo, and then click Next step. This next step is going to require us to set up, or use an existing, Docker installation to push an image to this Container Repository. Let's do that quickly now. I won't explain the details of this, but you can get the Docker-specific details from another course on Docker, such as the one offered by Cloud Academy.
So, for this example, I'll connect to an Amazon EC2 Ubuntu instance that I've previously created. We'll do a couple of things to get it set up. First, we'll install Docker, and then, since we'll also need the AWS CLI package later, we'll install that too. The Docker website has documentation for installing Docker on various operating systems, so you can go there to get instructions as needed. For my Ubuntu Amazon instance, it's just a few commands to install the package repo, and then we can install the Docker package with apt. And for installing the AWS CLI tools, the AWS documentation recommends that we use pip, and here's the command for that.
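On an Ubuntu instance, the setup steps above might look roughly like this. This is a sketch, not the exact commands from the video: consult the Docker documentation for the current repository-based install for your OS version, and note that `docker.io` here is Ubuntu's own packaged Docker.

```shell
# Install Docker and pip from Ubuntu's repositories (docs.docker.com
# describes the alternative Docker-maintained apt repository).
sudo apt-get update
sudo apt-get install -y docker.io python-pip

# Allow the default user to run docker without sudo (log out/in to apply).
sudo usermod -aG docker ubuntu

# AWS recommends installing the CLI tools with pip.
pip install --user awscli

# Put pip's bin directory on the PATH for this and future sessions.
echo 'export PATH=$HOME/.local/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile

docker --version
aws --version
```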
We also need to put the bin directory in the PATH and source the .bash_profile. Before we can run any AWS commands, we need to run aws configure. For a proof of concept, you can get your security credentials from your AWS account settings. The best practice, though, is to set up an IAM user; that is outside the scope of this course. Go ahead and run aws configure to configure the AWS CLI tools appropriately for your situation, and now let's create a simple Dockerfile. Then let's run the rest of the commands as outlined in the ECS CLI walkthrough. The output from this first command will give us what we need to log in to the Container Registry that we just created. Let's copy that output and run it.
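As a concrete example, here is a minimal Dockerfile of the kind used in this walkthrough. It is an assumption on my part that the demo image is a stock nginx image serving a static page on port 80, matching the container described later:

```shell
# Write a minimal Dockerfile based on the official nginx image.
cat > Dockerfile <<'EOF'
FROM nginx:latest
# nginx listens on port 80 by default; copy in a page to serve.
COPY index.html /usr/share/nginx/html/index.html
EOF

# A trivial page so we can confirm the container responds.
echo '<h1>Hello from ECS</h1>' > index.html
```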
Now, let's build the image; then let's tag it, so that it's ready to be uploaded to where the example expects the image to be in the repository; and finally, let's push it to our Container Registry. Now we can go to the next step. I've set up the image as an nginx Container which exposes port 80, so we accept the default port mapping for port 80 and also the rest of the defaults. We'll dig in a bit more on these details once we have this up and running, and we can click Next step.
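Put together, the login, build, tag, and push sequence looks roughly like this. The account ID and repository URI below are placeholders; substitute the values shown in your own ECR push-commands panel. Note also that `aws ecr get-login` was the login command at the time of this course and has since been replaced by `get-login-password` in newer CLI versions.

```shell
# Placeholder URI -- substitute the repository URI shown for cloud_academy_demo.
REPO_URI=123456789012.dkr.ecr.us-east-1.amazonaws.com/cloud_academy_demo

# 1. Ask ECR for a docker login command and run its output.
$(aws ecr get-login --region us-east-1)

# 2. Build the image from the Dockerfile in the current directory.
docker build -t cloud_academy_demo .

# 3. Tag the image with the repository URI so docker knows where to push it.
docker tag cloud_academy_demo:latest "$REPO_URI:latest"

# 4. Push it to the private registry.
docker push "$REPO_URI:latest"
```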
For this example, let's set the number of tasks to two, and also enable the Elastic Load Balancing feature, again on port 80, for our web server. Notice here that the IAM role is created automatically. Now, we'll leave the cluster name as default for this example. We need to update the number of instances to two, and I have a key pair set up already, so I'll use that. We'll accept the rest of the defaults and then let's click Review & Launch. A quick review confirms that these are all the settings we have configured and the settings that we want, and we click Launch Instances & run service.
This step will take some time, and we can scroll through and see the progress, and also get a quick feel for what ECS is setting up for us. You may have to scroll up to see the top of the output. Notice, for example, clusters and task definitions, and instances, and gateways, and virtual private clouds, and route tables, and subnets, and security groups, and Auto Scaling groups, and launch configurations, and load balancer settings. Stepping ahead in the video, now it's finished, and we can click on View service. From this screen, we can drill down and see the details of what has been set up for us.
Let's first look closer at a task. We can drill down into a task by clicking on its ID, and then expand the menu next to the sample app, and here we can see the IP address that has been assigned to our Container. We can open that link in a web browser and see that nginx is working. Let's navigate back to our cluster, drill into the ECS Instances tab, and click on the Container Instance to bring up the instance's details and get the IP address. We can use the IP address to ssh into one of the Container Instances running our Docker Containers. Notice that when we ssh, we use ec2-user as the username, because ECS created the instance from an ECS-optimized AMI.
Also notice that I'm using the key pair that I set up and copied to this server previously. Now that we are in the Container Instance, let's run docker ps. Notice that one of our nginx Containers is running here, and also the ecs-agent, which runs as a Container as well. The ecs-agent coordinates the EC2 instance with the rest of the backing container orchestration services. Our other nginx Container is running on the other Container Instance.
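For reference, the ssh and docker ps check looks like this. The key file name and IP address are placeholders for your own values:

```shell
# Connect as ec2-user, since the instance came from an ECS-optimized AMI.
# my-key.pem and the IP address are placeholders.
ssh -i my-key.pem ec2-user@203.0.113.10

# On the Container Instance: list running containers. Expect one of the
# nginx task containers plus the amazon-ecs-agent container.
docker ps
```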
Let's step back and understand what we've got here, so that we can appreciate what ECS provides. This will also help you to consider possible next steps of things you might want to dig deeper into in other courses or documentation. To help understand, we'll go back to our Ubuntu system that we installed the AWS CLI tools on. There's an ECS subcommand that we'll look at. You can get the full command reference in the AWS docs, but I'll show you a few useful commands that'll give us a broader view of what's here and the types of objects we can manage.
First, we need to make sure that we have our credentials configured and also our AWS region configured to correspond to where we are running ECS. I'm running in us-east-1, so that is what I have set for my default region. Notice that it is us-east-1, the region; we don't specify the 1a or 1b, the availability zone. The outermost entity is the cluster. We do aws ecs list-clusters, and we see our default cluster. We next do aws ecs describe-clusters --clusters default, and we can see the details here.
Notice that it shows the status as active, and it shows the counts of pending and running tasks. It also displays the number of active services in this cluster. So let's go one level in and look at the services with aws ecs list-services. We see our sample app service there, and we can copy that name to feed to the aws ecs describe-services --services command, passing the service name. Now we see the details of the service that we configured. From this output, we can see the specifications for the task definition and load balancers. We also see various other constraints and options that we could set.
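The cluster- and service-level commands from this section, gathered in one place. Region and credentials are assumed to be already configured via aws configure, and the service name below is a placeholder for whatever list-services returns in your account:

```shell
# Top level: which clusters exist?
aws ecs list-clusters

# Details for the default cluster: status, pending/running task counts,
# and the number of active services.
aws ecs describe-clusters --clusters default

# One level down: the services running in the cluster.
aws ecs list-services --cluster default

# Details for a specific service -- substitute the name from list-services.
aws ecs describe-services --cluster default --services sample-webapp
```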
Next, let's follow the same pattern to look at task definitions, tasks, and Container Instances. So we run aws ecs list-task-definitions, and we get the task definition name and version number: version one, since this was our first version of this particular task definition. We can feed that to aws ecs describe-task-definition --task-definition, passing the task definition name:version. Notice the low-level details about the task Container that we created for our nginx example, for example the image, the port, and other compute resources. Now, let's get the output of aws ecs list-tasks. We'll copy the identifier to feed into the aws ecs describe-tasks --tasks command. Notice that at the task level we get state information. So the task is an instantiation of the task definition: the task information covers things like how it is running and what's going on, while the task definition defines the properties of the task and how to set it up. Finally, let's get the output of aws ecs list-container-instances. Again, we get the identifier and feed it to the aws ecs describe-container-instances --container-instances command. Notice here that not only do we get the Docker-specific information that we would expect, but we also get the EC2-specific information associated with this EC2 instance, which, again, is part of our Docker cluster and meant to contain the Docker Containers that do the work of the cluster.
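The task-definition, task, and Container Instance commands follow the same list-then-describe pattern. The task definition name and the long identifiers below are placeholders; copy the real values from the output of the corresponding list command:

```shell
# Task definitions: the templates (name:version, e.g. version 1 here).
aws ecs list-task-definitions
aws ecs describe-task-definition --task-definition console-sample-app-static:1

# Tasks: running instantiations of a task definition (state information).
aws ecs list-tasks --cluster default
aws ecs describe-tasks --cluster default \
    --tasks 1dc5c17a-422b-4dc4-b493-371970c6c4d6

# Container Instances: the EC2 hosts backing the cluster.
aws ecs list-container-instances --cluster default
aws ecs describe-container-instances --cluster default \
    --container-instances f2756532-8f13-4d53-87c9-aed50dc94cd7
```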
Now that we've gotten some hands-on experience, we'll step back even more in the next lecture to see how the AWS services play into this, and the AWS-specific benefits that we get by running tightly integrated with AWS.
Todd Deshane is a Software Build Engineer at Excelsior College. He previously worked for Citrix Systems as a Xen.org Technology Evangelist. Todd has a Ph.D. in Engineering Science from Clarkson University, and while at Clarkson he co-authored a book called Running Xen and published various research papers related to virtualization and other topics. Todd is a DevOps advocate and is passionate about organizational culture.