Amazon ECS Service
This course is an introduction to the Amazon EC2 Container Service (ECS). ECS is a highly scalable, high-performance container management service that supports Docker. This course will provide a detailed introduction to what ECS is and how to get started using it. ECS has built-in support for many Amazon EC2 services and also allows you to customize parts of the infrastructure to meet your application-specific needs. This course will also provide a brief overview of the rich ecosystem that is developing around ECS, including continuous integration, scheduling, and monitoring.
This course is for developers or operations engineers looking to deploy containerized applications on Amazon ECS. Experience with container technology (e.g. Docker) or Amazon EC2 would be helpful but is not required.
- Describe the concepts around running Docker containers on Amazon ECS.
- Run and configure containers on ECS.
- Understand the ecosystem around EC2 Container Service (ECS) to help guide next steps.
This Course Includes
- Over 45 minutes of high-definition video
- Hands-on demo
What You'll Learn
- Course Intro: An introduction to what will be covered in this course.
- ECS Overview: In this detailed overview, we'll cover task definitions, resource allocation, service definitions, capacity, load balancing, scheduling, cluster configuration, and security.
- ECS Demo: A hands-on demo of the ECS service.
- AWS Related Services: In this lesson, we'll go through ELB, EBS, and IAM.
- Ecosystem: In this lesson, you'll learn about the ecosystem of third-party applications and services.
- Summary: A wrap-up and summary of what we've learned in this course.
Hello and welcome back to the course, Introduction to Amazon EC2 Container Service. In this lecture, we'll look at some of the third-party applications and services available in the ECS ecosystem. We'll break these down into the following categories: service discovery; scheduling; automation, which includes Continuous Integration and Continuous Delivery; and monitoring. Service discovery is the method by which services are made available. A service can be exposed publicly, such as a web application.
Or privately, such as within a multi-tier or microservice application architecture. Service discovery as a problem has been around since the humble beginnings of the web. In the early days, service discovery was done manually, by a system administrator managing a central load balancer or a data server. Manually, or even statically, configured services will not scale, especially in the context of Docker and microservices. In a microservice architecture built on Docker containers, the lifespan of a particular container may be on the order of seconds.
So its availability or unavailability needs to be communicated dynamically and fast. Where it lives and how to access it needs to be easy to discover so that the web application can remain as highly available as possible. There are four kinds of approaches to keeping track of services: using DNS SRV records, using key value stores, using load balancers, and using software defined networking (SDN). DNS SRV records are convenient because they provide not only the host and IP mapping but also the port number. This seems to be a low-overhead, ubiquitous solution to service discovery. Key value store solutions for aiding with service discovery are a very common approach.
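To make the SRV point concrete, here is a minimal sketch of why these records are handy: their answer data carries priority, weight, port, and target host, so a client learns the port without any extra lookup. The record text and hostnames below are hypothetical examples, not real DNS data.

```python
def parse_srv(record: str) -> dict:
    """Parse the RDATA portion of a DNS SRV record.

    SRV answers have the form "priority weight port target", so unlike
    a plain A record they tell the client which port to connect to.
    """
    priority, weight, port, target = record.split()
    return {
        "priority": int(priority),
        "weight": int(weight),
        "port": int(port),
        "host": target.rstrip("."),  # drop the trailing root dot
    }

# e.g. the answer to a query for a hypothetical _http._tcp service
endpoint = parse_srv("10 60 8080 web-1.example.internal.")
print(endpoint["host"], endpoint["port"])  # web-1.example.internal 8080
```

In practice a client would fetch the record with a resolver library and then sort candidates by priority and weight; the parsing step is the part shown here.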
And while there is some complexity in keeping track of the state of the underlying store, a lot of sophisticated solutions use this approach. Another natural solution is to use load balancers to help keep track of services. This is a great place to keep this information, because a load balancer is already involved in routing and spreading the incoming traffic. The challenge here becomes scaling load balancers and dealing with the complexity that comes with distributed load balancers.
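The key value store pattern is easy to sketch: each container registers itself with a time-to-live and must heartbeat before the TTL expires, so short-lived containers disappear from lookups automatically. The class and TTL values below are a toy illustration of the pattern used by etcd- or Consul-backed discovery, not any real API.

```python
class ServiceRegistry:
    """Toy in-memory key value store registry with TTL-based expiry."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._entries = {}  # service name -> {endpoint: expiry time}

    def register(self, name: str, endpoint: str, now: float):
        # A re-register acts as a heartbeat: it pushes the expiry forward.
        self._entries.setdefault(name, {})[endpoint] = now + self.ttl

    def lookup(self, name: str, now: float):
        # Prune anything whose TTL has lapsed, then return live endpoints.
        live = {ep: exp for ep, exp in self._entries.get(name, {}).items()
                if exp > now}
        self._entries[name] = live
        return sorted(live)

registry = ServiceRegistry(ttl_seconds=5)
registry.register("web", "10.0.0.5:8080", now=0)
registry.register("web", "10.0.0.6:8080", now=0)
print(registry.lookup("web", now=1))  # both endpoints still within TTL
print(registry.lookup("web", now=6))  # TTLs expired: containers gone
```

The "complexity" mentioned above lives in making a store like this consistent and available across machines, which is exactly what etcd, Consul, and ZooKeeper provide.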
Another, more recent, solution is to use software defined networking approaches. As networking in the cloud has evolved, the flexibility of software has outpaced the agility of the network hardware vendors. So we see what was traditionally a hardware control plane moving into software. This has opened up new opportunities and new ways of solving distributed systems problems in the cloud. Solutions that take advantage of this programmable networking model have been built to focus specifically on the networking of containers, to help overcome the challenges of traditional networking.
The number of software vendors in the service discovery space is large, and it has been growing over the last several years. A few good options that focus specifically on ECS are the following. Amazon has created two reference architectures to handle service discovery using, you guessed it, Amazon Web Services. Amazon has released the code, templates, and example applications to get you started. One reference architecture works primarily with load balancers and the Route 53 DNS service, in conjunction with CloudWatch and AWS Lambda, and is integrated directly into ECS. The other reference architecture doesn't rely on load balancers and uses only the Route 53 DNS service; it is also integrated directly into ECS. Links for these are available in the video transcript.
Another option for service discovery is Consul. Consul provides health checking, service discovery, and application configuration. For service discovery, it relies on DNS, while allowing HTTP as an option for service lookup. The Consul team has provided code templates and an example application for deploying on ECS. Yet another option for service discovery is Weave Net. Weave Net uses software defined networking and DNS A records to provide service discovery. Weave Net has also provided code templates and an example application for deploying to ECS. There are a lot of other options that we don't have time to cover in this course. Some common ones include Eureka from Netflix, etcd, SkyDNS (built on top of etcd), Mesos-DNS, Spartan, Distributed DNS, and ZooKeeper.
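As a small illustration of Consul's HTTP lookup option, the sketch below parses a response in the shape returned by Consul's catalog endpoint (`/v1/catalog/service/<name>`). The field names follow the Consul API, but the node names and addresses are made-up sample data, and a real client would fetch this JSON over HTTP rather than inline it.

```python
import json

# Trimmed, hypothetical catalog response for a service with two instances.
sample = json.loads("""
[
  {"Node": "ecs-host-1", "Address": "10.0.1.10",
   "ServiceAddress": "", "ServicePort": 8080},
  {"Node": "ecs-host-2", "Address": "10.0.1.11",
   "ServiceAddress": "172.17.0.2", "ServicePort": 8080}
]
""")

def endpoints(catalog):
    # Consul leaves ServiceAddress empty when the service shares the
    # node's address, so fall back to the node-level Address field.
    return [((entry["ServiceAddress"] or entry["Address"]),
             entry["ServicePort"])
            for entry in catalog]

print(endpoints(sample))
# [('10.0.1.10', 8080), ('172.17.0.2', 8080)]
```

The same data is available over DNS as SRV records, which is why Consul works well alongside the DNS-based approaches discussed earlier.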
One thing to consider when thinking about service discovery is that it might get wrapped into solutions that are not focused solely on service discovery, but are more general-purpose service management tools and service proxies. One such example is Envoy, but there are others that may take this approach and be general enough to work well with ECS. The next area of third-party service integration that I'd like to cover is scheduling.
I talked a bit about scheduling at a high level earlier in this course, so let's review the concept briefly and dig into what the ecosystem of options looks like. This is an area that is likely to evolve, and more options will become available. Scheduling a task comes down to placement, that is, asking which instance the task should run on, and task management, in other words, what state the task is in. Schedulers need to consider such things as fairness, recovering failed jobs, scaling, and helping to manage application updates. Scheduling algorithms need to take into account various workload considerations, such as how to balance batch jobs of varying lengths, long-running service jobs, and the resources that these jobs will take. ECS has a built-in scheduler and also allows you to mix and match third-party schedulers.
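The placement half of scheduling can be sketched in a few lines: filter out instances without enough free CPU and memory, then break ties with a strategy, here "most free CPU first", a simple spread heuristic. The instance and task dictionaries below are illustrative data structures, not the ECS API.

```python
def place(task, instances):
    """Return the id of the instance to run `task` on, or None.

    A toy placement step: filter by resource fit, then prefer the
    least-loaded instance (a simple spread strategy).
    """
    cpu, mem = task["cpu"], task["memory"]
    candidates = [i for i in instances
                  if i["free_cpu"] >= cpu and i["free_memory"] >= mem]
    if not candidates:
        return None  # the scheduler must wait or scale the cluster out
    best = max(candidates, key=lambda i: i["free_cpu"])
    best["free_cpu"] -= cpu      # reserve the resources on that instance
    best["free_memory"] -= mem
    return best["id"]

cluster = [
    {"id": "i-1", "free_cpu": 512, "free_memory": 1024},
    {"id": "i-2", "free_cpu": 1024, "free_memory": 2048},
]
print(place({"cpu": 256, "memory": 512}, cluster))  # i-2 (most free CPU)
print(place({"cpu": 900, "memory": 512}, cluster))  # None: nothing fits now
```

Real schedulers layer the concerns from the paragraph above, such as fairness, failure recovery, and rolling updates, on top of this basic fit-and-choose loop.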
A few different leading strategies for scheduling jobs have already emerged: monolithic, exemplified by Kubernetes; two-level pessimistic concurrency, exemplified by Marathon on Apache Mesos; and shared state optimistic concurrency, exemplified by ECS. To get started, you can safely use the built-in ECS scheduler. If you have a special use case, or are looking to make performance improvements, it may be worth looking at these options. Another interesting area where the third-party ecosystem can make a great contribution to your ECS deployment is monitoring.
Monitoring needs to be considered at all levels of the stack, and this includes the ecosystem resources that traditional monitoring is designed for. Containers add another layer of complexity to consider. Some possible approaches include the following: AWS CloudWatch metrics, Datadog, Dynatrace, Sysdig Cloud, and an AppDynamics extension. Some amount of monitoring can be added easily using AWS CloudWatch metrics. If you're looking for vendor solutions from companies focusing on these issues, take a look at these offerings. Another really interesting opportunity for third-party services to improve your ECS deployment is automation tools. In particular, there are a lot of tools focused on Continuous Integration and Continuous Delivery.
Continuous Integration includes building and testing all the different aspects of your application at each commit, for example, changes to your application source code or infrastructure. Continuous Delivery takes this one step further, by pushing the change all the way to production through a pipeline of testing and promotion through the necessary test environments. Some options in this space include AWS CodeDeploy, Jenkins, Codeship, Convox, CircleCI, and Codefresh. Now that we've covered the ecosystem, we'll wrap up with a summary in the last video of this course.
About the Author
Todd Deshane is a Software Build Engineer at Excelsior College. He previously worked for Citrix Systems as a Xen.org Technology Evangelist. Todd has a Ph.D. in Engineering Science from Clarkson University, and while at Clarkson he co-authored a book called Running Xen and published various research papers related to virtualization and other topics. Todd is a DevOps advocate and is passionate about organizational culture.