Overview
Difficulty: Beginner
Duration: 1h 11m
Students: 153
Ratings: 4.4/5
Description

Container orchestration is a popular topic at the moment because containers can help solve problems faced by both development and operations teams. However, running containers in production at scale is a non-trivial task, and even with orchestration tools, container management isn't without challenges. Container orchestration is still a new concept for most companies, so the learning curve is steep, but the effort should pay off in the form of standardized deployments, application isolation, and more.

This course is designed to make the learning curve a bit less steep. You'll learn how to use Marathon, a popular orchestration tool, to manage containers with DC/OS.

Learning Objectives

  • You should be able to deploy Mesos and Docker containers
  • You should understand how to use constraints
  • You should understand how to use health checks
  • You should be familiar with app groups and pods
  • You should be able to perform a rolling upgrade
  • You should understand service discovery and load balancing

Intended Audience

  • Sysadmins
  • Developers
  • DevOps Engineers
  • Site Reliability Engineers

Prerequisites

To get the most from this course, you should already be familiar with DC/OS and containers and be comfortable with using the command line and with editing JSON.

Topics

Lecture - What you'll learn
Intro - What to expect from this course
Overview - A review of container orchestration
Mesos Containers - How to deploy Mesos containers
Docker Containers - How to deploy Docker containers
Constraints - How to constrain containers to certain agents
Health Checks - How to ensure services are healthy
App Groups - How to form app groups
Pods - How to share networking and storage
Rolling Upgrades - How to perform a rolling upgrade
Persistence - How to use persistent storage
Service Discovery - How to use service discovery
Load Balancing - How to distribute traffic
Scenario - Tie everything together
Summary - How to keep learning

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome back. In this lesson we're going to take a look at what container orchestration is and the responsibility of the orchestrator. Imagine you decide to run your applications inside of Docker containers just so that you have a single deployment process. After all, if all of your apps are inside of a container, then once you master deploying and running containers it doesn't matter what's running inside.

Before containers, if you were running your applications on virtual machines, you might deploy your applications out to different servers and have one instance of the application running per server. When you needed to scale, you might add more virtual machines behind the load balancer. When managed this way, each VM served one role, and even if a server was underutilized, you were still paying for it to run.

Now, if you try to use Docker containers in the same way by running one per server, it would work; I've even seen it work, and in some cases it may be the best option. By and large, though, if you're only running one container instance per VM, you're not getting the full value out of the containers or the VM itself.

Containers provide more than just a standardized way to deploy applications. They also provide process isolation, multi-tenancy, and more. To get the most out of containers, you need to treat them differently: they're not quite virtual machines, and they're not quite standard binaries, and that's where container orchestration tools show their value.

Container orchestration tools let you get the most out of containers by providing functionality such as scheduling, resource management, and service management. Scheduling covers tasks such as placing a container on a node with available resources, replication and scaling, resurrecting a container whose process has been terminated, rolling deployments, upgrades and downgrades, colocation, and more.

Resource management covers resources such as CPU, GPU, memory, volumes, ports, and IP addresses. Service management covers the things that keep your services running and healthy, which includes load balancing and health checks. In the past few years, several popular orchestration tools have cropped up.

By default, the orchestrator provided with DC/OS is Marathon, and as of 1.10 there's also Kubernetes, but we're going to focus on Marathon. Marathon is an orchestration tool for managing long-running containerized tasks; common use cases include web applications, app servers, databases, API servers, and other long-running processes.
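
To make those responsibilities concrete in Marathon's terms, here is a minimal sketch of an app definition. The values are illustrative placeholders rather than anything from this course, and the command assumes python3 is available on the agent nodes: instances and upgradeStrategy relate to scheduling; cpus, mem, and portDefinitions to resource management; and healthChecks to service management.

  {
    "id": "/example-web",
    "cmd": "python3 -m http.server $PORT0",
    "instances": 3,
    "cpus": 0.5,
    "mem": 128,
    "portDefinitions": [
      { "port": 0, "name": "http" }
    ],
    "healthChecks": [
      {
        "protocol": "MESOS_HTTP",
        "path": "/",
        "portIndex": 0,
        "gracePeriodSeconds": 30,
        "intervalSeconds": 10,
        "maxConsecutiveFailures": 3
      }
    ],
    "upgradeStrategy": {
      "minimumHealthCapacity": 0.5,
      "maximumOverCapacity": 0.2
    }
  }

Saved to a file such as app.json, a definition like this would typically be submitted with the DC/OS CLI, for example with dcos marathon app add app.json. The health check protocol names and a few other details vary between Marathon versions, so treat this as a sketch rather than a template.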

Marathon can run two different types of containers: Mesos containers, which are based on cgroups and namespaces, and Docker containers. So, in the next couple of lessons we're going to jump right in with both Mesos and Docker containers. If you're ready to see Mesos containers in action, then I'm going to see you in the next lesson.
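
As a rough preview of those lessons, here are two sketch app definitions showing the container block each type uses. The ids, the nginx image, and the resource values are illustrative placeholders rather than examples from the course, and the exact placement of the Docker networking fields varies between Marathon versions.

A Mesos container running a plain command:

  {
    "id": "/mesos-example",
    "cmd": "while true; do echo 'still running'; sleep 10; done",
    "instances": 1,
    "cpus": 0.25,
    "mem": 64,
    "container": { "type": "MESOS" }
  }

A Docker container running an image:

  {
    "id": "/docker-example",
    "instances": 1,
    "cpus": 0.25,
    "mem": 64,
    "container": {
      "type": "DOCKER",
      "docker": {
        "image": "nginx",
        "network": "BRIDGE",
        "portMappings": [
          { "containerPort": 80, "hostPort": 0 }
        ]
      }
    }
  }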


About the Author
Students: 101113
Labs: 37
Courses: 44
Learning Paths: 58

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.