Pods

Difficulty: Beginner
Duration: 1h 11m
Students: 153
Ratings: 4.4/5
Description

Container orchestration is a popular topic at the moment because containers can help to solve problems faced by development and operations teams. However, running containers in production at scale is a non-trivial task, and even with the introduction of orchestration tools, container management isn't without challenges. Container orchestration is a newer concept for most companies, which means the learning curve is going to be steep. That effort should pay off, though, in the form of standardized deployments, application isolation, and more.

This course is designed to make the learning curve a bit less steep. You'll learn how to use Marathon, a popular orchestration tool, to manage containers with DC/OS.

Learning Objectives

  • You should be able to deploy Mesos and Docker containers
  • You should understand how to use constraints
  • You should understand how to use health checks
  • You should be familiar with app groups and pods
  • You should be able to perform a rolling upgrade
  • You should understand service discovery and load balancing

Intended Audience

  • Sysadmins
  • Developers
  • DevOps Engineers
  • Site Reliability Engineers

Prerequisites

To get the most from this course, you should already be familiar with DC/OS and containers and be comfortable with using the command line and with editing JSON.

Topics

Lecture: What you'll learn
Intro: What to expect from this course
Overview: A review of container orchestration
Mesos Containers: How to deploy Mesos containers
Docker Containers: How to deploy Docker containers
Constraints: How to constrain containers to certain agents
Health Checks: How to ensure services are healthy
App Groups: How to form app groups
Pods: How to share networking and storage
Rolling Upgrades: How to perform a rolling upgrade
Persistence: How to use persistent storage
Service Discovery: How to use service discovery
Load Balancing: How to distribute traffic
Scenario: Tie everything together
Summary: How to keep learning

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome back. In this lesson, we're going to check out pods. A pod is a group of colocated containers that share storage and networking. Take a look at this JSON here. Notice that there are two containers: an nginx container that uses Docker, and a Mesos container running a shell command.

Here's what I want you to focus on: these are two separate containers, but because they're defined as a pod, the shell command can run curl against localhost. If you deployed these two containers as separate apps and ran the same curl command in the Mesos container, nothing would happen, because nothing would be listening on localhost at port 80; the web server simply wouldn't be there.
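
To make that concrete, here's a minimal sketch of a two-container pod definition along the lines of the one shown in the demo. The ID, container names, resource values, and shell command are illustrative rather than the exact JSON from the lesson; the Docker-based container declares an image, the Mesos container declares an exec command, and both share the pod's networking.

    {
      "id": "/nginx-pod",
      "containers": [
        {
          "name": "web",
          "resources": { "cpus": 0.1, "mem": 128 },
          "image": { "kind": "DOCKER", "id": "nginx" },
          "endpoints": [ { "name": "http", "hostPort": 80, "protocol": [ "tcp" ] } ]
        },
        {
          "name": "checker",
          "resources": { "cpus": 0.1, "mem": 32 },
          "exec": { "command": { "shell": "while true; do curl -s localhost:80; sleep 3; done" } }
        }
      ],
      "networks": [ { "mode": "host" } ]
    }

Because both containers belong to the same pod instance, the checker container's curl against localhost reaches the nginx container's port 80. Pods can also declare shared ephemeral volumes when the containers need common storage.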

Since these two containers are created together as a pod, they share networking and storage, which in this case means curling against localhost returns the default nginx page. Let's see this in action. Notice that deploying a pod uses the pod subcommand rather than the app subcommand.
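
If you're following along from the terminal, the CLI equivalent is dcos marathon pod add followed by the path to the JSON definition, for example dcos marathon pod add nginx-pod.json if the sketch above were saved under that hypothetical file name.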

Other than the subcommand, it's the same process, and it's going to take a few seconds for the pod to reach a running state. And there it is. Notice that it shows up as just one service, but if we drill into it, you can see the two containers listed. Let's check out the logs for the nginx container. Here you can see several GET requests to nginx coming from curl. Now let's check out the logs for the Mesos container, where the output is what you'd expect: some markup from the default nginx landing page.

So, because the pod is a discrete unit, scaling the service scales both containers together. Let's scale this up to two instances. Okay. There are now two instances of this service, which means there are two nginx containers and two Mesos containers, and you can see that by drilling into the service and expanding each instance to show all of the containers.
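
If you'd rather scale from the terminal than from the UI, Marathon's pod schema supports a scaling block in the pod definition, which can then be applied with the dcos marathon pod update subcommand. A sketch of the stanza, added at the top level of the definition above and mirroring the two instances used in the demo:

    "scaling": { "kind": "fixed", "instances": 2 }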

As always, you're not limited to the UI. If you want to interact with pods from the command line, you can do so with the pod subcommand. Here you can see that there are two instances of the pod running, which is the same thing we saw in the UI. To remove a pod, you can also use the pod subcommand; the command for that would be dcos marathon pod remove followed by the ID of the pod.
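
For example, dcos marathon pod list prints the running pods and their instance counts, and dcos marathon pod remove /nginx-pod would tear down the pod from the hypothetical sketch above.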

Okay, great. There are all kinds of potential use cases for pods, but a common one is getting an existing legacy app into containers and managing it with DC/OS. If you've ever worked with legacy apps, then you know there are usually some odd requirements to support. Sometimes there are separate console apps that need to run on the same machine that's hosting your main app. Requirements like these may make you think containers aren't going to work without a lot of effort, and in some cases that's probably true. However, shared networking and shared storage will help with a lot of those migrations.

Okay, let's wrap up here. In the next lesson we'll be using rolling upgrades to deploy changes. So, if you're ready, I'll see you in the next lesson.

About the Author
Students: 100,581 | Labs: 37 | Courses: 44 | Learning Paths: 58

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.