Summary
Difficulty: Beginner
Duration: 1h 11m
Students: 153
Ratings: 4.4/5
Description

Container orchestration is a popular topic at the moment because containers can help solve problems faced by development and operations teams. However, running containers in production at scale is a non-trivial task. Even with the introduction of orchestration tools, container management isn’t without challenges. Container orchestration is a newer concept for most companies, which means the learning curve is going to be steep. That said, the effort should pay off in the form of standardized deployments, application isolation, and more.

This course is designed to make the learning curve a bit less steep. You'll learn how to use Marathon, a popular orchestration tool, to manage containers with DC/OS.

Learning Objectives

  • You should be able to deploy Mesos and Docker containers
  • You should understand how to use constraints
  • You should understand how to use health checks
  • You should be familiar with App groups and Pods
  • You should be able to perform a rolling upgrade
  • You should understand service discovery and load balancing

Intended Audience

  • Sysadmins
  • Developers
  • DevOps Engineers
  • Site Reliability Engineers

Prerequisites

To get the most from this course, you should already be familiar with DC/OS and containers and be comfortable with using the command line and with editing JSON.

Topics

Lecture - What you'll learn
Intro - What to expect from this course
Overview - A review of container orchestration
Mesos Containers - How to deploy Mesos containers
Docker Containers - How to deploy Docker containers
Constraints - How to constrain containers to certain agents
Health Checks - How to ensure services are healthy
App Groups - How to form app groups
Pods - How to share networking and storage
Rolling Upgrades - How to perform a rolling upgrade
Persistence - How to use persistent storage
Service Discovery - How to use service discovery
Load Balancing - How to distribute traffic
Scenario - Tie everything together
Summary - How to keep learning

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome back. I want to wrap up this course with a summary of what we've covered, as well as a look at what's coming up next. There's been a lot of information packed into this course, so let's break it down. We started out with an overview, where we talked about why containers and container orchestration are important, and the reason is that containers provide teams with a consistent way to package and deploy applications.

That consistency makes for a single entry point for deploying all containerized apps. We talked about Mesos containers after that. Mesos containers provide process isolation and are a great way to deploy and run legacy applications. We then talked about Docker containers, which use the same basic application definition files as Mesos containers, again, providing a consistent way to manage services.
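
As a reference, a minimal Marathon application definition for a Docker container looks something like the sketch below. The app ID, image, resource values, and port mapping are illustrative rather than taken from the course, and the exact networking fields can vary between Marathon versions; switching the container type to MESOS (and supplying a cmd) gives you a Mesos container instead.

```json
{
  "id": "/nginx-demo",
  "cpus": 0.5,
  "mem": 128,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "nginx" },
    "portMappings": [
      { "containerPort": 80, "hostPort": 0, "name": "web" }
    ]
  },
  "networks": [{ "mode": "container/bridge" }]
}
```

Saved as a file such as nginx.json, a definition like this can be submitted with dcos marathon app add nginx.json.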

We deployed an app called MiniTwit to a public node, where we were able to interact with it via a web browser. We went on to cover constraints, which allow you to control which agents your service will run on. This functionality allows you to target specific agents, for example to run a service on agents with GPUs, or to run a service on a specific operating system.
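
For reference, constraints are written into the app definition as an array of [field, operator, value] entries. Here is a minimal sketch; it assumes the agents have been tagged with a custom "os" attribute, and the app ID, command, and attribute values are hypothetical.

```json
{
  "id": "/constrained-service",
  "cmd": "./start.sh",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "constraints": [
    ["hostname", "UNIQUE"],
    ["os", "CLUSTER", "coreos"]
  ]
}
```

The UNIQUE operator spreads the three instances across different agents, while CLUSTER keeps them on agents whose "os" attribute equals "coreos"; operators such as LIKE, UNLIKE, GROUP_BY, and MAX_PER follow the same pattern.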

We then covered health checks, which are an essential part of every service. Health checks allow you to use HTTP, HTTPS, TCP, and command-based checks to determine whether a service is healthy.
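
As a sketch, a health check is declared as part of the app definition. The /health path and timing values below are placeholders, and the protocol can be any of the HTTP, HTTPS, TCP, or command-based variants mentioned above.

```json
"healthChecks": [
  {
    "protocol": "MESOS_HTTP",
    "path": "/health",
    "portIndex": 0,
    "gracePeriodSeconds": 300,
    "intervalSeconds": 30,
    "timeoutSeconds": 10,
    "maxConsecutiveFailures": 3
  }
]
```

If the check fails maxConsecutiveFailures times in a row, Marathon kills the task and starts a replacement.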

The health check lesson also covered readiness checks, which are used to determine when to start the health check. As an example, if you have an app that takes somewhere between 30 seconds and a minute to initialize, you can use a readiness check to know when the app is actually running, and once it's running, the health check can start its cycle of checking.
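
A readiness check is declared in a similar way. This fragment assumes the app defines a port named "web" and exposes a /ready endpoint, both of which are hypothetical here; readiness checks only run while a deployment is in progress, which is what lets them gate the start of regular health checking.

```json
"readinessChecks": [
  {
    "name": "warmup",
    "protocol": "HTTP",
    "path": "/ready",
    "portName": "web",
    "intervalSeconds": 30,
    "timeoutSeconds": 10,
    "httpStatusCodesForReady": [200]
  }
]
```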

We also briefly touched on application groups and dependencies. Some apps, especially older legacy apps, aren't built from microservices, so you may need to have several services running together; as an example, you may need a database server up and running before your web app, as in the group sketch below.
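
As a rough sketch, a group definition with a dependency might look like the following; the group ID, commands, and resource values are placeholders. Marathon won't start the webapp until the database it depends on is running (and healthy, if it has health checks).

```json
{
  "id": "/shop",
  "apps": [
    {
      "id": "database",
      "cmd": "./start-db.sh",
      "cpus": 1,
      "mem": 1024,
      "instances": 1
    },
    {
      "id": "webapp",
      "cmd": "./start-web.sh",
      "cpus": 0.5,
      "mem": 512,
      "instances": 2,
      "dependencies": ["/shop/database"]
    }
  ]
}
```

A group file like this can be submitted with dcos marathon group add shop.json.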

After that came a demo of pods, which showed how pods allow containers to share a networking stack. In the demo, we saw a Docker container and a Mesos container running in the same pod: the Docker container ran NGINX, and the Mesos container sent requests to NGINX via localhost, which showed how the two were connected.

We covered rolling upgrades, which are built into Marathon, and we saw how to use health checks in combination with rolling upgrades to ensure that we could deploy new versions safely. The lesson on persistence showed us how to use persistent volumes for stateful workloads. For service discovery, we saw how to use DNS as well as the API endpoint for fetching information about running services and their ports.

To show load balancing, we used the Marathon load balancer, Marathon-LB, which is based on HAProxy. Marathon-LB runs on the public nodes in the cluster and can serve as an internal or external load balancer, and the demo showed how to use labels to configure it. Then, to cap everything off, we created a more involved demo that used several services together.
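
To make a couple of those demos more concrete, here is a rough sketch of a pod definition along the lines of the NGINX example. The pod ID, container names, resource values, and the use of the default "dcos" overlay network are assumptions, as is the availability of curl in the second container.

```json
{
  "id": "/web-pod",
  "containers": [
    {
      "name": "nginx",
      "resources": { "cpus": 0.2, "mem": 64 },
      "image": { "kind": "DOCKER", "id": "nginx" },
      "endpoints": [
        { "name": "web", "containerPort": 80, "protocol": ["tcp"] }
      ]
    },
    {
      "name": "poller",
      "resources": { "cpus": 0.1, "mem": 32 },
      "exec": {
        "command": { "shell": "while true; do curl -s http://localhost:80 > /dev/null; sleep 5; done" }
      }
    }
  ],
  "networks": [{ "mode": "container", "name": "dcos" }],
  "scaling": { "kind": "fixed", "instances": 1 }
}
```

Because both containers share the pod's network namespace, the poller can reach NGINX on localhost; a pod file like this can be submitted with dcos marathon pod add web-pod.json.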

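The rolling upgrade and load balancing pieces are both driven by fields on the app definition. In this sketch the app ID, command, virtual host, and numbers are illustrative only: the HAPROXY labels are what Marathon-LB reads, and upgradeStrategy controls how much healthy capacity Marathon preserves, and how much extra it may start, during a deployment.

```json
{
  "id": "/minitwit",
  "cmd": "./start-web.sh",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "upgradeStrategy": {
    "minimumHealthCapacity": 0.5,
    "maximumOverCapacity": 0.2
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "minitwit.example.com"
  }
}
```

For persistence, a stateful service declares a local persistent volume under its container definition; the containerPath and size (in MiB) here are again just placeholders, and depending on the Marathon version a residency setting may also be required.

```json
"container": {
  "type": "MESOS",
  "volumes": [
    {
      "containerPath": "data",
      "mode": "RW",
      "persistent": { "size": 512 }
    }
  ]
}
```

On the service discovery side, Mesos-DNS gives a service a name such as minitwit.marathon.mesos, and the Marathon API reports the host and ports each task is actually using.
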
The Tweeter demo brought together everything we've learned in the course so far. At the start of the course, I laid out some learning objectives that I expected you to meet by the end. Specifically, I said you should be able to deploy Mesos and Docker containers, you should understand how to use constraints, you should understand how to use health checks, you should be familiar with app groups and pods, and you should be able to perform a rolling upgrade.

So, having made it this far, you should now feel comfortable with each of these objectives. So what's next, what comes after this? Next up is to try all of this out for yourself. Hands-on learning is the best teacher, so I encourage you to check out our labs. I also recommend that you create a cluster for yourself and play around, but keep in mind that the cluster consists of several agents, so if you're running this in the cloud, you're going to want to keep an eye on pricing.

Okay, as I mentioned at the start, I enjoy hearing from you all. If you don't provide feedback, good or bad, I can't make any improvements. So, if you want to reach out to me via support@cloudacademy.com, or @sowhelmed on Twitter, I'd love to hear from you. That's going to do it for this course. Thank you for watching, I hope you've enjoyed it and gotten something out of it, and I will see you in the next course.


About the Author
Students: 101113
Labs: 37
Courses: 44
Learning Paths: 58

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.