The course is part of this learning path
DC/OS: Container Orchestration
Container orchestration is a popular topic at the moment because containers help solve problems faced by development and operations teams. However, running containers in production at scale is a non-trivial task. Even with the introduction of orchestration tools, container management isn't without challenges. Container orchestration is a newer concept for most companies, which means the learning curve is going to be steep. And while the learning curve may be steep, the effort should pay off in the form of standardized deployments, application isolation, and more.
This course is designed to make the learning curve a bit less steep. You'll learn how to use Marathon, a popular orchestration tool, to manage containers with DC/OS.
- You should be able to deploy Mesos and Docker containers
- You should understand how to use constraints
- You should understand how to use health checks
- You should be familiar with App groups and Pods
- You should be able to perform a rolling upgrade
- You should understand service discovery and load balancing
- DevOps Engineers
- Site Reliability Engineers
- Familiar with DC/OS
- Familiar with containers
- Comfortable with the command line
- Comfortable with editing JSON
| Lecture | What you'll learn |
|---|---|
| Intro | What to expect from this course |
| Overview | A review of container orchestration |
| Mesos Containers | How to deploy Mesos containers |
| Docker Containers | How to deploy Docker containers |
| Constraints | How to constrain containers to certain agents |
| Health Checks | How to ensure services are healthy |
| App Groups | How to form app groups |
| Pods | How to share networking and storage |
| Rolling Upgrades | How to perform a rolling upgrade |
| Persistence | How to use persistent storage |
| Service Discovery | How to use service discovery |
| Load Balancing | How to distribute traffic |
| Scenario | Tie everything together |
| Summary | How to keep learning |
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Welcome back. In this lesson we're going to take a look at two load balancing options for DC/OS: the first being virtual IPs, and the second being the Marathon load balancer. Virtual IPs are provided by the built-in Layer 4 load balancer named Minuteman. Virtual IPs, also called VIPs, allow you to assign a virtual IP address and port to a service.
Minuteman distributes traffic between your app instances, and you'll be able to access the service from inside the cluster on that virtual IP. Because you can interact with services from inside the cluster via a fixed virtual IP, this also serves as a form of service discovery, though again it's limited to being a Layer 4 option that is only accessible to other services inside your cluster.
VIPs have some requirements: you can't firewall traffic between nodes, you can't change the IP local port range, the ipset package has to be installed, and you need to run a stock kernel from CentOS, Red Hat Enterprise Linux, or CoreOS. Okay, we're going to see virtual IPs used in the next lesson.
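To make the VIP idea concrete, here's a minimal sketch of a Marathon app definition that assigns a named VIP through a port label. The service ID `/my-service` and port `5555` are placeholders for illustration, not values from this course:

```json
{
  "id": "/my-service",
  "instances": 2,
  "cpus": 0.1,
  "mem": 64,
  "portDefinitions": [
    {
      "port": 0,
      "protocol": "tcp",
      "labels": { "VIP_0": "/my-service:5555" }
    }
  ]
}
```

With a named VIP like this, other services inside the cluster can reach the app at a stable address such as `my-service.marathon.l4lb.thisdcos.directory:5555`, and Minuteman spreads the connections across the instances.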
So for now, let's shift toward Layer 7 load balancing and talk about the Layer 7 load balancer called marathon-lb. Marathon-lb is not built into Marathon or DC/OS; rather, it's available as a separate package from the DC/OS catalog that gets installed onto a public node. It's based on HAProxy, which is known for its high performance, and it can be used to balance traffic between internal and external services.
Most of the configuration for marathon-lb takes place through app labels, which makes it easy to configure in the UI or in the JSON. In previous lessons we ran the minitwit app on a public node so that it was accessible to the outside world. Running apps on public nodes is fine, though you'll more likely than not have more private nodes than public ones, so it's more common to run your apps on private nodes and then serve them up to end users with a load balancer running on the public nodes.
So let's use the Marathon load balancer to serve up traffic from minitwit instances running on private nodes. Recall that marathon-lb is available as a package from the DC/OS catalog, so let's install it with the command `dcos package install marathon-lb`. Let's say yes to confirm the install. Okay, and let's head into the UI and see if it's installed.
Okay, you can see that it's installed here; it's running as a service and there's nothing else running. So, now that we have marathon-lb up and running, it's waiting to do something. Let's head over to Visual Studio Code so I can show you the JSON for this app. The top section is going to be familiar: it uses the minitwit container, the container uses port 80, and the service uses port 10004. By default, marathon-lb uses service ports 10000 through 10100.
There will be three instances of this app, and since there isn't an accepted resource role set to slave_public, these will run on private nodes, which is the default behavior. Here's the important part of the app: these labels down here. These labels are how we configure the load balancer; the first two are settings related to deployments.
If you recall, way back in the lesson covering rolling deployments, I mentioned that marathon-lb was capable of additional types of deployments. These two settings are related to that. Marathon-lb is able to perform blue/green and canary deployments. On GitHub you can find a Python script named zdd, which stands for zero-downtime deployment, and it shows an example of how to handle these deployments.
However, it's really outside the scope of this course, so I'm just going to leave it for now. The next label tells the load balancer that this is an external service, and then there's this string-valued label that sets the virtual host value to the public agent's public IP address. In case you're wondering how the load balancer knows which port to use, that's where the zero in the VHOST label comes into play.
That zero represents the index into the port mappings array above. In this case there's only one port, at index zero. As you may have guessed, that means if you add another port mapping, you'll want to add it after all the others, or you'll have to change that zero in the VHOST label accordingly. Alright, let's create this app.
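An app definition along these lines would look roughly like the sketch below. The image name `example/minitwit`, the alternate port, and the IP address `203.0.113.10` are placeholders I've assumed for illustration; the label names are standard marathon-lb labels:

```json
{
  "id": "/minitwit",
  "instances": 3,
  "cpus": 0.5,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "example/minitwit",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 10004 }
      ]
    }
  },
  "labels": {
    "HAPROXY_DEPLOYMENT_GROUP": "minitwit",
    "HAPROXY_DEPLOYMENT_ALT_PORT": "10005",
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "203.0.113.10"
  }
}
```

The `HAPROXY_GROUP` label marks the service as external, and `HAPROXY_0_VHOST` ties the virtual host to the port mapping at index zero, which is the zero discussed above.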
The end result should be three instances running on private agents, with the Marathon load balancer running on a public agent and distributing traffic to the private agents. This deployment is going to take just a moment, and there it is. Over here in the browser we're pointing at the public IP address.
Now don't worry about this 503. It's happening because the browser refreshed after the Marathon load balancer was running but before the app was actually running on the private nodes, so there was nothing for the load balancer to serve up. By refreshing the page, we'll see the familiar minitwit app, only this time it'll be running on a private node and served up by the Marathon load balancer.
And there it is. So marathon-lb makes it easy to run your applications on private nodes while the Marathon load balancer runs on a public node to distribute the requests. Because marathon-lb is based on HAProxy, you're able to interact with HAProxy directly, including hitting the stats page.
So on the public node where marathon-lb is running, if you append haproxy to the end of the URL, you can see the stats. There's a lot of value in having this sort of info; however, it's publicly accessible by default, so you're probably going to want to lock it down. Alright, let's wrap up the lesson here.
In the next lesson we're going to build out a more complex scenario than we've previously built. So if you want to keep learning, then let's get started in the next lesson.
About the Author
Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.
When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.