
Going to Production

The course is part of these learning paths

Certified Kubernetes Administrator (CKA) Exam Preparation
Introduction to Kubernetes

Contents

Course Introduction and Overview
Production and Course Conclusion
Overview
Difficulty: Advanced
Duration: 1h 28m
Students: 2499

Description

Introduction 
This course provides an introduction to how to use the Kubernetes service to deploy and manage containers. 

Learning Objectives 
Be able to recognize and explain the Kubernetes service 
Be able to explain and implement a Kubernetes container
Be able to orchestrate and manage Kubernetes containers 

Prerequisites
This course requires a basic understanding of cloud computing. We recommend completing the Google Cloud Fundamentals course before completing this course. 

Transcript

Updates: The cluster ops landscape has changed a bit since the time of recording. CoreOS Tectonic has been integrated into Red Hat's OpenShift for enterprise Kubernetes deployments. A best-practices cluster bootstrapping utility called kubeadm is now a preferred way to create and operate small clusters on any platform. In the ecosystem section, Docker Compose can now natively deploy to Kubernetes.

Hello and welcome back to the Introduction to Kubernetes course for Cloud Academy. I'm Adam Hawkins and I'm your instructor for this lesson.

This lesson will be a bit different. Previous lessons taught you how to use Kubernetes; now we're covering a topic near and dear to my heart: production. Specifically, what you'll need to know before going to production with Kubernetes. This includes production best practices, resources that didn't fit into the course, cluster ops, and the ecosystem around Kubernetes. This lesson is just a presentation, so grab a coffee, lean back, and have a listen.

We only touched the surface of services. Services, along with deployments, are the two most important Kubernetes resources in my opinion. It's vital that you understand all of the features of these two things and know how to manage traffic coming in from the public internet.

First, the LoadBalancer type. We've only covered NodePort so far, and generally you would not use NodePort in production; instead you would use the LoadBalancer type. The LoadBalancer type instructs K8s to create a load balancer and allocate a public IP on supported cloud providers. Take the IP address, add an A record to your domain's DNS, and off you go. We didn't demo this functionality because it's not supported in Minikube.

Next, whitelisting IPs. You can declare a whitelist of source IPs, and K8s will automatically configure the incoming firewall rules. This is especially useful if you're running your application behind a service like Cloudflare.

Then, service discovery via DNS. We've used environment variables in our previous lessons, but there is one important trade-off: services must exist before the pods and/or deployments that consume them. This may not be possible for all applications. Instead you may use DNS. Kubernetes services also support DNS lookups via the kube-dns add-on.

Finally, the Ingress resource. This is a beta API as of Kubernetes 1.5. The Ingress resource is a type of proxy for pods; you can use it as a firewall, to handle virtual hosting, or even as a quasi API gateway. I highly recommend reading the Ingress guide. Ingress resources will be a big thing going forward and will dramatically change the way you deploy your applications with Kubernetes.
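To make the LoadBalancer type and IP whitelisting concrete, here's a minimal sketch of a Service manifest. The service name, selector labels, ports, and CIDR range are all hypothetical placeholders; `loadBalancerSourceRanges` only takes effect on cloud providers that support it.

```yaml
# Hypothetical Service exposing a web app through a cloud load balancer.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer             # cloud provider allocates a public IP
  selector:
    app: web-frontend            # must match the labels on the backing pods
  ports:
    - port: 80                   # port the load balancer listens on
      targetPort: 8080           # container port traffic is forwarded to
  loadBalancerSourceRanges:      # whitelist: only these CIDRs may connect
    - 203.0.113.0/24
```

Once the cloud provider allocates the external IP, that's the address you point your A record at.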

Our exercises have been done in a demo setting so far, so they're not really in line with how you would do things in production; however, we have introduced features that you should use in production. Time to crystallize everything with the best practices for production pods. A sketch after these practices shows how they fit together in a single manifest.

Never create pods manually. Always, always use deployments. Deployments give you scaling and rollout control. You get none of that with bare pods.

Set the restartPolicy. Kubernetes supports Always, which is the default, OnFailure, and Never. The default works for many applications, but of course there are scenarios where the default does not apply. You may need to let containers die, or only restart them on failure. Ensure the value makes sense for your application.

Set resource requests and limits. Setting requests ensures pods either get their required compute resources or fail to schedule. Setting limits ensures that containers do not consume an unexpected amount of compute resources.

Separate critical and non-critical workloads. This practice increases resource utilization. Non-critical containers get a limit set and may be placed anywhere in the cluster. Critical workloads can be guaranteed their minimum compute requirements and are scheduled appropriately.

Use nodeName or nodeSelector. These two settings inform the scheduler about node characteristics. Let me give you an example: one entire CPU on an AWS c4.xlarge instance is not the same as one on a t2.small. You can use these settings to place containers on an appropriate node. This is especially useful for heterogeneous clusters, or when workloads are divided into critical and non-critical.

Set liveness and readiness probes. This is mandatory for production environments, and I mean mandatory. These probes ensure your containers are judged healthy as defined by your probes, and not by generic container semantics.
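Here's that sketch pulling the practices into one manifest, using today's apps/v1 Deployment API. The image, label values, probe endpoint, and resource numbers are illustrative assumptions, not values from the course.

```yaml
# Hypothetical Deployment combining the production pod practices above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      nodeSelector:
        workload: critical             # assumes nodes are labeled for critical workloads
      containers:
        - name: api
          image: example.com/api:1.0.0   # placeholder image
          resources:
            requests:                  # minimum the scheduler must find on a node
              cpu: 250m
              memory: 256Mi
            limits:                    # hard ceiling; protects neighboring pods
              cpu: 500m
              memory: 512Mi
          readinessProbe:              # gate traffic until the app reports ready
            httpGet:
              path: /healthz           # placeholder health endpoint
              port: 8080
          livenessProbe:               # restart the container when this fails
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
```

One note on restartPolicy: pods managed by a Deployment always use Always; OnFailure and Never apply to bare pods and Jobs.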

Now that we've clarified production practices for the resources introduced in previous lessons, let's turn our attention to some that we have not covered and that you will certainly need in the future. A DaemonSet is a pod automatically scheduled onto every node in the cluster. This is especially useful for running something like a monitoring agent, or a log collector like Fluentd. Next, the StatefulSet. This was called a PetSet prior to Kubernetes 1.5. A StatefulSet is similar to a deployment, except that its pods get ordering and other stateful guarantees. You can use a StatefulSet to run a database, or another stateful application. Consider something like MongoDB: you can define a StatefulSet that brings up an arbiter, a primary, and then secondaries, in that order. However, a word of severe warning: you're best off focusing on stateless workloads. This area is actively developing and will naturally mature over time. Finally, Job and CronJob. What would an application be without some sort of batch processing, or even recurring reports? A Job creates one or more pods and tracks execution of the job through all the containers. A Job can also be scheduled with the CronJob resource.
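As one example from that set, here's a sketch of a CronJob for a recurring report. CronJob has since graduated to the batch/v1 API; the schedule, image, and arguments below are placeholders.

```yaml
# Hypothetical CronJob running a nightly batch report.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"                # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure     # rerun the container if the report fails
          containers:
            - name: report
              image: example.com/report:1.0.0   # placeholder image
              args: ["--since", "24h"]          # placeholder arguments
```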

Let's change gears a bit. We've discussed everything provided by Kubernetes itself; however, there is a vibrant ecosystem around Kubernetes, with tools ranging from cluster infrastructure management all the way up to application packaging. We'll touch on both. First we'll talk about cluster operations, a.k.a. cluster ops. I'll be honest with you: this is one of my personal favorite technical areas right now, and I think you'll like it too. Cluster ops generally refers to the work required to provision, maintain, and scale Kubernetes clusters. Let's start at the beginning. Clusters are not gods, although it may feel like it sometimes; they don't just spring into existence. They must be created. Kubernetes is a distributed system itself, and it's non-trivial to build from scratch.

Kelsey Hightower's "Kubernetes the Hard Way" covers everything you need to know to build and run K8s from scratch. This is a fabulous resource if you want to get really down and dirty and learn all of it. Most of us, myself included, consider this a reference manual rather than a tutorial. I suggest you check it out; you'll find it detailed and long. However, this use case is better served by automation. Luckily the community has created tools to turn "Kubernetes the Hard Way" into Kubernetes the easy way.

Kube-up. Kube-up is a popular script to bootstrap a new cluster, however it's deprecated at this point in time. Odds are you'll see references to it in documentation, and especially old blog posts, however it should be avoided going forward. There are simply better tools out there these days.

Kops, short for Kubernetes Operations. Kops is as official as you can get with open source Kubernetes tools. It is "the way to bootstrap new clusters on AWS or GCP." It's essentially kubectl for clusters. You should start here if you want to bootstrap and manage a new cluster.

Google Container Engine, a.k.a. GKE for short. The K stands for Kubernetes. Google Container Engine is a hosted Kubernetes cluster. It's the easiest and most straightforward way to get Kubernetes in production right now. I recommend this option if you're using GCP. I also recommend moving cloud providers just to use Google Container Engine, and trust me, this pains me as a hardcore AWS user.

Tectonic. Tectonic is a Kubernetes solution from CoreOS. It wraps the official Kubernetes releases in a tight package covering setup, scaling, and general administration. It also includes the kube-aws CLI for managing Kubernetes clusters on AWS.

Kismatic Enterprise Toolkit, or KET, comes from Apprenda. It's a useful suite of tools for provisioning, maintaining, and testing Kubernetes clusters. It includes the Kuberang tool, which may be used to smoke test a running Kubernetes cluster. This might be especially useful if you're following "Kubernetes the Hard Way." There are plenty of other self-hosted and hosted options for running Kubernetes in production. I recommend GKE because it's the easiest and fastest way to get going, but there are plenty of other options to choose from as well.

The ecosystem also provides many useful projects outside of tooling and infrastructure. Be sure to do your own research and evaluate the trade-offs before making the best decision for your particular use case. Helm is Kubernetes' package manager. You write packages called charts, then use the Helm CLI to install and upgrade charts as releases on your cluster. Charts contain all the resources, like services and deployments, required to run a particular application. Here's an example: you can use the official MySQL chart to run MySQL for application A, then deploy it again with a different release name for application B. Here's another example: you can create a chart for your entire microservice application, packaging up all the services, deployments, pods, everything, and deploy it in the same way. The charts repository is one of the most active repos in the Kubernetes GitHub organization. This is something you should definitely keep an eye on.
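As a sketch of what a chart looks like on disk, here's a hypothetical layout and Chart.yaml. The chart name, version, and description are placeholders; apiVersion: v1 marks a classic Helm-2-era chart, while Helm 3 charts use v2.

```yaml
# Hypothetical chart layout, shown as comments:
# mychart/
#   Chart.yaml       - chart metadata (this file)
#   values.yaml      - default, overridable configuration values
#   templates/       - templated Kubernetes manifests (services, deployments, ...)
apiVersion: v1
name: mychart
version: 0.1.0
description: A sketch of a chart packaging one microservice
```

Installing the same chart twice under different release names is exactly the MySQL scenario described above.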

We touched on Heapster a bit earlier. Heapster is a metric collection framework for Kubernetes. GKE installs it by default, so you'll have metrics for nodes, pods, and other things right out of the box. You can also install Heapster with the InfluxDB backend and Grafana frontend in a single command, and then you're rolling with nice metrics dashboards.

Kompose converts an existing Docker Compose application into a set of Kubernetes services and deployments. This is useful if you have an existing Docker Compose application and want to take Kubernetes out for a spin, or simply if you want to save yourself from writing a ton of YAML or JSON.
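For a feel of the input, here's a hypothetical docker-compose.yml; running `kompose convert` against a file like this emits matching Kubernetes deployments, plus a service for each published port. The image names are placeholders.

```yaml
# Hypothetical Docker Compose file for kompose to convert.
version: "2"
services:
  web:
    image: example.com/web:1.0.0   # placeholder application image
    ports:
      - "8080:8080"                # a published port becomes a Kubernetes service
  redis:
    image: redis:3.2
```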

I want to leave you with some other things to consider before putting Kubernetes into production, along with some final practices. First, keep your Kubernetes resource files in source control. These are really important files, and changes must be tracked. Ideally they are kept in the same repo as your source code. Remember, these things are just YAML and JSON. They should be linted and verified during CI; you definitely don't want mistakes in these files breaking your deploys.
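As a sketch of that CI step, here's a hypothetical GitLab-CI-style job; the job name, stage, and manifest directory are assumptions, and kubectl's client-side dry run does the parsing and validation.

```yaml
# Hypothetical CI job validating Kubernetes manifests on every commit.
lint-manifests:
  stage: test
  script:
    # Parse and validate every manifest without touching the cluster
    - kubectl apply --dry-run=client --validate=true -f k8s/
```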

Follow the pod best practices. We talked about it a little bit earlier, but you can also refer to this presentation and the official documentation for complete information.

Understand how to trigger, pause, resume and undo deployments; the kubectl rollout subcommands cover this, and it's critical in production environments. Know how to debug failures in pods and in their containers. The official guides are great for this kind of stuff.

Configure centralized logging. You can install Fluentd or similar to collect logs from all the containers and ship them off to something else. You can even run that system, like ELK for example, on Kubernetes itself; there's no need to have separate infrastructure just to run this kind of stuff.
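Here's a minimal sketch of that Fluentd collector as a DaemonSet, using today's apps/v1 API. The image tag is a placeholder, and a real deployment also needs Fluentd configuration and an output destination.

```yaml
# Hypothetical Fluentd log collector scheduled onto every node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.16-1   # placeholder image tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log         # node's container logs live under /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```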

Be prepared for production maintenance. You will need to work on nodes and do various things, so make sure you know how to move containers off bad nodes and onto new ones, and back again; kubectl cordon and kubectl drain are the relevant commands here. This is critical to maintaining high reliability.

Also, secure your cluster. The official documentation covers this in depth. There are multiple ways to implement authorization, just choose one and set it up before going to production.

Plan your availability requirements. You may consider a multi-master setup, or even a federated cluster. Ensure you understand your availability requirements, what the risks are, what failure modes to expect, and have a plan for resolving any of the issues.

Back up your etcd data. Kubernetes stores all of its internal data in etcd. If you lose this, then you're going to have a bad time, so make sure you back it up.
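One way to automate this is a CronJob that snapshots etcd on a schedule. This is a hedged sketch assuming a kubeadm-style cluster where etcd listens on localhost:2379 with certificates under /etc/kubernetes/pki/etcd; every path, image, and schedule here is a placeholder, and scheduling the pod onto a control-plane node (tolerations, node selection) is omitted for brevity.

```yaml
# Hypothetical nightly etcd snapshot via CronJob.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 3 * * *"                  # every day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          hostNetwork: true              # reach etcd on the node's loopback
          containers:
            - name: backup
              image: quay.io/coreos/etcd:v3.4.13   # placeholder image providing etcdctl
              command: ["/bin/sh", "-c"]
              args:
                - >
                  ETCDCTL_API=3 etcdctl
                  --endpoints=https://127.0.0.1:2379
                  --cacert=/etc/kubernetes/pki/etcd/ca.crt
                  --cert=/etc/kubernetes/pki/etcd/server.crt
                  --key=/etc/kubernetes/pki/etcd/server.key
                  snapshot save /backup/etcd-snapshot.db
              volumeMounts:
                - name: backup
                  mountPath: /backup
                - name: etcd-pki
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
          volumes:
            - name: backup
              hostPath:
                path: /var/backups/etcd   # placeholder backup destination
            - name: etcd-pki
              hostPath:
                path: /etc/kubernetes/pki/etcd
```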

Finally, and arguably most importantly, I'd say add telemetry. Remember, it's not in production until it's monitored, and you cannot have monitoring without telemetry data. There is a plethora of different metric collection tools, but really it doesn't matter which one you use; just pick one and go for it. It's easy enough to deploy one with a DaemonSet or anything else. Just be sure to collect cluster CPU and memory usage, the utilization percentages of each, and the same for your pods. Decide on the headroom you need and create an action plan for when a threshold is breached. Regardless, just make sure you have this data, a plan to understand your cluster, and know what to do when the data trends the wrong way.

This concludes our grab-bag lesson about going to production and a few other, well, tangential issues. Either way, I suggest you replay this lesson to internalize all the nuggets of information we've covered. I hope this lesson clarifies some questions you may have been asking yourself, and also provides you a strong road map of how to move forward with K8s. Here's what we put on the road map: production best practices, cluster ops tools, the ecosystem, and production preparedness.

This also concludes the instructional part of the course, so take a deep breath and reflect for a moment on what we've covered. After that, give yourself a well-deserved pat on the back. Congratulations, you've made it to the end. Thanks for sticking with me. Join me for the next lesson where we recap and conclude the course.

About the Author

Students: 4768
Courses: 4
Learning paths: 1

Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.
