With many enterprises shifting to containers, and Kubernetes established as the leading platform for deploying containerized applications, learning Kubernetes patterns is more essential than ever for application developers. Having this knowledge across all teams yields the best results, for developers in particular.
This Kubernetes Patterns for Application Developers Course covers many of the configuration, multi-container pods, and services & networking objectives of the Certified Kubernetes Application Developer (CKAD) exam curriculum.
Help prepare for the CKAD exam with this course, comprising six expertly instructed lectures.
- Understand and use multi-container patterns to get the most out of Kubernetes Pods
- Learn how to control network access to applications running in Kubernetes
- Understand how Kubernetes Service Accounts provide access control for Pods
- Use the Kubernetes command-line tool kubectl to effectively overcome challenges you may face when working with Kubernetes
This course is intended for application developers who are leveraging containers and using, or considering using, Kubernetes as a platform for deploying applications. However, significant parts of this course appeal to a broader audience of Kubernetes users. Individuals who may benefit from taking this course include:
- Application Developers
- DevOps Engineers
- Software Testers
- Kubernetes Certification Examinees
To successfully navigate this course, it is helpful to have:
- Knowledge of the core Kubernetes resources, including Pods and Deployments
- Experience using the kubectl command-line tool to work with Kubernetes clusters
- An understanding of YAML and JSON file formats
Related Training Content
This course is part of the CKAD Exam Preparation Learning Path.
Kubernetes pods allow you to have multiple containers sharing the same network space, and they can also share storage between containers. Often a single container is the right choice for a pod, but there are several common patterns for when you should use multiple containers. That is the topic of this lesson. In this lesson, we will first explain the motivation behind pods, and then we'll dive into three multi-container patterns: the sidecar, the ambassador, and the adaptor. Pods are an extra level of abstraction above containers. What benefits do we get by having this extra level of abstraction? Containers alone aren't enough for Kubernetes to effectively manage workloads. Pods allow you to specify additional information, such as restart policies and probes to check the health of containers. Pods also allow you to seamlessly deal with different types of underlying container runtimes, for example, Docker and rkt.
You deal with pods regardless of the underlying container runtime. By allowing multiple containers to share network and storage within a pod, you can have tightly coupled containers co-located and managed as a single unit, without needing to package them as a single container image. This allows for better separation of concerns and can improve container image reusability. The patterns in this lesson will illustrate this benefit. The first pattern we will cover is the sidecar pattern. It is the most common one. As the name suggests, the sidecar pattern uses a helper container to assist a primary container. Common examples include logging agents that collect logs and ship them to a central aggregation system. The logging example is explored in the Kubernetes observability lab, here on Cloud Academy. Other examples include file sync services and watchers. We'll consider a file sync sidecar shortly.
All of these examples add useful functionality to the main container, and can be accomplished by adding a sidecar rather than burdening the main container with additional responsibilities. This makes it easier for different development teams to work on each application separately, and also makes testing easier. Furthermore, you get the benefit of failure isolation. If the sidecar fails, say the logging agent fails, then the main container, say a web server, can continue to serve traffic. You can also independently update the sidecar container. It's worth pointing out here that all of these benefits are also true for the other multi-container design patterns that we'll cover. Let's take a look at a diagram for a file sync sidecar. The primary container in the pod is the web server container. The sidecar is a content puller. The content puller syncs with an external content management system, or CMS, to get the latest files for the web server to serve.
The web server serves the content to any clients that request it. How does the web server get the latest content from the content puller? They share the content by using a shared storage volume. The sidecar pattern is covered in depth in the Kubernetes observability lab. You'll see a pod manifest for configuring the sidecar pattern there. The second pattern we'll cover is the ambassador pattern. The ambassador pattern uses a container to proxy communication to and from a primary container. The primary container only needs to consider connecting to localhost, while the ambassador controls proxying the connections to different environments. This is because containers in the same pod share the same network space, and can communicate with each other over localhost.
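As a hedged sketch of the file sync sidecar just described (the image names and paths are illustrative assumptions, not from the course), the pod manifest might look like this:

```yaml
# File sync sidecar sketch: the web server serves content that the
# content-puller sidecar keeps in sync with an external CMS. The
# cms-sync image is hypothetical; the shared volume is the key idea.
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
      readOnly: true
  - name: content-puller
    image: registry.example.com/cms-sync:latest  # hypothetical sync agent
    volumeMounts:
    - name: content
      mountPath: /content
  volumes:
  - name: content
    emptyDir: {}   # shared storage with the same lifetime as the pod
```

Because both containers mount the same volume, files the content puller writes are immediately visible to the web server, with no coupling between the two container images.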
This pattern is commonly used to communicate with a database. You could configure environment variables in the primary container to control the database connection, but with the ambassador pattern, the application can be simplified to always connect to localhost, with the responsibility of connecting to the right database given to the ambassador. In production environments, the ambassador can implement logic to work with sharded databases as well, but the application in the primary container only needs to consider a single logical database, accessible over localhost. Another benefit of the ambassador pattern is that during application development, you can run a database on your local machine without requiring the ambassador, keeping the development experience simple. The ambassador may also be reused by multiple applications written in different languages, since that responsibility is taken out of the primary application. Let's visualize the ambassador pattern to reinforce the key points. This example is for a web app that uses a database for persistence.
The primary container is the web app, and the ambassador is a database proxy container. The web app handles requests from clients, and when the data needs to be updated, the web app sends a request over localhost, where it is received by the database proxy. The database proxy then forwards the request to the correct database backend. The database could have a single endpoint, or it could be sharded across multiple database instances. In the latter case, the ambassador can encapsulate the logic for sharding the requests. Meanwhile, the web app is free from any of the associated complexity. Now let's consider our final pattern, the adaptor pattern. The adaptor pattern uses a container to present a standardized interface across multiple pods. For example, presenting an interface for accessing output, such as logs, in a standardized format across several applications. The adaptor pattern is the opposite of the ambassador pattern, in that the ambassador presents a simplified view to the primary container, while the adaptor presents a simplified view of the application to the outside world.
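A minimal sketch of the ambassador setup just visualized, assuming hypothetical images and a PostgreSQL-style port, could look like this:

```yaml
# Ambassador sketch: the app always connects to localhost:5432; the
# db-proxy container forwards connections to the real database
# backend(s). Both images are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: app
    image: registry.example.com/web-app:latest   # hypothetical app image
    env:
    - name: DB_HOST
      value: "127.0.0.1"   # always localhost; the ambassador does the rest
    - name: DB_PORT
      value: "5432"
  - name: db-proxy         # the ambassador
    image: registry.example.com/db-proxy:latest  # hypothetical proxy image
```

Because containers in a pod share the same network namespace, the app reaches the proxy over the loopback interface, and only the proxy needs to know where the real database lives.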
The adaptor pattern is commonly used for normalizing application logs or monitoring data so they can easily be consumed by a shared aggregation system. The adaptor may communicate with the primary container using either a shared volume, when dealing with files, or over localhost, for example, when getting metric data from a REST API. The adaptor pattern allows you to adapt an application's output without requiring code changes. This may be required when you do not have access to an application's source code. Even if you do have access to the source, it is a cleaner separation of concerns to use an adaptor for each potential interface that may be required, rather than burdening the application with that complexity. Let's go through a demo illustrating how to implement the adaptor pattern. Here we have a pod manifest of a pretend legacy application. The application outputs raw metric data to a file every five seconds.
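The manifest being described might look roughly like the following reconstruction (the image, tag, and file path are assumptions, not the course's exact values):

```yaml
# Pretend legacy app: writes the date plus the raw output of `top`
# to a file every five seconds, in a format the monitoring system
# cannot consume directly.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: legacy-app
    image: busybox:1.36   # assumed image
    args:
    - /bin/sh
    - -c
    - >
      while true; do
        date > /tmp/raw.txt;
        top -bn1 >> /tmp/raw.txt;
        sleep 5;
      done
```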
The problem is that the format can't readily be consumed by your monitoring solution, which requires JSON-formatted data. You can see in the args that the metrics are simply the date and the raw output of the top command, which measures system and process resource usage. Let's view the contents of the raw output using the exec command to display the content with cat. The contents include a variety of metrics, but your monitoring system is only interested in the date, memory used, and user CPU percentage. Furthermore, the metrics should be presented in JSON format. This is where the adaptor pattern comes to the rescue. Let's go through the new pod manifest that uses an adaptor container to adapt the metrics into the desired format. The legacy app container is the same as before, but with one difference: the app mounts a volume at /metrics so that the raw metrics can be shared with the adaptor container.
The adaptor container also mounts the metrics volume, giving it access to the raw metrics. The adaptor's args are a few commands that parse the raw output and produce JSON output with the date, memory, and CPU usage every five seconds. The JSON is output to /metrics/adapted.json. At the bottom of the manifest is the volume declaration. The volume is of type emptyDir, which means the volume will have the same lifetime as the pod. There's no need for persistent volumes that can outlive the pod in this example. The main takeaways are that the containers share storage using a volume, and the adaptor presents an adapted view of the underlying raw metric data. Let's go ahead and create the pod, and we will use exec again to view the contents of the JSON metrics file. Now the output is in the format that the metric system needs, and it's updated every five seconds. With the metrics in a JSON file, you could add a web server container to serve the metric file at a REST API endpoint for a metric aggregation system that pulls in metrics.
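Putting the pieces together, the adapted pod might look roughly like this sketch; the image and the parsing commands are illustrative assumptions, since BusyBox top output varies and the course's exact manifest isn't reproduced here:

```yaml
# Adaptor sketch: the legacy app writes raw metrics to a shared
# emptyDir volume; the adaptor parses them into JSON at
# /metrics/adapted.json every five seconds.
apiVersion: v1
kind: Pod
metadata:
  name: adapted-legacy-app
spec:
  containers:
  - name: legacy-app
    image: busybox:1.36
    volumeMounts:
    - name: metrics
      mountPath: /metrics
    args:
    - /bin/sh
    - -c
    - 'while true; do date > /metrics/raw.txt; top -bn1 >> /metrics/raw.txt; sleep 5; done'
  - name: adaptor
    image: busybox:1.36
    volumeMounts:
    - name: metrics
      mountPath: /metrics
    args:
    - /bin/sh
    - -c
    - >
      while true; do
        d=$(head -n1 /metrics/raw.txt);
        mem=$(awk '/^Mem:/ {print $2}' /metrics/raw.txt);
        cpu=$(awk '/^CPU:/ {print $2}' /metrics/raw.txt);
        echo "{\"date\":\"$d\",\"memory\":\"$mem\",\"cpu\":\"$cpu\"}" > /metrics/adapted.json;
        sleep 5;
      done
  volumes:
  - name: metrics
    emptyDir: {}   # shares the pod's lifetime; no persistence needed
```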
Alternatively, you could configure the adaptor to push the JSON file contents into a metric aggregation system, but I'll leave that as an exercise for you. In this lesson on multi-container patterns for pods, we explained the rationale behind why pods are Kubernetes' smallest unit of deployment and why they allow multiple containers. Next, we explained three common multi-container patterns for pods: the sidecar, the ambassador, and the adaptor. We finished with a demo about how to implement the adaptor pattern to adapt a legacy application's metric data into the format required by the monitoring system. The next lesson will cover patterns for providing and restricting access to pods over the network in Kubernetes. When you're ready, continue on to learn more about Kubernetes networking.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.