Course Intro
Microservices
Go
Kubernetes
Putting it all Together
Integrating with GitHub and Google Cloud
Conclusion
This course explores how to build microservices in Go and deploy them on Kubernetes to create reusable components that can be fully managed in the cloud. We'll talk about what a microservice is and its overall architecture. We'll then take a look at the Go programming language, its benefits, and why it works well for building microservices.
You'll also get an intro to Kubernetes, including what it is, what it's used for, and the key components needed to take a microservice from code to an application exposed on the internet.
We'll then combine these three technologies in an example use case where we'll build a microservice in Go and deploy it on Kubernetes. Finally, we'll look at CI/CD integration with GitHub and Google Cloud and how you can automate your deployments.
Learning Objectives
- Learn about microservices and their overall architecture
- Learn about the Go programming language and why it's good for building microservices
- Understand how Kubernetes can be used to deploy microservices
- Learn about CI/CD with GitHub and Google Cloud
Intended Audience
This course is intended for engineers throughout the tech stack or anyone who wants to get their feet wet in DevOps and learn how programs can be managed in the cloud.
Prerequisites
There are no essential prerequisites for this course. However, we recommend that you have:
- Experience with at least one high-level programming language, whether that be Java, Python, or Ruby
- A conceptual understanding of Linux containers and/or Docker
Now we're going to cover the components of Kubernetes. As I mentioned earlier, there are a lot of components to go over, but we'll only be focusing on the ones you'll need to get from installation to an application exposed on the internet.
We need to start with a foundation, and in Kubernetes terms, that's the cluster and its nodes. This is the baseline for Kubernetes to be running. A Kubernetes cluster is an all-encompassing unit comprising a control plane, an API, and nodes.
A node is just an underlying VM. So if you're using Google Cloud, this means that when you create your cluster, it will create anywhere between one and N Compute Engine instances under the hood, based on the requirements you set. I believe the default here is three VMs, each with two CPUs and four gigs of RAM. There will be a master node that is used to manage your cluster and communicate using the Kubernetes API.
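As a rough sketch of those defaults, assuming you're on GKE with the gcloud CLI set up (the cluster name, machine type, and zone below are placeholders, not values from the course):

```sh
# Hypothetical example: create a three-node cluster where each node is an
# e2-medium VM (2 vCPUs, 4 GB of RAM).
gcloud container clusters create demo-cluster \
  --num-nodes=3 \
  --machine-type=e2-medium \
  --zone=us-central1-a
```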
Within your cluster, sitting on top of multiple VMs, Kubernetes will deploy your applications, distributing them across the VMs that have the most resources available. Once you have a cluster up and running, you can start working with the smallest component of Kubernetes: pods. And a pod is not a container. A pod is an encapsulation of one or more containers.
Normally, what you'll find in production is a one-to-one relationship between a pod and a container. That is, most of the time, you will have one container residing in a pod. Now, that doesn't mean you have to do it that way. There are cases where you would want to have multiple containers running within a single pod. For example, you could have a containerized application that writes to disk, or in this case a shared volume, and another containerized application that reads from that shared volume when there are modifications. Rather than deploying two different pods, one for each application, you can house them within a single unit. This can reduce latency, and as you scale out, the pods scale as a unit rather than the individual applications.
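As an illustration of that pattern, a pod manifest with two containers sharing a volume might look something like this sketch (the pod name and images are hypothetical):

```yaml
# Hypothetical pod with a writer and a reader container sharing one volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}                    # scratch volume that lives with the pod
  containers:
    - name: writer
      image: example/writer:latest    # writes files into /data
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: example/reader:latest    # reads from /data when files change
      volumeMounts:
        - name: shared-data
          mountPath: /data
```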
A key piece to note about pods is that they have internal networking only, as the containers within a pod are co-located on the same host and are configured to share a network stack and other resources. In a production environment, you will almost never deploy a pod by itself. Instead, you'll leverage another component of Kubernetes to handle that for you, which brings us to the next step in the Kubernetes hierarchy that we're going to cover.
Deployments. Deployments provide declarative updates for pods and ReplicaSets. What they do is manage the health and availability of your pods. A deployment will deploy a single pod or multiple pods depending on a template. Within Google Kubernetes Engine, you'll find this under Workloads.
When you create your first deployment, GKE will create the deployment and manage the pods being deployed into Kubernetes for you. It will create your ReplicaSets for the deployment and it will also create what's called a Horizontal Pod Autoscaler. This provides you with high availability and scaling if your microservices are getting heavy traffic.
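To make that concrete, here's a minimal Deployment sketch; the name, labels, and image are placeholders, and on GKE you could create the equivalent through the Workloads page in the console:

```yaml
# Hypothetical Deployment managing three replicas of a shopping-cart pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shopping-cart
spec:
  replicas: 3                    # desired number of pods
  selector:
    matchLabels:
      app: shopping-cart
  template:                      # pod template the Deployment stamps out
    metadata:
      labels:
        app: shopping-cart
    spec:
      containers:
        - name: shopping-cart
          image: gcr.io/my-project/shopping-cart:v1   # placeholder image
          ports:
            - containerPort: 5000
```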
If, for example, your shopping cart microservice is experiencing heavy load, Kubernetes will automatically spin up more instances of that pod to meet the demand of the traffic. Once the traffic settles down, Kubernetes will notice this as well and will actually scale back to your minimum number of running pods. Usually one pod is fine to start with, but you may find that there is always heavy traffic and you want to ensure that there is a minimum of, let's say, three pods.
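A Horizontal Pod Autoscaler expressing that minimum of three pods might look like the following sketch (GKE can create one for you, so treat the name, bounds, and CPU target as illustrative assumptions):

```yaml
# Hypothetical autoscaler: keep at least 3 shopping-cart pods, at most 10,
# scaling on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: shopping-cart
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: shopping-cart
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```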
No matter what your requirements are, Kubernetes can scale up and then scale back down. Deployments also make updating your applications seamless. When your containerized applications are updated and published, you'll inform Kubernetes of this change. Without deployments, you would need to manually deploy pods with the new container and then remove the old ones from your cluster.
With the deployment component, you can change the template to point to the newly published image. Kubernetes will bring up one new instance and ensure that it runs properly. It will then decommission an older version of the pod, spin up another instance, and repeat this process until it has decommissioned all the old versions of your image and has the desired number of pods running for your deployment, all without any downtime for your users.
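That one-up, one-down behavior corresponds to the Deployment's rolling update strategy. A sketch of the relevant fields, with illustrative values:

```yaml
# Hypothetical rolling-update settings: bring up one new pod at a time and
# never take an old pod away until its replacement is healthy.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
```

Pointing the pod template at the new image, for example with `kubectl set image deployment/shopping-cart shopping-cart=gcr.io/my-project/shopping-cart:v2` (names are placeholders), is what triggers the rollout.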
So we have an understanding of Kubernetes pods and deployments, but we don't actually have a way to interact with them publicly or really, internally. And that's where Services and Ingress come in. Kubernetes Services are an abstract way to expose an application running as a set of pods as a network service. This creates a policy with which to access your pods properly.
In Kubernetes, you don't need to modify your application to use any type of unfamiliar service discovery mechanism. Instead, Kubernetes gives your pods their own IP addresses and a single DNS name for a collection of pods and it will load balance between them. While the DNS name is internal, these services can be for internal or external network management.
There are three types of services I want to cover before discussing Ingress. The first and most basic type is called a ClusterIP. This is the default service in Kubernetes. A ClusterIP has access within the cluster only. Think of this as being used for internal API communication between other deployments and services that need to be load balanced.
All services within Kubernetes require a service port and a target port. The service, for example, can listen on port 3000 but forward traffic to your application running on port 5000. The only way to expose a ClusterIP externally is through a Kubernetes proxy. This is done through a management tool that has access to your entire cluster. This is not safe for production. It is, however, great if you need to access an internal service on your local machine for testing or development, but again, this doesn't actually give you access for your users.
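A ClusterIP service matching that 3000-to-5000 example might look like this sketch (the service name and selector are assumptions):

```yaml
# Hypothetical internal-only service: listens on port 3000 inside the
# cluster and forwards to pods listening on port 5000.
apiVersion: v1
kind: Service
metadata:
  name: cart-internal
spec:
  type: ClusterIP          # the default type; shown here for clarity
  selector:
    app: shopping-cart
  ports:
    - port: 3000           # service port
      targetPort: 5000     # container port
```

For local testing you could reach it with something like `kubectl port-forward service/cart-internal 3000:3000`, which is the kind of proxy access mentioned above.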
The next service we'll discuss is the NodePort service. A NodePort service is the most primitive way you can get external traffic directly to your applications. A NodePort, as the name implies, opens a specific port on all of your nodes.
Now remember, your nodes are Compute Engine VMs that are sitting elsewhere in your cloud infrastructure. These VMs have their own external IP addresses. Kubernetes opens up a port on them that, when accessed, will redirect traffic directly to your service, and it does not matter which VM IP address you actually go to.
Kubernetes is smart enough to know that there's traffic coming through a specific node port and can redirect traffic accordingly to the underlying pods. This is similar to a ClusterIP service, but along with the service and target port, you will need to specify the node port you want to expose.
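A NodePort sketch, with a hypothetical port in the allowed range discussed next:

```yaml
# Hypothetical NodePort service: every node opens port 30080 and forwards
# matching traffic to the pods on port 5000.
apiVersion: v1
kind: Service
metadata:
  name: cart-nodeport
spec:
  type: NodePort
  selector:
    app: shopping-cart
  ports:
    - port: 3000           # service port
      targetPort: 5000     # container port
      nodePort: 30080      # must fall between 30000 and 32767
```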
Now, there are a few downsides to this, and they are really important. First, you can only have one service per port, which is not very flexible. Second, you can only use ports between 30000 and 32767. That's a wide range of ports, but not really the most standard ones.
Another concern is if your nodes have ephemeral IP addresses. There will be times when you need to update your node pool, and if VMs are cycled, they could lose their IP addresses. If you have DNS name resolution pointed at your underlying nodes, you could lose access.
The last service, and what I think is the most common that you'll see, is the LoadBalancer service. Now the LoadBalancer service is a standard way to expose a service to the internet and it will give you a single external IP address that will forward all traffic directly to your service.
A LoadBalancer service points to an external load balancer that is not within your Kubernetes cluster, but rather exists elsewhere within your Google Cloud infrastructure. Just like the ClusterIP and NodePort services, you will need to provide at least one service port and one target port, but you're not limited to one.
The beauty of a LoadBalancer service is that you can send almost any kind of traffic to it, whether that be HTTP, TCP, UDP, WebSockets, gRPC, really anything. As long as you have an exposed port and a target port, it can handle the traffic. The downside to this approach is that each service you expose with a LoadBalancer gets its own IP address, which might not exactly be what you want.
Whatever your use case, just remember that as your services grow, you will have to pay for a LoadBalancer for each service, and that can start to get expensive. One of the most common use cases I see for utilizing a LoadBalancer service is with an SSL passthrough. I was recently working on a project where the developers wanted end-to-end encryption from the web interface to their Node.js application. A lot of times what you see is that SSL communication is terminated at the LoadBalancer level and then the unencrypted traffic is passed to your application through reverse proxies.
Now, there's nothing wrong with that, but one of the requirements was that any traffic on port 443 needed to be decrypted by the Express.js server rather than the LoadBalancer. The LoadBalancer service accepted all the HTTPS traffic on port 443 and passed it directly to the Express.js server, also listening on port 443, so the decryption happened at the application level rather than at the LoadBalancer level. Consider this if you're looking to use end-to-end SSL encryption within your application.
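A LoadBalancer service along the lines of that passthrough setup might look like the following sketch (the service name and selector are assumptions, not details from the project described above):

```yaml
# Hypothetical LoadBalancer service passing HTTPS straight through on 443
# so the application terminates TLS itself.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer
  selector:
    app: express-frontend
  ports:
    - name: https
      port: 443            # external port on the load balancer
      targetPort: 443      # the Express.js server also listens on 443
```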
So now that you have an understanding of the different types of services Kubernetes offers, it's time to talk about Ingress. First and foremost, Ingress is not a service. Instead, Ingress sits in front of multiple services and handles the routing based on the policies or rules you provide it.
Ingress is only compatible with LoadBalancer and NodePort services in your cluster. Remember, ClusterIP services have no external communication. Routing within Ingress can be based on any path or any subdomain of your choosing. By default, there must be some type of catch-all service.
If a request comes in that the Ingress LoadBalancer does not recognize, there must be some service to handle it. The Ingress LoadBalancer can handle SSL and TLS for you if you provide it with your certificates. In contrast to the previous example, with Ingress there is SSL termination at the LoadBalancer level, and there is not a way to do an SSL passthrough like before.
Let's take a look at another diagram. You have your traffic coming into a single point, probably some type of DNS name, say api.myapp.com, and that request comes into the Ingress. It looks at the incoming request and, based on the rules, forwards it to the proper service for handling that subdomain. If routing is based on a URI, for example /sales or /users, it will still forward the traffic accordingly. Any routes that don't match your policies will be sent to your catch-all service.
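Pulling that diagram into a manifest, an Ingress sketch might look like the following (the host, paths, and service names are all hypothetical):

```yaml
# Hypothetical Ingress: route api.myapp.com traffic by path, with a
# catch-all default backend for anything that doesn't match.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  defaultBackend:
    service:
      name: default-handler     # catch-all service
      port:
        number: 80
  rules:
    - host: api.myapp.com
      http:
        paths:
          - path: /sales
            pathType: Prefix
            backend:
              service:
                name: sales
                port:
                  number: 80
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: users
                port:
                  number: 80
```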
Okay, quick recap. Use Ingress to route all HTTP and HTTPS traffic to different Kubernetes services, and these services manage internal and external communication for your deployments. You instruct these deployments to handle pod creation, recovery, and replication with the use of a Horizontal Pod Autoscaler, which is automatically created for you when done through the Google Kubernetes Engine console.
Remember, never create pods individually or manually in production. Always use Deployments or ReplicaSets. You now have all the building blocks you need to deploy a microservice to the Cloud. Next, we're going to solidify these concepts with a use case example.
Calculated Systems was founded by experts in Hadoop, Google Cloud, and AWS. Calculated Systems enables code-free capture, mapping, and transformation of data in the cloud, based on Apache NiFi, an open source project originally developed within the NSA. Calculated Systems accelerates time to market for new innovations while maintaining data integrity. With cloud automation tools, deep industry expertise, and experience productionalizing workloads, development cycles are cut down to a fraction of their normal time. The ability to quickly develop large-scale data ingestion and processing decreases the risk companies face in long development cycles. Calculated Systems is one of the industry leaders in Big Data transformation and education of these complex technologies.