In this introductory course, we take a look at Spinnaker as a whole to understand the concepts that make it excel as a continuous delivery tool. We'll also touch on the microservices architecture that makes Spinnaker a unique asset in your toolkit.
If you have any feedback relating to this course, feel free to reach out to us at email@example.com.
- Define Spinnaker at its most fundamental level
- Explain the two primary concepts that make Spinnaker a strong solution for continuous delivery:
- Application management
- Application deployment
- Explain the service architecture that makes up Spinnaker
- Anyone who is new to Spinnaker
- DevOps engineers and site reliability engineers
- Cloud-native developers
- Continuous delivery enthusiasts
To get the most out of this course you should possess:
- An understanding of continuous delivery and the problems it solves
- A strong understanding of cloud concepts
- Kubernetes knowledge is advised
- Free Spinnaker eBook - https://spinnaker.io/concepts/ebook/ContinuousDeliveryWithSpinnaker.pdf
- Spinnaker Website - Spinnaker.io
- Spinnaker Community - spinnakerteam.slack.com
- Spinnaker Lab - Spinnaker Lab
How is Spinnaker designed? At a high level, Spinnaker can be thought of as a binary architecture. This does not mean that every feature or concept fits neatly into these two models; rather, it is a good way to understand how Spinnaker accomplishes what it intends to accomplish. The first of the two is application management, or how you view and manage the cloud resources your application requires.
The first design category is the infrastructure and management of our applications. Starting from the top down, we have the project, which is a logical grouping and configuration of applications. Projects are optional but highly recommended, so that applications are grouped into their respective projects. Next, we have the application: this is a microservice or app. A best practice is designing a one-to-one parity for each application, meaning you don't nest existing applications under a different parent application.
From there, we have our clusters. These are logical groupings of server groups. Clusters are the command center in the Spinnaker UI: they are where you can gain insight into what your server groups are doing, their health, and various other metadata for your deployment.
Server groups are our base-level resource. This is the lowest level of our Spinnaker environment. A server group is where your artifact resides; it hooks into various cloud providers for persistence and to run the deployed software. Next, we have our load balancers. Load balancers are specified through an ingress protocol and port range. They balance traffic among the instances in their server groups, with the ability to health check and define health criteria for checks on endpoints.
Lastly, we have firewalls, which define network access. Firewall rules are defined by an IP range in CIDR notation, a communication protocol such as TCP or UDP, and a port range. This is a lot of information to take in, and it helps to visualize it, so let's do that now. Here is what an application would look like on a logical level.
First off, we have two applications, which indicates that we are likely under a project that groups them. Zooming in on our yellow square, we see that application two consists of clusters with multiple server groups nested inside them. Our blue circle is the actual cluster: cluster D has two server groups inside it. These server groups can contain a number of instances, pods, or other resources defined for the cloud provider.
Blowing up the cluster to get an inside view, we see first the required load balancer directing traffic to the appropriate server group. This load balancer sends traffic, through an optionally configured firewall, to server groups one and two. The diagram denotes that a blue-green deployment (also called a red-black deployment) is currently happening, and as such the load balancer is moving traffic to a new server group, server group two.
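The traffic shift in that diagram can be sketched in a few lines. This is an illustrative model only, not Spinnaker code: a hypothetical load balancer gradually moving traffic weight from the old server group to the new one during a blue-green (red-black) rollout.

```python
# Hypothetical sketch of a blue-green traffic shift: the load balancer
# moves traffic in increments from the old server group to the new one.
# None of these names come from a real Spinnaker deployment.

def shift_traffic(weights, old, new, step):
    """Move up to `step` percent of traffic from `old` to `new`."""
    moved = min(step, weights[old])
    weights[old] -= moved
    weights[new] += moved
    return weights

# Start: server group one (old) takes all traffic, server group two (new) none.
weights = {"server-group-1": 100, "server-group-2": 0}

# Shift in 25% increments until the new group serves everything.
while weights["server-group-1"] > 0:
    weights = shift_traffic(weights, "server-group-1", "server-group-2", 25)

print(weights)  # {'server-group-1': 0, 'server-group-2': 100}
```

In practice, Spinnaker orchestrates this shift for you through its deployment strategies; the sketch just shows the idea of traffic moving between server groups behind one load balancer.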
This is a great way to get a bird's-eye view of the application management plane. But to further our understanding, let's look at what an actual application looks like in the Spinnaker UI.
Here it is: your single pane of glass. Now, there's a lot going on in this single pane, but we'll just be covering the fundamentals. In the upper left-hand corner we have our application, or microservice, called MD Service. Everything else you see on this page belongs to this application and, consequently, to the project the application is under. Moving down the left-hand side, we have drop-down menus to filter and display relevant metadata about our application. Here, we see we have an account configured, a region left at its default, identifiable stacks of deployment, various health stages of the deployed infrastructure, an availability zone if one were configured, instance types if defined, and an instance count.
Jumping to our center view, we have three menu items along the top: Pipelines, where our pipelines pane of glass lives; Clusters, where this pane of glass lives; and finally, a Tasks section to view relevant automated work that has been performed for the application, such as the creation of instances, pipeline stages being completed, and notifications being sent.
Additionally, in our center view, we have our defined clusters. These clusters are broken down by their respective deployments, prod and stage. In prod, we are given the associated account and number of instances. Similarly, our stage environment has several server groups, but only one of them is active, with two instances in it.
Moving to our right-hand side, we have our defined provider, which is Kubernetes. The quick-action item under Server Group Actions lets us leverage built-in actions to quickly address issues or alter the state of our selected server groups. Dropping down, we have the health of the 128 pods, or instances; the defined deployment, if configured; the defined replicas for the server group; and lastly, our Kubernetes horizontal pod autoscaling data. We will cover the similarly designed pipeline view in a later video, so don't worry about that just yet.
Now, I know this is a lot to take in. Like any GUI or service, it can be overwhelming at first, but as time moves on you will gain confidence, speed, and control. Okay, let's review what we covered in this lecture. First, we started off with projects. These are our logical groupings and configurations for our applications. They're highly recommended when dealing with lots of applications, so that they can be grouped into their respective projects.
Next, we covered applications. An application is our microservice or app, and the best practice is designing a one-to-one parity for each application, meaning we're not going to be nesting multiple applications under one application in Spinnaker.
From there, we talked about clusters. Clusters are our logical groupings of server groups, and they are the command center in the Spinnaker UI. Clusters are where we can gain insight into what our server groups are doing, their health, and various other metadata for our deployment.
Then we defined server groups. Server groups are our base level resources, such as a pod in Kubernetes, or an instance in a specified cloud provider. Server groups are where our artifacts reside and hook into these cloud providers.
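The top-down hierarchy we've just recapped, project to application to cluster to server group, can be sketched as nested data. This is a hypothetical illustration, not Spinnaker's internal data model; every name below is made up.

```python
# Hypothetical sketch of Spinnaker's application-management hierarchy:
# a project groups applications, an application groups clusters, and a
# cluster groups server groups (the base-level resource holding your
# artifact). All names here are invented for illustration.

project = {
    "name": "example-project",
    "applications": [
        {
            "name": "application-two",
            "clusters": [
                {
                    "name": "cluster-d",
                    "server_groups": [
                        {"name": "server-group-1", "instances": 2},
                        {"name": "server-group-2", "instances": 2},
                    ],
                }
            ],
        }
    ],
}

def count_instances(project):
    """Total instances across every server group in the project."""
    return sum(
        sg["instances"]
        for app in project["applications"]
        for cluster in app["clusters"]
        for sg in cluster["server_groups"]
    )

print(count_instances(project))  # 4
```

Walking the structure this way mirrors what the Clusters view does in the UI: rolling server-group detail up into cluster, application, and project summaries.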
Next, we covered load balancers, which are specified by an ingress protocol and port range. They balance traffic amongst our server groups, with the ability to do health checks as well as define custom health criteria for checks on those endpoints.
Lastly, we defined our network access with firewalls. Firewall rules are defined by an IP range in CIDR notation with a communication protocol and port range attached.
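To make the firewall recap concrete, here is a minimal sketch, using Python's standard ipaddress module, of what such a rule evaluates: an IP range in CIDR notation plus a protocol and port range. The rule shape and the `matches_rule` helper are hypothetical, not Spinnaker's API.

```python
# Hypothetical firewall rule check: a rule pairs a CIDR range with a
# protocol and a port range, and a connection matches only if it falls
# inside all three. Uses only the standard library's ipaddress module.
import ipaddress

def matches_rule(rule, ip, protocol, port):
    """True if the connection is inside the rule's CIDR, protocol, and ports."""
    in_range = ipaddress.ip_address(ip) in ipaddress.ip_network(rule["cidr"])
    low, high = rule["ports"]
    return in_range and protocol == rule["protocol"] and low <= port <= high

rule = {"cidr": "10.0.0.0/24", "protocol": "TCP", "ports": (80, 443)}

print(matches_rule(rule, "10.0.0.17", "TCP", 443))    # True
print(matches_rule(rule, "192.168.1.5", "TCP", 443))  # False: outside the CIDR
```

In Spinnaker you declare these rules through the UI or your cloud provider; the sketch only shows the matching logic a rule implies.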
When you're ready, join me in the next lecture where we will cover the partner to application management, application deployment. Hope to see you there.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations, and served as a Cloud Engineer supporting developer teams. Jonathan has a number of specialties, including: Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.