In this introductory course, we take a look at Spinnaker as a whole to understand the concepts that make Spinnaker excel as a continuous delivery tool. We'll also touch on the microservices architecture that makes Spinnaker a unique asset in your toolkit.
If you have any feedback relating to this course, feel free to reach out to us at email@example.com.
- Define Spinnaker at its most fundamental level
- Explain the two primary concepts that make Spinnaker a strong solution for continuous delivery:
- Application management
- Application deployment
- Explain the service architecture that makes up Spinnaker
- Anyone who is new to Spinnaker
- DevOps engineers and site reliability engineers
- Cloud-native developers
- Continuous delivery enthusiasts
To get the most out of this course you should possess:
- An understanding of continuous delivery and the problems it solves
- A strong understanding of cloud concepts
- Kubernetes knowledge is advised
- Free Spinnaker eBook - https://spinnaker.io/concepts/ebook/ContinuousDeliveryWithSpinnaker.pdf
- Spinnaker Website - Spinnaker.io
- Spinnaker Community - spinnakerteam.slack.com
- Spinnaker Lab - Spinnaker Lab
Now that we've talked about application management, let's talk about application deployment: the set of features for constructing and managing our continuous delivery workflows. Application deployment all begins with the pipeline. A pipeline is our sequence of actions.
The pipeline is to application deployment what the server group is to application management. It serves as the fundamental resource for deployment and consists of several stages that can pass parameters to one another upon completion. Pipelines can be run manually or triggered by another source.
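To make this concrete, here's a minimal sketch of what a pipeline looks like under the hood. Spinnaker stores pipelines as JSON; the application name, stage types, and values below are hypothetical placeholders, not taken from this course:

```json
{
  "application": "myapp",
  "name": "Deploy to Prod",
  "triggers": [],
  "stages": [
    { "refId": "1", "type": "findImage", "name": "Find Image", "requisiteStageRefIds": [] },
    { "refId": "2", "type": "deploy", "name": "Deploy", "requisiteStageRefIds": ["1"] }
  ]
}
```

Each stage's `requisiteStageRefIds` lists the stages that must complete before it runs, which is how the sequence of actions is encoded.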
Next, we have our stages. Stages are collections of tasks to perform, and a task is an automated function that abstracts away cloud-specific actions. There currently exists a wide variety of stages depending on your provider, which you define in the setup of Spinnaker. These can range from running a generic script, to scaling a Kubernetes object via its manifest, to creating a whole suite of infrastructure in AWS.
Stages can be performed linearly or in parallel. Should you need to, you can also create custom stages to suit your needs using the provided REST API.
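As a rough illustration of how a stage bundles tasks behind a single cloud-specific type, here are two hypothetical stage definitions — one running a generic script, the other deploying a Kubernetes manifest. The names and account are placeholders, not from this course:

```json
[
  {
    "refId": "2",
    "type": "script",
    "name": "Run smoke tests",
    "requisiteStageRefIds": ["1"]
  },
  {
    "refId": "3",
    "type": "deployManifest",
    "name": "Scale via manifest",
    "account": "my-k8s-account",
    "requisiteStageRefIds": ["2"]
  }
]
```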
Next, we have our deployment strategies. I like to think of these as asking: how safe do we really want to be? Deployment strategies are exactly what they sound like. They determine how our deployment is going to be fulfilled. Should it require a health check, use a Canary to determine success rate, or should a manual operation be involved?
Now, this is a lot of information to take in, and it helps to visualize it, as we did with application management. So let's do that now. Here's a quick refresher on the deployment strategies that Spinnaker natively supports and recommends.
At the top we have our red/black deployment, also called blue/green, where we stand up an exact replica of our environment and slowly direct traffic toward it while draining the old one. Next, we have our rolling red/black deployment. This stands up an exact replica over time, with an optional validate step between the two. After we validate the deployment, the rollout begins slowly, and the new production environment will take on new traffic.
Last, we have our Canary deployment. The Canary deployment employs whatever deployment you specify in the end, but forces a small percentage of traffic toward the new deployment before committing to it. With Canary deployments, it's typical to wait and analyze the results from the Canary to make sure the new deployment, with the new code rolled out, is production ready.
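In pipeline JSON, the strategy is typically just a field on the deploy stage's cluster configuration. Here's a rough sketch with placeholder names (the exact fields can vary by provider and Spinnaker version):

```json
{
  "type": "deploy",
  "name": "Deploy to prod",
  "clusters": [
    {
      "account": "prod",
      "application": "myapp",
      "strategy": "redblack"
    }
  ]
}
```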
Let's continue on and take a look at a logical pipeline now. In this pipeline blueprint, we have seven defined stages, including two pairs of stages that run in parallel.
At the beginning of our pipeline, we have a trigger that starts the whole process. Triggers can be time-based (manual or cron) or event-based, such as a Git commit, a CI tool, a push to a Docker repo, another pipeline, or traditional Pub/Sub.
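For illustration, here's roughly what a cron trigger and a Docker registry trigger might look like in pipeline JSON. The account and repository names are placeholders, and Spinnaker's cron triggers use a Quartz-style expression:

```json
"triggers": [
  {
    "type": "cron",
    "cronExpression": "0 0 4 * * ?",
    "enabled": true
  },
  {
    "type": "docker",
    "account": "my-registry",
    "repository": "myorg/myapp",
    "enabled": true
  }
]
```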
In our second stage, we are finding an artifact named image from an account called TEST. This image is then deployed as a CANARY deployment, where only a percentage of users or services will consume it. From that CANARY deployment, we reach the first parallel stages in our pipeline.
Now there are two things happening in this parallel stage. First, there's a manual approval process requiring human intervention, likely analysis. Second, 30 minutes must have elapsed since the previous stage. Both of these conditions must be met in this example pipeline before moving on.
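The fan-out/fan-in shape of this step can be sketched in pipeline JSON: both parallel stages depend on the Canary stage (refId "3" here, a placeholder), and the next stage lists both of them as prerequisites. A wait stage's `waitTime` is given in seconds:

```json
[
  { "refId": "4", "type": "manualJudgment", "name": "Approve canary", "requisiteStageRefIds": ["3"] },
  { "refId": "5", "type": "wait", "name": "Wait 30 minutes", "waitTime": 1800, "requisiteStageRefIds": ["3"] },
  { "refId": "6", "type": "deploy", "name": "Red/black to prod", "requisiteStageRefIds": ["4", "5"] }
]
```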
After the previous stage is completed, the deployment shifts into a red/black deployment. This means we're standing up an exact replica of our PROD environment with the new code to make sure that it's going to work. From there, we move to our next parallel stages. These will tear down our CANARY and wait two hours to ensure that our red/black deployment is sufficient before moving forward and destroying the old PROD environment.
Once the above stages are met, our pipeline is completed. This completes the circle of life for this Spinnaker pipeline. As we did with clusters, let's take a deeper look at the Pipeline UI and dissect it, even jumping into a specific pipeline. Here's the first half of our Pipelines pane of glass. I say first half because we'll be taking a deeper look into the Promote to Prod pipeline in a minute.
Pulling it apart, we have our application name in the upper left-hand corner. And similar to our Clusters pane, we can filter our centerfold view by metadata, such as pipelines and the status of those pipelines. In our centerfold view, we have three individual pipelines that we've created.
There's our Docker Registry pipeline, which likely pushes an image to a Docker registry; Deploy to Stage; and Promote to Prod. We're able to see and select specific stages to gain more information, called stage details. These include, but are not limited to: the steps involved, which break stages down further; the duration of those steps; the full duration of the stage; a glance at stage configuration, such as deployment configuration and task status; and whether the stage succeeded or failed.
Here is the second half of the pane of glass for pipelines. This view shows a pipeline called Promote to Prod in the mdservice application. Currently, we're narrowed in on the Deploy to Prod stage. Now that we've selected the stage, additional data is revealed to us, such as the stage name and which stage this stage depends on before being allowed to carry on.
Since this stage acts as a deployment, there is a deployment configuration pane that holds our cloud deployment data. Specifically, we're deploying to Kubernetes on the mdservice prod cluster with a red/black strategy and a defined capacity.
After we define the action that should be taken, Spinnaker allows us to define how the stage should execute. Currently, if the stage fails, it will stop the pipeline from continuing, but we have the ability to do more. We can set a window in which this stage will execute, perhaps during off hours when there's lower traffic; we can increase the timeout for longer-running processes before the stage fails the pipeline; or we can specify that the stage should only carry on if a defined expression returns true.
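These execution options also live on the stage JSON. Here's a hedged sketch of the fields involved — the window hours and the expression are placeholder values:

```json
{
  "failPipeline": true,
  "restrictExecutionDuringTimeWindow": true,
  "restrictedExecutionWindow": {
    "whitelist": [{ "startHour": 1, "startMin": 0, "endHour": 5, "endMin": 0 }]
  },
  "overrideTimeout": true,
  "stageTimeoutMs": 3600000,
  "stageEnabled": {
    "type": "expression",
    "expression": "${trigger['branch'] == 'main'}"
  }
}
```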
Lastly, we have configuration options for notifications, which out of the box support email, Microsoft Teams, Slack, SMS via Twilio, webhooks, and the ability to stream events to a downstream listener. Let's review what's important in application deployment, starting with pipelines.
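Notification settings attach at the pipeline or stage level. As a rough example (the channel name is a placeholder):

```json
"notifications": [
  {
    "type": "slack",
    "address": "#deployments",
    "level": "stage",
    "when": ["stage.complete", "stage.failed"]
  }
]
```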
Pipelines are a sequence of actions. They serve as the fundamental resource for deployment and consist of stages, and they can be run manually or triggered by another source. Stages are collections of tasks to perform; tasks are automated in nature and abstract away many of the API calls to cloud providers. Stages can be performed linearly or in parallel, and you can create custom stages to suit your needs.
Next, we have deployment strategies, where we determine how safe we want our deployment to be. Should it require a health check, use a Canary to determine success rate, or should there be a manual operation involved? Finally, we took a look at the Spinnaker pipeline UI and how a pipeline acts.
That's it for this lecture. When you're ready to continue on, we're going to be looking at how Spinnaker is architected, and looking at these microservices that comprise it. Look forward to seeing you there.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan has a number of specialties, including the Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, and AWS Solutions Architect certifications, and he is certified in Project Management.