
Sentinel Design

Overview

Difficulty: Beginner
Duration: 30m
Students: 30
Rating: 5/5

Description

Sentinel is a fairly easy-to-understand language and framework for implementing Policy as Code in your organization, allowing for a large breadth of disciplines to be involved in the policymaking process.

In this introductory course, we’ll cover what Sentinel is through a few different pillars: The Why, The How, and The When of Sentinel.

If you have any feedback relating to this course, please let us know at support@cloudacademy.com.

Learning Objectives

By the end of this course, you will have learned:

  • The Why
    • Why was a Policy as Code framework like Sentinel developed, and what does it aim to solve?
  • The How
    • How is Sentinel designed? What encompasses it as a language and framework?
  • The When
    • What are some ideal situations in which Sentinel should be implemented?

Intended Audience

  • Managers
  • DevOps Engineers
  • Security Engineers
  • Cloud Engineers

Prerequisites

To get the most out of this course, you should have:

  • Familiarity with Infrastructure as Code
  • Some programming experience
  • Familiarity with organizational policies

Resources

Hashicorp.io - Sentinel Documentation

Roger Berlind’s Common Functions

Cloud Academy GitHub Repo

Transcript

HashiCorp Sentinel, what is it? Well, it's a policy as code framework and works out of the box with HashiCorp's Consul, Nomad, Terraform (which we'll be looking at today), and Vault. It is an enterprise offering, so it's not open source, but if we read some more, there are Terraform cost estimators built into the Sentinel policy runtime, which means that the enterprise offering we're paying for can quickly be recovered through building policies that estimate costs for our Terraform resources. And lastly, it can be executed both in the cloud and locally.

Well, let's talk about the why. Sentinel, as a policy as code framework, abstracts away GUI-based and outdated policy systems, whether those live in a wiki, in tribal knowledge, or in a few filing cabinets. And it provides all the benefits that infrastructure as code does, such as codification, version control, testing, and automation. Let's explore those features a little bit more, starting with codification.

Here we have codification similar to infrastructure as code, but for policy as code. So our information and logic represent actions that can or cannot be acted upon. From there, we can commit that code to a version control system: we have simple text files that can be committed to a VCS such as GitHub. We have robust testing, with the ability to test locally and remotely, providing us with a continuous integration workflow as well as the ability to create fake testing mocks, which we'll explore later.

All of those previous features allow us to have a robust framework for automation, with reusable policies and integrated workflows, creating the ability to follow our best practices automatically. Let's get into the meat of Sentinel now, starting off with an overview, in which we're gonna be talking about the Sentinel language itself.

We're gonna be covering the basics, which are policies, rules, a configuration file, modules, and imports. And we're gonna be talking about some familiarities, specifically programming-language and logical familiarities. From there, we're gonna be talking about development, specifically with the CLI and testing, and finally the cloud, with policy checks on our git commits.

So let's talk about the language now. Well, technically speaking, Sentinel is UTF-8 encoded. It was written for integration with Go, and it's embedded in HashiCorp products, which means it's incredibly fast. Conceptually and practically, it takes less than an hour to learn and less than two to write. And it's incredibly non-programmer friendly, which means that we can have logicians, compliance officers, security officers, and the like involved in the process.

We also have an incredibly programmer-friendly interface. It's incredibly extensible, with its own host of plugins, standard imports, and built-in functions. Lastly, the language operates on a pass/fail philosophy known as enforcement levels. Let's take a look at those enforcement levels now.

The enforcement levels, again, operate on a pass/fail philosophy. Our first one is called advisory: this policy can fail, but a warning needs to be shown or logged. Next we have soft mandatory, which means this policy has to pass unless there's an override implemented. This can be a manual intervention from an administrator with the correct permissions, allowing the policy check to fail and the deployment of your Terraform resources to continue.

Lastly, we have hard mandatory: this policy check must pass, and it's only skippable by the removal of the policy itself. Hard mandatory is the default enforcement level for all policies. Policy as code would be nothing without the policies themselves. Policies are singular files ending in .sentinel, and it's best to have a one-to-one mapping between your policies and the certain behaviors they address.

Policies are groupings of logical expressions, for example, three is greater than two, and can include logical operators such as is, in, any, all, and matches. They're executed top-down, they make strong use of variables, and they require main. Main is an evaluative variable that is checked for passing or failing conditions, and you can see passing and failing conditions for certain types of main to your right.

Main is evaluated on the execution of the policy, not before or after. At the top of our policies, it is highly encouraged to explain in a straightforward manner what the policy accomplishes. This description will also print out on failures, such as: this policy evaluates 10 divided by five and has no rules included. The body of this policy usually contains the logic of said policy, meaning we're going to be assigning the basic arithmetic variable to: 10 divided by five is two.

After that, the end of our policy calls main to evaluate what logic we would like to see invoked. So main is basic arithmetic. This would evaluate to true, as one of our previous conditions has shown. This is how the policy would look in my favorite code editor, VS Code. At the top, we have our comment. Then we have our basic arithmetic variable, followed by the logic involved. And finally, we evaluate main. If we were to apply the Sentinel CLI, such as sentinel apply easy-policy.sentinel, we would get a pass, meaning that main has passed. This next one is a much more complex policy.
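Based on the description above, the simple policy might look like this (the file and variable names follow the transcript's wording):

```sentinel
# This policy evaluates that 10 divided by 5 is 2; it has no rules.
basic_arithmetic = 10 / 5 is 2

# main is evaluated when the policy executes; here it passes.
main = basic_arithmetic
```

Running sentinel apply against a file like this would report a pass, since the expression evaluates to true.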

This policy evaluates S3 buckets that are going to be created from a Terraform plan and requires them to have their access control list attribute set to private. If not, manual approval is required for the bucket to be created, which means that we have a soft mandatory enforcement on this policy. We'll dissect this policy and others later. Within our policies, we have rules. Rules are denoted by the keyword rule, followed by curly brackets containing their logic, and a closing curly bracket to close out the rule. They're highly encouraged for testing purposes, which means we can individually test each rule's result.

Once we get to the Sentinel CLI, that is. Policies can have several rules, and rules are only evaluated when they're required or called, not when they're created. So even though policies are executed top-down, until a rule is called, it is not evaluated. So main calls that rule, and even if it is specified before main, it is not evaluated until main is. Rules are computed once and saved, and once they're computed, that value stays the same. So if you have a time rule at the top of the policy that evaluates some logic based off of time, once it is called, it will execute at that time and keep that saved result from then on. And finally, a rule must return a boolean, string, integer, float, list, or map, else a runtime error will occur.
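A small sketch of that lazy, compute-once behavior, using the standard time import (in Sentinel's time import, weekday 0 is Sunday and 6 is Saturday):

```sentinel
import "time"

# A rule is not evaluated where it is defined; it runs on first
# reference (here, from main) and its result is then cached.
is_weekday = rule { time.now.weekday not in [0, 6] }

main = rule { is_weekday }
```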

So let's take a look at that S3 bucket policy, but with multiple rules involved. Here we have the exact same policy, except we're defining two new variables with two rules at the top. They require that it's work hours and work days, specifically looking between the times of 6:00 AM and 5:00 PM on a weekday, Monday through Friday.
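A sketch of what those two rules might look like, using the standard time import (the exact attribute names in the course's policy may differ):

```sentinel
import "time"

# Work hours: between 6:00 AM and 5:00 PM.
work_hours = rule { time.now.hour >= 6 and time.now.hour < 17 }

# Work days: Monday through Friday.
work_days = rule {
    time.now.weekday_name in ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
}

# main is itself a rule, requiring that all rules return true.
main = rule { work_hours and work_days }
```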

At the bottom, main calls those rules, defined as a rule itself, requiring that all rules return true. The Sentinel configuration file is denoted by sentinel.hcl. It specifies our policies and their location, whether that be remote or local, along with the corresponding enforcement level for those policies. It can also include mock information, which is fake data for testing purposes. It specifies our modules and their location, along with other important information; these can be either reusable code or rules. It also specifies globals, which are globally scoped variables that do not require an import.

Finally, we have parameters, and these can be passed in on execution of a policy. So let's take a look at all of these in some examples now. Starting with our policies: here we have three policies, each defined by a policy name, source, and enforcement level. You'll see that we have three different enforcement levels and three different sources, all of them local.

What if we wanted to define a remote source? Well, that's just as easy as prepending the specific protocol that we're going to be using to grab that policy. On the top-level policy, you see that we specify a git source. From there, we have our first module. This is a remote module that I'm importing as the latest tfplan functions, followed closely by the local version of that.

So if we wanted to make local changes to our tfplan functions module, we could do so as such. Here we have mock; a mock acts as an import when we import it into Sentinel policies. So if we typed in import time, we would get time, but we would only get this mock data that we've configured in our sentinel.hcl file. This means that if we were to call upon time, we would only get the one data point of noon. After that, we have work, which includes our values of weekday followed by days. This means that we're going to be working within Monday through Friday.

And lastly, we have our parameter of tired, which just evaluates to yes. Let's talk about modules and imports, as they serve a common purpose of speeding up Sentinel policy writing. They're defined in the configuration file, as we previously saw, and they reuse code as an import: when we say import plan, we are importing that module. They can be local or remote, they do not require main, they cannot use parameters, and they can be allowed or denied. There is already a lengthy list of imports available to us, such as base64, decimal, http, json, time, strings, and more.
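Pulling the pieces from the last few slides together, a sentinel.hcl configuration might look like this (all names and paths here are illustrative, not the course's actual files):

```hcl
# Local policy with its enforcement level.
policy "restrict-s3-acl" {
  source            = "./restrict-s3-acl.sentinel"
  enforcement_level = "soft-mandatory"
}

# A remote source just prepends the protocol used to fetch it.
policy "remote-policy" {
  source            = "https://example.com/policies/remote-policy.sentinel"
  enforcement_level = "advisory"
}

# Reusable module, importable from policies as "tfplan-functions".
module "tfplan-functions" {
  source = "./common-functions/tfplan-functions.sentinel"
}

# Mock data replaces the real import during local testing:
# `import "time"` would only see this one data point of noon.
mock "time" {
  data = {
    hour = 12
  }
}

# Globally scoped variable; no import required in policies.
global "weekdays" {
  value = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
}

# Parameter passed in on policy execution.
param "tired" {
  value = "yes"
}
```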

Let's talk about some programming familiarities, starting off with built-in functions. These are available in all policies. Starting with append, which allows us to add an item to the end of a list. Next we have delete, which removes an element from a map. From there we have error, which allows us to raise an error with a message. Keys allows us to return the keys of a map. Length allows us to return the length of a string or collection. Print allows us to include logging or debugging. Range allows us to return a list of numbers in a range, and finally, values returns the values of a map.
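A quick sketch exercising several of those built-ins in one policy:

```sentinel
servers = {"web": "10.0.0.1", "db": "10.0.0.2"}
names = keys(servers)     # the map's keys
count = length(names)     # length of a collection

delete(servers, "db")     # remove an element from the map

ports = [80, 443]
append(ports, 8080)       # add an item to the end of the list

print("checking", count, "servers")  # logging/debugging output

main = length(ports) is 3
```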

So what else can we do with the Sentinel language? Well, for starters, we can loop, we can create custom functions, we can slice, and we can have conditionals such as if, else, else if, when, and case. We also have the option of including maps, lists, logging and errors, and finally plugins, for which Sentinel has its own SDK, available for writing in any language. If you wanna develop Sentinel, the Sentinel CLI is the way to go. Starting off, it's invoked with sentinel. It has three supported sub-commands, starting with apply, which just executes the policy at the local level.

Next, we have test, which requires a path for testing conditions. Last, we have format, which formats all policy files to a canonical format. Let's take a look at apply. Our apply options are as such: color, which includes standard output color on the apply run; config, with which we can choose which policy configuration we want to run; and JSON, with which, instead of the standard apply output, we get JSON output. And if we want to iterate further, we can use the JSON rule option, where we can specify the specific rule.

Next, we can pass in globals, with global, a key, and then that value, and the same is true for parameters. And lastly, we have trace. It's always shown for failures, but I recommend using trace always, because we get more information, which is always better for determining how our policy is actually behaving.

Similar to apply, we have test, and the test options are: color; JSON; run, which is similar to config, except we can specify which test conditions we want to see; and verbose, where we can see the whole ball of wax of what's going on when we're testing our policies. There are some requirements with test, such that a test file path has to exist. So for example, this would look like test/ followed by our policy name, and then our testing conditions written in either HashiCorp Configuration Language or JSON.
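For a policy named restrict-s3-acl.sentinel, a passing test case might live at test/restrict-s3-acl/pass.hcl (names and paths are illustrative):

```hcl
# Mock data standing in for the real tfplan/v2 import.
mock "tfplan/v2" {
  module {
    source = "../../mocks/mock-tfplan-pass.sentinel"
  }
}

# The test criteria: we expect main to evaluate to true.
test {
  rules = {
    main = true
  }
}
```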

I should note that HashiCorp Configuration Language should be the default, as JSON will only be supported for a limited amount of time. So we're gonna be covering an example. In this example, we're going to have a policy that restricts Kubernetes Deployments that are not appropriately configured for being deployed into a production environment with a namespace that begins with prod-. We're gonna be leveraging existing imports, such as the common functions modules that we've been seeing throughout this course, and we're gonna be creating two similar deployments as test cases: one with a correct namespace prefix and one without. They should each match the test criteria they'll be checked against: one we'll expect to pass and one we'll expect to fail. However, both of these tests should pass, because their main variables will be expected to pass and to fail, respectively. And our mocks were generated with Terraform Enterprise.

So let's jump into that example, starting off with our configuration file. Here, we have our first policy, which is just checking against our Kubernetes namespace, with our source local and our enforcement level advisory. Next, we're importing those common functions that are also useful to us, with the module tfplan and a local source. Here is our fail JSON.

At the top, we have the modules that it relies on to check the policy against. Next we have our mock, which is the mock Terraform plan data that it'll check against. From there, we have tests, and we expect this test evaluation for main to be false. The pass file is the exact same: we have the same modules, and our mock is slightly different in that it references a different pass mock data file.

And finally, we have our tests, and we expect main to return true. Here is our fail mock file. You can see that we have our Terraform version at the top, followed by our resource changes, and at the bottom, we have our namespace, which is just testing-server. Here is the same, but for the pass condition, with our namespace beginning with prod-: prod-frontend. And here is our policy.

At the top, we import our functions. We grab all the deployments. We define the namespace variable that we're going to be searching through within that Terraform plan mock data, and we have our prefix of prod-. After that, we have an improper namespace variable, which filters all the attributes that do not have the prefix, using the above variables.

Next, we have our violations, which count the improper namespace messages. The improper namespace variable returns two maps; one is messages, which holds our violating messages, such that if any are raised, something is wrong and the policy check should fail. Finally, we evaluate against main, which ensures that our violations are zero.
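Pieced together from the description above, the policy's shape is roughly as follows. The helper function names here are illustrative stand-ins in the style of Roger Berlind's common functions, not the course's exact code:

```sentinel
import "tfplan-functions" as plan

# Grab all Kubernetes Deployments from the Terraform plan data.
deployments = plan.find_resources("kubernetes_deployment")

prefix = "prod-"

# Filter deployments whose namespace attribute lacks the prefix.
# The filter returns two maps: "resources" and "messages".
improper_namespace = plan.filter_attribute_does_not_have_prefix(
    deployments, "metadata.0.namespace", prefix, true)

# Count the violating messages; any violation means the check fails.
violations = length(improper_namespace["messages"])

main = rule { violations is 0 }
```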

If we invoke sentinel test and specify the run of kubernetes, which is just our Kubernetes namespace check policy, we'll get a pass on three fronts. The first is the policy pass itself, and the second and third are the test conditions for the fail and pass JSON data. If we want more information, which I always recommend, we append verbose.

So here we get the installation of our modules, the passing for the first policy and the fail JSON, and then we get the corresponding information regarding that fail JSON policy check, with logs and trace. Next we have our pass JSON, which similarly gives us our trace and our value.

Now we're gonna be taking a look at an example of development in the cloud. The cloud is a lot easier, but gets a little bit more time consuming. So we have Terraform Cloud synced to a GitHub repo, so on commits to our Terraform folder, the specified policies will be run. This Terraform folder includes our two previous Kubernetes Deployments, and they are not destined for a Kubernetes cloud provider, so they will not be deployed. These policies are in the same repo and configured as a policy set in Terraform Cloud. The policies that will be run are specified under the sentinel/policies folder, they're enforced on all workspaces, and this enforcement is still advisory. So let's check out that workspace.

This is our workspace. This is where our GitHub repo commits will be checked against and then subsequently have runs executed against them. You can see at the bottom, it is synced to my repo of tsarlewey/Sentinel-Demo. Our policy set is configured within Terraform Cloud. Here I'm just calling it Sentinel-Demo-prod-Kubernetes, and our policy set source is what we previously mentioned, under sentinel/policies, except it's now in a GitHub repository. You can see that it's enforced on all workspaces and it was last updated 30 minutes ago.

Here are our subsequent Terraform Kubernetes Deployments. On the left-hand side, you'll see that it has failed-frontend, and on the right-hand side, we have prod-frontend. After we run a commit on the Terraform folder, our plan is immediately run. You can see here at the top, we have a commit message of remove invalid requests of resources.

You can see that I triggered the run from a GitHub repo a few seconds ago. After this plan completes, we next get into the policy check. As we have an advisory enforcement, this policy check technically passed, but the advisory failed, which means that we have something wrong with our Kubernetes namespaces. Well, we know which ones are wrong, and that's okay. So we're going to go ahead and just discard this run and say that the policy checks are correct, because we have no deployment bound for a cloud provider.

One last thing to talk about with Terraform Enterprise and Terraform Cloud: if we enable cost estimation, we can have cost estimation enforced on all workspaces. This allows us to actually include the tfrun import in our policies, with import "tfrun". This gives us access to the cost estimate data, with which we can then enforce certain costs to be either above or below certain levels in our policies for our Terraform resources. So let's take a look at that now.

This policy does exactly what I previously said. It imports tfrun as run, and then subsequently imports the standard decimal import, establishes a limit and a max percentage increase, and finally computes the cost check into the validated variable, which checks the cost and percentage increase against those limit and max percentage variables defined earlier. Main is run, and depending on whether it's true, this policy will either pass or fail.
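A sketch of such a cost policy, modeled on HashiCorp's published cost-estimation examples; the thresholds are arbitrary and the variable names are illustrative:

```sentinel
import "tfrun" as run
import "decimal"

# Arbitrary thresholds for illustration.
limit = decimal.new(500)
max_percentage_increase = decimal.new(10.0)

proposed = decimal.new(run.cost_estimate.proposed_monthly_cost)
prior    = decimal.new(run.cost_estimate.prior_monthly_cost)
delta    = decimal.new(run.cost_estimate.delta_monthly_cost)

# Percentage increase relative to the prior cost (assumes prior > 0).
percentage_increase = delta.divide(prior).multiply(100)

validated = proposed.less_than(limit) and
            percentage_increase.less_than(max_percentage_increase)

main = rule { validated }
```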

About the Author

Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan has a number of specialties, including Cisco Certified Network Associate (R&S/Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.
