Developed with HashiCorp

Contents

Vault Introduction
Vault Review
Overview
Difficulty: Intermediate
Duration: 1h 45m
Students: 357

Description

HashiCorp Vault provides a simple and effective way to manage security in cloud infrastructure. The HashiCorp Vault service secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing.

This course will enable you to recognize, explain, and implement the services and functions provided by the HashiCorp Vault service.

 

Agenda
In this course, we learn to recognize and implement the core HashiCorp Vault services in cloud infrastructure. The topics we cover are as follows:

  • Vault architecture and its core components
  • Vault policies and how they are used to grant or forbid access to operations in Vault
  • Secrets and secret management as performed within Vault
  • Vault cubbyholes and how they can be utilized
  • Vault dynamic secrets
  • Vault authentication and Vault identities

 

Intended Audience

This course will appeal to anyone looking to extend their knowledge of cloud security best practices, and to learn more about the tools and services available to help manage cloud security. If you are performing any of the roles below, we recommend completing this course. 

  • Architects and developers
  • System administrators
  • Security specialists
  • DevOps specialists
  • Anyone else interested in managing and maintaining secrets

Learning Objectives

At the end of this course you will be able to explain and implement the HashiCorp Vault service, and you will also be able to use the Vault CLI and API to execute tasks related to Vault administration. By completing this course, you will:

  • Understand the core principles of Vault, including how Vault can be used to manage and maintain secrets
  • Understand the key benefits of using Vault, including how to deploy and configure it within your own environments
  • Be able to evaluate and select HashiCorp Vault services
  • Know how to use the Vault CLI and API to execute tasks related to administration and configuration

Prerequisites

We recommend completing the Cloud Academy DevOps Fundamentals Learning Path so you have a basic understanding of system administration and configuration tasks.

 

Format

This course includes approximately 1.5 hours of high-definition video, split across 9 lectures.

 

Feedback

We welcome all feedback. Please send any comments or questions on this course to us at support@cloudacademy.com.

Transcript

Welcome back!

In this lecture we'll introduce you to Vault secret engines and the different types that ship with the Vault server. Vault secret engines are components that store, generate, or encrypt data and, as you'll see, they are incredibly flexible. The agenda for this lecture includes the following topics: the motivation for Vault secret engines and their intended purposes; the secret engines supported out of the box; the secret engine lifecycle; managing secret engines; and the Key/Value secret engine for storing sensitive static data.

In this section we'll start with an introduction to secret engines. In the context of Vault, secret engines are the components responsible for managing secrets. Secrets are pieces of sensitive information that could be used to access infrastructure, resources, or data. Some secret engines simply store and read data, much like an encrypted Redis or Memcached. Others connect to external services and generate dynamic credentials on demand. Still others provide encryption as a service, Time-Based One-Time Password generation, certificates, and much more. Vault comes with a number of secret engines bundled. The Key/Value and Cubbyhole secret engines are enabled by default and cannot be disabled, while the other secret engines must be enabled before they can be used.

Let's now cover each of the available secret engines:

  • Cubbyhole: stores arbitrary secrets within the configured physical storage for Vault, namespaced to a token. Paths are scoped per token.
  • Key/Value: stores arbitrary secrets within the configured physical storage for Vault. Also known as generic secrets.
  • AWS: generates AWS access credentials dynamically based on IAM policies.
  • Consul: generates Consul API tokens dynamically based on Consul ACL policies.
  • Database: generates database credentials dynamically based on configured roles. This engine has database-specific plugins for Cassandra, HanaDB, MongoDB, Microsoft SQL Server, MySQL, MariaDB, PostgreSQL, Oracle, and custom databases.
  • Identity: the identity management solution for Vault; it internally maintains the clients that are recognized by Vault.
  • Nomad: generates Nomad API tokens dynamically based on pre-existing Nomad ACL policies.
  • PKI: generates dynamic X.509 certificates.
  • RabbitMQ: generates user credentials dynamically based on configured permissions and virtual hosts.
  • SSH: provides secure authentication and authorization for access to machines via the SSH protocol. Supported modes are signed SSH certificates and one-time SSH passwords.
  • TOTP: generates time-based credentials according to the Time-Based One-Time Password, or TOTP, standard.
  • Transit: handles cryptographic functions on data in transit.

Secret engines must be enabled at a path so that requests can be routed to them. The enable operation enables a secret engine at a given path; with few exceptions, secret engines can be enabled at multiple paths, and each secret engine is isolated to its path. By default, engines are enabled at a path matching their type; for example, the aws engine enables at aws/. The disable operation disables an existing secret engine. When a secret engine is disabled, all of its secrets are revoked (if the engine supports revocation), and all of the data stored for that engine in the physical storage layer is deleted. The move operation moves the path for an existing secret engine. This process revokes all secrets, since secret leases are tied to the path they were created at; the configuration data stored for the engine, however, persists through the move. The tune operation tunes global configuration for the secret engine, such as time-to-lives (TTLs). Secret engines receive a barrier view to the configured Vault physical storage: when a secret engine is enabled, a random UUID is generated, and this becomes the data root for that engine. Whenever that engine writes to the physical storage layer, the write is prefixed with that UUID folder. Since the Vault storage layer doesn't support relative access (such as ../), this makes it impossible for an enabled secret engine to access other data. This is an important security feature in Vault: even a malicious engine cannot access the data of any other engine.
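As a rough sketch, the four lifecycle operations map onto vault secrets subcommands as shown below. The vault shell function at the top is a print-only stub so the sequence can be traced without a live server (delete it to run against a real Vault installation); the db-legacy/ path and the 72h TTL are made-up values for illustration:

```shell
# Stub: print each command instead of executing it against a server.
vault() { echo "vault $*"; }

# Enable the database secrets engine at its default path (database/):
vault secrets enable database

# Move it to a new path; this revokes existing leases but keeps the
# engine's stored configuration:
vault secrets move database/ db-legacy/

# Tune the engine's default lease TTL:
vault secrets tune -default-lease-ttl=72h db-legacy/

# Disable the engine, revoking its secrets and deleting its stored data:
vault secrets disable db-legacy/
```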

In this section, we'll cover how to manage and maintain secret engines. Secret engines are managed by running the vault secrets command together with one of its subcommands: disable, which disables a secret engine; enable, which enables a secret engine; list, which lists the currently enabled secret engines; move, which moves an already enabled secret engine to a new path; and tune, which tunes a secret engine's configuration, for example, altering its time-to-lives. In the example shown here, we mount the database secret engine twice, highlighting the fact that we can have multiple occurrences of the same secret engine as long as they are mounted at different, unique paths.
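That double-mount example can be sketched as follows; the vault function is again a print-only stub so the commands can be traced without a live server, and the two mount paths are invented for illustration:

```shell
# Stub: print each command instead of executing it against a server.
vault() { echo "vault $*"; }

# Mount the database secrets engine at two distinct paths; each mount
# is isolated, with its own configuration and storage namespace:
vault secrets enable -path=database-finance database
vault secrets enable -path=database-marketing database

# List the currently enabled secret engines to confirm both mounts:
vault secrets list
```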

In this section, we'll discuss how you can store sensitive data using the Key/Value secret engine. Most organizations own and retain some form of sensitive data. Sensitive data is data that shouldn't be seen or shared, and should remain confidential both at rest and in transit. Sensitive data in this context is more generally referred to as secrets. Storing and managing secrets in a secure way is often a challenge that requires careful planning. Some examples of secrets include customer payment data, such as credit card information; cluster configurations, including passwords; SSL private keys; API keys; and access tokens.

All of these types of secrets can be stored in the Vault Key/Value secret engine. Accessing these secrets can be achieved either by using the CLI or programmatically via the API. The Key/Value, or KV, secret engine is used to store arbitrary secrets within the configured physical storage for Vault. All secrets stored within the KV engine are encrypted using 256-bit AES in GCM mode with 96-bit nonces; the nonce is randomly generated for every encrypted object. The KV secret engine is enabled by default and is exposed under the secret/ path prefix. This path prefix tells Vault to route traffic to the KV secret engine. It is possible to mount the KV secret engine at alternative paths concurrently; in doing so, each concurrent KV secret engine mount is isolated and unique. Secrets are always stored as Key/Value pairs. Writing to an existing key in the KV secret engine will replace the previous value; subfields are not merged together. Let's take a look at some example commands for writing secrets using the KV secret engine. For starters, let's say we have a requirement to store an API key for Splunk. We would execute the following command: vault kv put secret/apikey/splunk apikey="the api key itself". Next, we can read values from within files stored in the local filesystem simply by prepending the @ character to the name of the file. In this case, we would execute the following command: vault kv put secret/apikey/splunk apikey=@apikey.txt.
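The two write commands above can be captured as a traceable sketch; vault is stubbed to print each command (delete the stub to run against a real server), and the API key value and the apikey.txt file name are placeholders:

```shell
# Stub: print each command instead of executing it against a server.
vault() { echo "vault $*"; }

# Store a (placeholder) Splunk API key supplied on the command line:
vault kv put secret/apikey/splunk apikey="1234-fake-key"

# Store the same secret, reading the value from a local file by
# prepending @ to the file name:
vault kv put secret/apikey/splunk apikey=@apikey.txt
```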

We can also supply multiple Key/Value pairs within a single execution of the vault kv put command as shown in the last example. In this example, the acme.txt file contains a JSON formatted collection of Key/Value pairs. When the vault kv put command references this file, it creates the same set of Key/Value pairs under the secret/customer/acme path. Retrieving secrets back out of the KV secret engine is simple and intuitive. The first example shows how to retrieve all Key/Value pairs stored under the secret/customer/acme path. The second example demonstrates how to selectively retrieve just the value stored against the contact_email key. In the next example, we highlight what happens when updating an existing key within an existing path. It's important to understand that in this scenario, a merge does not take place. Instead, the Key/Value engine replaces the previously stored secret with the new secret. Secrets can be easily deleted by executing the vault kv delete command together with the path where the secrets are stored. As with other parts of the Vault server, you can forgo the Vault CLI in favor of the Vault API. The Key/Value secret engine API supports all expected CRUD operations for secrets. 
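The write, retrieval, and deletion examples described above can be sketched as follows. The vault function is a print-only stub so the sequence can be traced without a live server (remove it to run the commands for real); the keys and values under secret/customer/acme are placeholders:

```shell
# Stub: print each command instead of executing it against a server.
vault() { echo "vault $*"; }

# Write several Key/Value pairs in a single call (placeholder values):
vault kv put secret/customer/acme name="ACME Inc." contact_email="admin@acme.com"

# Retrieve every Key/Value pair stored under the path:
vault kv get secret/customer/acme

# Retrieve only the value stored against the contact_email key:
vault kv get -field=contact_email secret/customer/acme

# Writing to the same path again replaces the stored secret wholesale;
# subfields are not merged:
vault kv put secret/customer/acme contact_email="support@acme.com"

# Delete the secrets stored at the path:
vault kv delete secret/customer/acme
```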

Key points when working with the Key/Value secret engine via the API are as follows. First, the API is accessed over a TLS connection at all times, which ensures that all secrets remain encrypted on the wire while in transit. Second, API routes should be prefixed with the Key/Value version. Third, a valid Vault token must be supplied in the X-Vault-Token HTTP header. The Vault API can be used to write, read, and delete secrets. The examples shown here use the curl utility to craft the different types of API operations, which are sent over HTTPS to the Vault server running behind the domain name vault.rocks. When using the Vault API, you need to use the correct HTTP verb, POST, GET, or DELETE, when writing, reading, or deleting secrets in the secret engine.
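A sketch of those API calls using curl. Here curl is stubbed to print each request rather than send it (delete the stub to issue real requests); the token is a placeholder, and the routes assume a Key/Value version 1 engine mounted at secret/ (with KV version 2 the route becomes /v1/secret/data/... and the JSON payload is wrapped in a "data" object):

```shell
# Stub: print each request instead of sending it over HTTPS.
curl() { echo "curl $*"; }

VAULT_TOKEN="s.example-token"   # placeholder token

# Write a secret (POST):
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     --request POST \
     --data '{"apikey": "1234-fake-key"}' \
     https://vault.rocks/v1/secret/apikey/splunk

# Read it back (GET is curl's default verb):
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     https://vault.rocks/v1/secret/apikey/splunk

# Delete it (DELETE):
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     --request DELETE \
     https://vault.rocks/v1/secret/apikey/splunk
```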

Okay, that completes this lecture on Vault secret engines. Go ahead and close this lecture, and we'll see you shortly in the next one.

About the Author

Students: 7694
Labs: 21
Courses: 52
Learning paths: 11

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.