HashiCorp Vault provides a simple and effective way to manage security in cloud infrastructure. Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing environments.
This course will enable you to recognize, explain, and implement the core services and functions provided by HashiCorp Vault in cloud infrastructure. The topics we cover are as follows:
- Vault architecture and its core components
- Vault policies and how they are used to grant or deny access to operations in Vault
- Secrets and secret management as performed within Vault
- Vault cubbyholes and how they can be utilized
- Vault dynamic secrets
- Vault authentication and Vault identities
This course will appeal to anyone looking to extend their knowledge of cloud security best practices, and to learn more about the tools and services available to help manage cloud security. If you are performing any of the roles below, we recommend completing this course.
- Architects and developers
- System administrators
- Security specialists
- DevOps specialists
- Anyone else interested in managing and maintaining secrets
At the end of this course you will be able to explain and implement the HashiCorp Vault service, and you will also be able to use the Vault CLI and API to execute tasks related to Vault administration. By completing this course, you will:
- Understand the core principles of Vault, including how Vault can be used to manage and maintain secrets
- Understand the key benefits of using Vault, including how to deploy and configure it within your own environments
- Be able to evaluate and select HashiCorp Vault services
- Know how to use the Vault CLI and API to execute tasks related to administration and configuration
We recommend completing the Cloud Academy DevOps Fundamentals Learning Path so you have a basic understanding of system administration and configuration tasks.
This course includes approximately 1.5 hours of high-definition video, split across 9 lectures.
We welcome all feedback. Please send any comments or questions on this course to us at firstname.lastname@example.org
In this lecture, we'll introduce you to the Cubbyhole Secrets Engine. The agenda for this lecture includes the following topics: the Cubbyhole Secrets Engine, where we'll discuss what cubbyholes are and how to use them; and response wrapping, where we'll discuss the challenges with sharing tokens and how response wrapping can help. And finally, we'll provide command examples as we progress.
In this section, we'll introduce you to the Cubbyhole Secrets Engine. The term cubbyhole is an American phrase for a locker or safe place where you store your belongings or valuables. In Vault, your cubbyhole is your locker: all secrets in it are namespaced under your token. If that token expires or is revoked, all the secrets in its cubbyhole are revoked as well.

The Cubbyhole Secrets Engine is used to store arbitrary secrets within the configured physical storage for Vault, scoped to a token. Paths are scoped per token, so it is not possible to reach into another token's cubbyhole; even the root token is prevented access. When a token expires, its cubbyhole is destroyed, ensuring that all secrets within are erased and no longer accessible.

By default, the Cubbyhole Secrets Engine is enabled and mounted at the cubbyhole/ path. The lifetime of a cubbyhole is linked to the token used to write the data into it. The Cubbyhole Secrets Engine is unique: unlike other secrets engines, you cannot move or disable it, nor enable it multiple times.

Let's walk through an example where we'll use the Vault CLI to store a secret in a cubbyhole using a non-root token, and then later attempt to access it using the root token. In the first command, we create a new non-root token and associate it with the default policy. Next, we authenticate into Vault using the newly generated token. In the context of our current non-root login session, we write a secret into our cubbyhole.
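As a sketch, the steps just described might look like the following. The token value and secret name here are illustrative, not taken from the lecture's own demo:

```shell
# Create a new non-root token attached to the default policy
vault token create -policy=default

# Authenticate into Vault using the newly generated token
# (the token value shown is illustrative)
vault login hvs.EXAMPLEnonRootToken

# Write a secret into our cubbyhole under the cubbyhole/ mount
vault write cubbyhole/github token=ghp_exampleGitHubToken
```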
In this case, we are simply storing a GitHub token. We can then read this secret back, and the read succeeds as expected, since we are still operating within the same login context that wrote the secret. Let's now change our login to that of the root user. Having successfully reauthenticated as root, we attempt to access and read the GitHub token from the same path as before, and as expected, we find that the cubbyhole is empty.

Similarly, we can fall back to using the Vault API to interact with the cubbyhole, storing, retrieving, or deleting secrets. In the examples shown here, we use the curl utility to craft our HTTP requests. We need to pass in a valid Vault token, setting it against the X-Vault-Token HTTP header. We choose the appropriate HTTP verb, and if we are writing a secret, we set the request body to the name and content of our secret.
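The equivalent API interactions with curl might look like the following sketch. The Vault address, token value, and secret path are assumptions for illustration:

```shell
# Write a secret into the cubbyhole (the secret goes in the request body)
curl --header "X-Vault-Token: hvs.EXAMPLEnonRootToken" \
     --request POST \
     --data '{"token": "ghp_exampleGitHubToken"}' \
     http://127.0.0.1:8200/v1/cubbyhole/github

# Read the secret back (GET is the default verb)
curl --header "X-Vault-Token: hvs.EXAMPLEnonRootToken" \
     http://127.0.0.1:8200/v1/cubbyhole/github

# Delete the secret
curl --header "X-Vault-Token: hvs.EXAMPLEnonRootToken" \
     --request DELETE \
     http://127.0.0.1:8200/v1/cubbyhole/github
```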
In this section, we'll introduce you to the concept of response wrapping. In many Vault deployments, clients can access Vault directly and consume the returned secrets. In other situations, it may make sense, or be desired, to separate privileges such that one trusted entity is responsible for interacting with most of the Vault API and passing secrets to the end consumer. However, the more relays a secret travels through, the more possibilities there are for accidental disclosure, especially if the secret is being transmitted in plaintext. Keep in mind that the intended end recipient of the token doesn't have to be human; the recipient can also be a machine, service, or app.

To help address this problem, Vault includes a feature called response wrapping. When requested, Vault takes the response it would have sent to a client and instead inserts it into the cubbyhole of a single-use token, returning that single-use token instead. Logically speaking, the response is wrapped by the token, and retrieving it requires an unwrap operation against that token.

Response wrapping provides a secure approach to secret sharing. It does so by performing the following duties. One, it provides cover, by ensuring that the value being transmitted across the wire is not the actual secret but a reference to it, namely the response-wrapping token. Information stored in logs or captured along the way does not directly expose the sensitive information. Two, it provides malfeasance detection, by ensuring that only a single party can ever unwrap the token and see what's inside. A client receiving a token that cannot be unwrapped can trigger an immediate security incident. In addition, a client can inspect a given token before unwrapping to ensure that its origin is the expected location in Vault.
And three, it limits the lifetime of secret exposure, because the response-wrapping token has a lifetime that is separate from that of the wrapped secret and can often be much shorter. So, if a client fails to unwrap the token in time, the token can expire very quickly.
In this example, we're using the Vault CLI to generate a wrapping token to be used by our Jenkins build server. The wrapping token is configured to expire after 60 seconds, as specified by the -wrap-ttl parameter. A trusted entity in possession of the wrapping token can perform a one-time unwrap to gain access to the actual Vault token.
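As a sketch, generating such a wrapped token might look like the following. The jenkins policy name is an assumption for illustration:

```shell
# Create a token attached to a (hypothetical) jenkins policy, but wrap
# the response in a single-use wrapping token valid for 60 seconds
vault token create -policy=jenkins -wrap-ttl=60s
```

The command output contains a wrapping_token value rather than the real token; that wrapping token is what we hand to the Jenkins build server.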
In this example, we're using the Vault CLI to unwrap our wrapping token simply by calling vault unwrap and supplying the wrapping token.
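Assuming we hold a wrapping token, the one-time unwrap might look like this sketch (the token value is illustrative):

```shell
# Perform the one-time unwrap to retrieve the wrapped response;
# a second attempt with the same wrapping token will fail
vault unwrap hvs.EXAMPLEwrappingToken
```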
Okay, that completes this lecture on the Cubbyhole Secret Engine. Go ahead and close this lecture and we'll see you shortly in the next one.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.