What is HashiCorp Vault? How to Secure Secrets Inside Microservices

Whether you are a developer or a system administrator, you will have to manage the issue of sharing “secrets” or secure information. In this context, a secret is any sensitive information that should be protected. For example, if lost or stolen, your passwords, database credentials, or cloud provider keys could damage your business. Safely storing and sharing this information becomes more difficult with modern, complex infrastructures. In today’s post, we’re going to explore how to get started with HashiCorp Vault and how secure information can be managed in a microservice, Docker-based environment.

The drawbacks of common approaches

To deal with the problem of managing secure information, developers and sysadmins can choose from a few common approaches:

  • Stored in the image: While this approach is easy to achieve, it’s one that should be avoided in any production environment. Secrets are accessible by anyone who has access to the image and because they will persist in the previous layers of the image, they cannot be deleted.
  • Environment variables: When starting up our containers, we can easily set the environment variables using the -e Docker run parameter. This approach is much better than the previous one but it still has some drawbacks. For example, a common security gap is that secrets could appear in debug logs.
  • Secrets mounted in volumes: We can create a file that stores our secrets and then mount it at container startup. This is easily done and probably better than the previous approaches. However, it becomes difficult to manage in infrastructures with a large number of running containers, where each container needs only a small subset of the secrets.

In addition to the cons mentioned above, all of these approaches share some common problems, including:

  • Secrets are not managed by a single source. In complex infrastructures, this is a big problem and ideally, we want to manage and store all of our secrets from a single source.
  • If secrets have an expiration time, we will be required to perform some manual actions to refresh them.
  • We cannot share just a subset of our credentials with specific users or services.
  • We do not have any audit logs to track who requested a particular secret and when, or any logs for failed requests. These are things that we should be aware of since they could represent potential external attacks.
  • Even if we find an external attack, we don’t have an easy way to perform a break-glass procedure to stop secrets from being shared with external services or users.

All of the above problems can be easily mitigated and managed using a dedicated tool such as HashiCorp Vault. This makes particular sense in a microservice environment where we want to manage secrets from a single service and expose them as a service to any allowed service or user.

What is HashiCorp Vault? 

From the official Vault documentation:

Vault secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing. Vault handles leasing, key revocation, key rolling, and auditing. Through a unified API, users can access an encrypted Key/Value store and network encryption-as-a-service, or generate AWS IAM/STS credentials, SQL/NoSQL databases, X.509 certificates, SSH credentials, and more.

Using Vault, we can delegate the management of our secrets to a single tool. Vault takes care of encrypting each secret both at rest and in transit. It has built-in support for several authentication, storage, and audit backends, and it was built with high availability in mind. Vault also makes it easy to set up multi-datacenter replication.

Get started with HashiCorp Vault

Vault makes use of a storage backend to securely store and persist encrypted secrets. In today’s example, we’ll use the PostgreSQL backend. We will begin by starting a container named vault-storage-backend from the official PostgreSQL image with vault as database name, username, and password:

$ docker run -d -e POSTGRES_PASSWORD=vault -e POSTGRES_USER=vault -e POSTGRES_DB=vault --name vault-storage-backend postgres

Since Vault’s PostgreSQL storage backend will not automatically create anything once set up, we need to execute some simple SQL queries to create the required schema and indexes.

Let’s connect to the Docker container and open a PSQL session:

$ docker exec -it vault-storage-backend bash
$ su - postgres
$ psql vault

Required schema and indexes can be easily created by executing the following SQL statements:

CREATE TABLE vault_kv_store (
    parent_path TEXT COLLATE "C" NOT NULL,
    path        TEXT COLLATE "C",
    key         TEXT COLLATE "C",
    value       BYTEA,
    CONSTRAINT pkey PRIMARY KEY (path, key)
);

CREATE INDEX parent_path_idx ON vault_kv_store (parent_path);

We don’t need to do anything else inside the PostgreSQL container, so we can close the session and go back to the host terminal.

Now that PostgreSQL is properly configured, we need to create a configuration file to inform Vault that its storage backend will be the vault database inside the vault-storage-backend container. Let’s do that by defining the following configuration file named config.hcl.

# config.hcl
{
  "backend": {"postgresql": {"connection_url": "postgres://vault:vault@storage-backend:5432/vault?sslmode=disable"}},
  "listener": {"tcp": {"address": "0.0.0.0:8200", "tls_disable": 1}}
}

Vault provides Access Control Policies (ACLs) to define rules that allow or deny access to specific secrets. Before proceeding, let’s define a simple policy file that grants read-only access to every secret under the secret/web path to any authenticated user or service associated with that policy:

# web-policy.hcl
path "secret/web/*" {
  policy = "read"
}
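Conceptually, a trailing * in a policy path acts as a prefix wildcard. As a rough illustration (this is not Vault’s actual matching code, just a sketch of the idea), the check behaves like:

```python
# Toy illustration of how a policy path with a trailing glob, such as
# "secret/web/*", is matched against a requested path. In Vault, a
# trailing "*" behaves as a prefix match; paths without it must match exactly.
def path_allowed(policy_path: str, requested_path: str) -> bool:
    if policy_path.endswith("*"):
        return requested_path.startswith(policy_path[:-1])
    return requested_path == policy_path

# secret/web/web-apps falls under the policy; secret/hello does not
assert path_allowed("secret/web/*", "secret/web/web-apps")
assert not path_allowed("secret/web/*", "secret/hello")
```

This is exactly the behavior we will verify later: a user holding this policy can read secret/web/web-apps but gets a permission-denied error on secret/hello.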

Both files will be stored inside a Docker data container to be easily accessible from other linked containers. Let’s create the container by executing:

$ docker create -v /config -v /policies --name vault-config busybox

Next, we will copy both of the files inside it:

$ docker cp config.hcl vault-config:/config/
$ docker cp web-policy.hcl vault-config:/policies/

Since we want to make use of Vault’s auditing capabilities and we want to make logs persistent, we will store them in a local folder on the host and then mount it in Vault’s container. Let’s create the local folder:

$ mkdir logs

Finally, we can start our Vault server by launching a container named vault-server:

$ docker run \
  -d \
  -p 8200:8200 \
  --cap-add=IPC_LOCK \
  --link vault-storage-backend:storage-backend  \
  --volumes-from vault-config \
  -v $(pwd)/logs:/vault/logs \
  --name vault-server \
  vault server -config=/config/config.hcl

As you can see, we are using the official Vault image available on Docker Hub. Vault is running on port 8200 inside the container and that port is exposed on port 8200 of the localhost. The PostgreSQL container is linked and aliased as storage-backend inside the container, which is the same alias used in the configuration file config.hcl. Volumes are mounted from the data container named vault-config, and the localhost’s logs folder is mounted at /vault/logs/ inside the container. Finally, we have started Vault using the configuration defined in the config.hcl configuration file.

To interact with Vault from the localhost, we can define an alias and set the Vault address:

$ alias vault='docker exec -it vault-server vault "$@"'
$ export VAULT_ADDR=http://127.0.0.1:8200

We can then initialize Vault by executing:

$ vault init -address=${VAULT_ADDR}

We will receive an output similar to the following:

Unseal Key 1: QZdnKsOyGXaWoB2viLBBWLlIpU+tQrQy49D+Mq24/V0B
Unseal Key 2: 1pxViFucRZDJ+kpXAeefepdmLwU6QpsFZwseOIPqaPAC
Unseal Key 3: bw+yIvxrXR5k8VoLqS5NGW4bjuZym2usm/PvCAaMh8UD
Unseal Key 4: o40xl6lcQo8+DgTQ0QJxkw0BgS5n6XHNtWOgBbt7LKYE
Unseal Key 5: Gh7WPQ6rWgGTBRSMecuj8PR8IM0vMIFkSZtRNT4dw5MF
Initial Root Token: 5b781ff4-eee8-d6a1-ea42-88428a7e8815
Vault initialized with 5 keys and a key threshold of 3. Please
securely distribute the above keys. When the Vault is re-sealed,
restarted, or stopped, you must provide at least 3 of these keys
to unseal it again.
Vault does not store the master key. Without at least 3 keys,
your Vault will remain permanently sealed.

The Vault was successfully initialized and now it is in a sealed state. In order to start interacting with it, we will first need to unseal it.

In the previous output, we can see five different unseal keys. This is because Vault makes use of Shamir’s Secret Sharing. Basically, this means that we will need to provide at least three of the five generated keys to unseal the vault. That’s why each key should be shared with a single person inside your organization/team. In this way, a single malicious person will never be able to access the vault to steal or modify your secrets. The number of generated and required keys can be modified when you initially set up your Vault.
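To get a feel for why any three of the five keys suffice, here is a toy sketch of Shamir’s Secret Sharing in Python (purely illustrative; Vault’s real implementation differs): the secret becomes the constant term of a random polynomial over a prime field, each share is a point on that polynomial, and any threshold-many points reconstruct the constant term by Lagrange interpolation.

```python
# Toy Shamir's Secret Sharing: split an integer secret into n shares so
# that any `threshold` of them recover it, but fewer reveal nothing useful.
import random

PRIME = 2 ** 127 - 1  # a Mersenne prime large enough for this toy example

def split_secret(secret, shares=5, threshold=3):
    """Split `secret` (an int < PRIME) into `shares` points on a random
    polynomial of degree threshold-1 whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover_secret(points):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, shares=5, threshold=3)
assert recover_secret(shares[:3]) == 123456789   # any 3 shares work
assert recover_secret(shares[2:]) == 123456789
assert recover_secret(shares[:2]) != 123456789   # 2 shares are not enough
```

This is why distributing each unseal key to a different person works: no single key holder, and no pair of colluding holders, can reconstruct the master key on their own.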

Let’s unseal our vault using three of the provided keys:

$ vault unseal -address=${VAULT_ADDR} QZdnKsOyGXaWoB2viLBBWLlIpU+tQrQy49D+Mq24/V0B
$ vault unseal -address=${VAULT_ADDR} bw+yIvxrXR5k8VoLqS5NGW4bjuZym2usm/PvCAaMh8UD
$ vault unseal -address=${VAULT_ADDR} Gh7WPQ6rWgGTBRSMecuj8PR8IM0vMIFkSZtRNT4dw5MF

The final output will be:

Sealed: false
Key Shares: 5
Key Threshold: 3
Unseal Progress: 0
Unseal Nonce:

This means that the vault has been correctly unsealed and we can finally start interacting with it.

In addition to the unseal keys, we can find an Initial Root Token in the previous vault init command output. Authenticating to Vault using that token grants us root access. Let’s authenticate using it:

$ vault auth -address=${VAULT_ADDR} 5b781ff4-eee8-d6a1-ea42-88428a7e8815

The received output will be:

Successfully authenticated! You are now logged in.

Next, we need to enable Vault’s audit backend. To do that, execute the following:

$ vault audit-enable -address=${VAULT_ADDR} file file_path=/vault/logs/audit.log

From this point forward, every interaction with the Vault will be audited and persisted in a log file inside the logs folder on the localhost.

We can now write and read our first secret:

$ vault write -address=${VAULT_ADDR} secret/hello value=world
$ vault read -address=${VAULT_ADDR} secret/hello

The output will be exactly what we expect:

Key             	Value
---             	-----
refresh_interval	768h0m0s
value           	world

Next, let’s write the policy defined in the previous web-policy.hcl file so we can verify that ACLs are working as expected:

$ vault policy-write -address=${VAULT_ADDR} web-policy /policies/web-policy.hcl

Now we can write a new secret inside secret/web path:

$ vault write -address=${VAULT_ADDR} secret/web/web-apps db_password='password'

Vault has built-in support for many different authentication systems. For example, we can authenticate users using LDAP or GitHub. We want to keep things simple here, so we will make use of the Username & Password authentication backend. We first need to enable it:

$ vault auth-enable -address=${VAULT_ADDR} userpass

Next, let’s create a new user associated with the policy web-policy, using web as both username and password:

$ vault write -address=${VAULT_ADDR} auth/userpass/users/web password=web policies=web-policy

Let’s authenticate this new user to Vault:

$ vault auth -address=${VAULT_ADDR} -method=userpass username=web password=web

Vault informs us that we have correctly authenticated, and since the policy associated with the user has read-only access to the secret/web path, we are able to read the secrets inside that path by executing:

$ vault read -address=${VAULT_ADDR} secret/web/web-apps

However, if we try to execute:

$ vault read -address=${VAULT_ADDR} secret/hello

We will receive the following:

Error reading secret/hello: Error making API request.
Code: 403. Errors:
* permission denied

This means that Vault’s ACL checks are working fine. We can also see the denied request in the audit logs by executing:

$ tail -f logs/audit.log

In fact, in the output we will see:

   "error":"permission denied"

In this scenario, we could easily integrate external services such as AWS CloudWatch and AWS Lambda to revoke access to users or completely seal the vault.
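As a minimal sketch of such an integration, a watcher could tail the JSON audit log and flag denied requests. The field layout below follows the audit output shown above; the file name and exact schema are assumptions to adapt to your own setup:

```python
# Sketch: scan Vault's file audit log (JSON, one entry per line) and
# yield the request path of every entry that was denied.
import json

def denied_requests(lines):
    """Yield (path, error) for every audit entry denied by Vault's ACLs."""
    for line in lines:
        try:
            entry = json.loads(line)
        except ValueError:
            continue  # skip partially written or malformed lines
        if entry.get("error") == "permission denied":
            yield entry.get("request", {}).get("path"), entry["error"]

sample = '{"error": "permission denied", "request": {"path": "secret/hello"}}'
assert list(denied_requests([sample])) == [("secret/hello", "permission denied")]
```

A real watcher would feed these events to an alerting system, which could then trigger a token revocation or a full seal, as shown next.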

For example, if we would like to revoke access for the web user, we could execute:

$ vault token-revoke -address=${VAULT_ADDR} -mode=path auth/userpass/users/web

Or if we would like to completely seal the vault, we can execute:

$ vault seal -address=${VAULT_ADDR}

Let’s now imagine that we have an external service running on a different container that needs access to some secrets stored with Vault. Let’s start a container from the official Python image and directly attach to its Bash.

$ docker run -it --link vault-server:vault-server python bash

To programmatically interact with Vault we first need to install the official Python client for Vault, called hvac.

$ pip install hvac

Let’s now try to access some secrets from this new container via Vault:

import hvac

client = hvac.Client(url='http://vault-server:8200')
# We authenticate to Vault as the web user
client.auth_userpass('web', 'web')
# This will work, since web-policy grants read access to secret/web/*
print(client.read('secret/web/web-apps'))
# This will fail with a permission-denied error, since the authenticated
# user is associated with the web-policy ACL and cannot read secret/hello
print(client.read('secret/hello'))


Today we have seen how secret management can be delegated to a single point of access using HashiCorp Vault, and how Vault can be set up in a microservice, container-based environment. We have only scratched the surface of Vault’s features and capabilities.

To get started with the HashiCorp Vault course, sign in to your Cloud Academy account. I also highly recommend spending some time with the official Getting Started guide to dig deeper into Vault’s concepts and functionality.

Written by

Luca Zacchetti

Luca has several years of experience as a developer working for different IT companies, with a strong focus on Python, Linux, and PostgreSQL. He loves writing simple, clean, and pragmatic code to solve complex problems and has a deep passion for DevOps tools and strategies.
