How to deploy Docker containers on AWS Elastic Beanstalk

In this post, we are going to look at how to deploy two Docker containers on an AWS Elastic Beanstalk application. 
Today, Docker containers are used by many companies in sophisticated microservice infrastructures. From a developer’s point of view, one of the biggest benefits of Docker is its ability to package our code into reusable images, which assures us that our code will work as expected wherever it runs. Docker also makes development easier and faster: project dependencies are installed into isolated containers without the need to install them on our local machine, allowing us to develop applications with different requirements in a secure and isolated way. 
Too often, developers are stuck trying to find an easy way to run their containers in production. With so many different technologies available, choosing among them isn’t easy. Topics like high availability, scalability, fault tolerance, monitoring, and logging should always be part of a solid production environment, but without enough knowledge they may be difficult to achieve with containerized applications.

Docker containers on AWS Elastic Beanstalk 

AWS Elastic Beanstalk is a service for quickly deploying and scaling applications in the Amazon cloud. This includes applications developed with Java, .NET, PHP, Python, Ruby, and Docker. Its support for Docker containers makes it an excellent choice for deploying Dockerized applications into solid production environments that are easy to manage and update.
Today, we’ll use a practical example to show how easy it is to deploy Dockerized applications. The code we will use is available in this GitHub repo. Feel free to clone it locally to follow along.

The scenario

Our scenario is a basic web application with a single “Hello World” API endpoint served by a proxy server. We are going to implement it with two containers. The first container runs a Flask application with uWSGI, and the second container runs Nginx as a reverse proxy.

Local environment

We want to start by declaring our project dependencies in a file called requirements.txt:

Flask==0.12
uWSGI==2.0.14

Our desired application behavior can be implemented with Flask by creating a file called main.py with this content:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    # Return a JSON-encoded "Hello World!" string
    return jsonify('Hello World!')

if __name__ == '__main__':
    # Start the Flask development server when executed directly
    app.run(port=9000)
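
During development, we can also install the dependencies and run the built-in Flask development server for a quick check (this assumes a local Python environment matching the python:2.7 image we’ll use below):

pip install -r requirements.txt
python main.py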

To locally spin up our application with uWSGI, we can execute:

uwsgi --socket 0.0.0.0:9000 --protocol http -w main:app
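
With the server running, a quick request should return our JSON greeting (assuming nothing else is bound to port 9000):

curl http://127.0.0.1:9000/
# Expected output: "Hello World!"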

Our application is ready. Now, we can define how to containerize it with the following Dockerfile:

# Base image with Python 2.7, matching our local environment
FROM python:2.7
EXPOSE 9000
# Install the dependencies first so Docker can cache this layer
COPY requirements.txt /
RUN ["pip", "install", "-r", "requirements.txt"]
# Copy the application code
COPY main.py /
CMD ["uwsgi", "--socket", "0.0.0.0:9000", "--protocol", "http", "-w", "main:app"]

To build it, simply execute the command “docker build -t server .”. Docker will build an image called “server” with our code. Once the build is complete, we can start a container from it by executing “docker run -d -p 9000:9000 --name server server”. If we open a browser to http://127.0.0.1:9000/ we will see our “Hello World” page.
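
For reference, the full sequence looks like this (run from the folder containing the Dockerfile):

# Build the "server" image from the Dockerfile in the current directory
docker build -t server .
# Start a detached container, publishing port 9000 on the host
docker run -d -p 9000:9000 --name server server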
Next, we should add a second container running Nginx as a reverse proxy in front of our web server container running uWSGI. Start by creating this configuration file, called default.conf, inside another folder:

server {
  listen 80;
  location / {
    proxy_pass http://server:9000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}

This can now be packed into an image with this simple Dockerfile placed in the same folder as default.conf:

# Official Nginx base image
FROM nginx:latest
# Replace the default site configuration with our reverse proxy config
COPY default.conf /etc/nginx/conf.d/
EXPOSE 80
# Run Nginx in the foreground so the container stays alive
CMD ["nginx", "-g", "daemon off;"]

We can build the image for our proxy container by executing the command “docker build -t proxy .” (note the final dot). Now, we are able to start a new container from our image by executing “docker run -d -p 8080:80 --link server:server --name proxy proxy”. The --link flag makes the hostname “server” used in default.conf resolve to our application container. If we open our browser to http://127.0.0.1:8080/ we will see that our request is proxied by the Nginx container through to the application running uWSGI.
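
Again for reference, run these from the folder containing the proxy’s Dockerfile, with the server container already running:

# Build the "proxy" image
docker build -t proxy .
# Start the proxy, linking it to the running "server" container
docker run -d -p 8080:80 --link server:server --name proxy proxy
# The proxied endpoint should now answer on port 8080
curl http://127.0.0.1:8080/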
Now that we have achieved our goal locally, it’s time to replicate the same situation in a production environment.

Production environment

To start, we should store our images in a secure Docker repository. If we want a private and cost-effective repository within our AWS account, we can use AWS Elastic Container Registry (ECR). Otherwise, we can simply push our images to Docker Hub. Using ECR is simple and fast: log into the AWS console, select ECS, and create two new repositories for our images. ECR will provide us with the instructions for pushing our local images.
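
As a rough sketch, a push to ECR with a recent AWS CLI looks like the following — the account ID (123456789012), region (us-east-1), and repository name (eb-docker-server) are placeholders, so substitute your own values or simply follow the commands shown in the ECR console:

# Authenticate the Docker CLI against our private ECR registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag the local image with the repository URI and push it
docker tag server 123456789012.dkr.ecr.us-east-1.amazonaws.com/eb-docker-server:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/eb-docker-server:latest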
Before going to production, the last thing we need to do is define a configuration file that tells Elastic Beanstalk how to use our images. This can be done by creating a file called Dockerrun.aws.json:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "server",
      "image": "lzac/eb-docker-server",
      "essential": true,
      "memory": 200,
      "cpu": 1
    },
    {
      "name": "proxy",
      "image": "lzac/eb-docker-proxy",
      "essential": true,
      "memory": 200,
      "cpu": 1,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "server:server"
      ]
    }
  ]
}

As you can see, we are defining the same setup that we had locally: one container running the application server and another running the proxy, linked together using the standard Docker linking pattern. For the purposes of this post, the images are publicly stored on Docker Hub. Modify their URLs if you are following along and you pushed them to ECR or a different Docker registry.
Everything we need to run our containers in production is ready. From the AWS console, go to Elastic Beanstalk and start creating a new application. The only information we need to provide is the following:

  • Application name: Create your own name.
  • Application type: Select “Multi-container Docker.”
  • Application code: Select “Upload your code” and upload the Dockerrun.aws.json file.

Next, click on “Configure more options” and modify the “Capacity” section by selecting “Load Balanced” as the environment type, using from two to four instances in any availability zone.
Now, click “Create app.” Elastic Beanstalk will start provisioning every resource needed to run our code within the AWS cloud. It will create:

  • S3 bucket
  • Security group
  • Load balancer
  • Auto Scaling group
  • CloudWatch metrics and alarms
  • EC2 instances
  • ECS cluster

Through these services, Elastic Beanstalk automatically configures many of the features that should always be part of a production environment’s best practices. The provisioned EC2 instances have Docker pre-installed. Once launched, they pull both images from the repository and start the required containers, linked as defined in our configuration file. Once deployed, we can see the correct output of our application by opening a browser to the public URL provided by AWS Elastic Beanstalk. We can now make use of the powerful Elastic Beanstalk configuration panel to modify a variety of settings for our application in just a few minutes; Elastic Beanstalk will transparently apply them for us.
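
As a side note, the same deployment can also be scripted with the Elastic Beanstalk CLI instead of the console — a minimal sketch, assuming the EB CLI is installed and the current folder contains our Dockerrun.aws.json (the environment name my-docker-env is just an example):

# Initialize the project interactively, choosing the Multi-container Docker platform
eb init
# Create a load-balanced environment with two instances and deploy the current folder
eb create my-docker-env --scale 2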

AWS Elastic Beanstalk console

Let’s take a quick look at each section of the Elastic Beanstalk console to see what we can do:

  • Scaling: Our environment can be switched to a single instance, or we can easily modify the number of instances needed to run our application and the triggers used to scale the number of instances up or down. This section lets us easily set up the horizontal scaling we want.
  • Instances: We can modify the instance type, the key pair used to SSH into the instances, and the IAM role they assume. This is vertical scaling made simple!
  • Notifications: A single field: add an email address that will receive Elastic Beanstalk event notifications.
  • Software configuration: We can select the instance profile for our application, enable S3 log file rotation, and configure AWS CloudWatch alarms. In the last part of the page, we can add environment variables that will be passed to our containers; we can use this capability to keep secrets and credentials out of our source code (see the sketch after this list).
  • Updates and deployments: This section lets us define how Elastic Beanstalk should manage new deployments. There are a lot of options here; if you would like to go deeper into this topic, check out the official AWS documentation.
  • Health: AWS Elastic Beanstalk uses a health check URL to find out whether the application is running well on each instance. This is useful for stopping instances that are not working as expected and starting new ones if needed. In this section, we can set this URL, define a service role for the monitoring system, and modify the health reporting for our environment.
  • Managed updates: If we want periodic platform updates, this is the section to use. We can define when updates should be applied and at which level.
  • Load balancing: We can modify the load balancer of our stack in this section. For example, if we want to set up HTTPS for our application, we can easily do so here.
  • VPC: In this section, we can easily define the availability zones where our resources should be placed. We can also define whether or not a public IP address should be used and the visibility of our load balancer.
  • Data tier: If the application makes use of an RDS database, we can use this section to define it.
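
Coming back to the environment variables mentioned under “Software configuration”, they can also be managed from the command line with the EB CLI — a quick sketch, where the variable names and values are placeholders:

# Set environment variables on the running environment (placeholder values)
eb setenv SECRET_KEY=change-me DATABASE_URL=postgres://user:pass@host/db
# Print the variables currently set on the environment
eb printenv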

To sum up, we’ve taken a look at how to deploy Docker containers on an AWS Elastic Beanstalk application. As you can see, AWS Elastic Beanstalk is very easy to use, and it is an excellent solution for deploying Docker containers on the AWS cloud. Developers are not forced to learn any new technologies, and deployments can be made without deep operations knowledge or experience.

Written by

Luca has several years of experience as a developer, working for different IT companies with a strong focus on Python, Linux, and PostgreSQL. He loves writing simple, clean, and pragmatic code to solve complex problems and has a deep passion for DevOps tools and strategies.
