How to Deploy Docker Containers on AWS Elastic Beanstalk Applications

In this post, we are going to look at how to deploy an application made of two Docker containers on AWS Elastic Beanstalk.

Today, Docker containers are used by many companies in sophisticated microservice infrastructures. From a developer’s point of view, one of the biggest benefits of Docker is its ability to package our code into reusable images. This assures us that our code will work as expected wherever it runs. Docker also makes development easier and faster: project dependencies are installed into isolated containers without the need to install them on our local machine, which allows us to develop applications with different requirements in a secure, isolated way.

Too often, developers are stuck trying to find an easy way to run their containers in production. With so many different technologies available, choosing among them isn’t easy. Topics like high availability, scalability, fault tolerance, monitoring, and logging should always be part of a solid production environment, but without enough knowledge they can be difficult to achieve with containerized applications.

Docker containers on AWS Elastic Beanstalk

AWS Elastic Beanstalk is a service for quickly deploying and scaling applications in the Amazon cloud. This includes services developed with Java, .NET, PHP, Python, Ruby, and Docker. Its support for Docker containers makes it an excellent choice for deploying Dockerized applications into solid production environments that are easy to manage and update.

Today, we’ll use a very practical example to show how easy it is to deploy Dockerized applications. The code we will use is available in this GitHub repo. Feel free to clone it locally to follow along with us.

The scenario

Our scenario is a basic web application with a single “Hello World” API endpoint served by a proxy server. We are going to implement it with two containers. The first container runs a Flask application with uWSGI, and the second container runs Nginx as a reverse proxy.

Local environment

We want to start by declaring our project dependencies in a file called requirements.txt:

Flask==0.12
uWSGI==2.0.14
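
If you want to try the application locally before containerizing it, the dependencies can be installed with pip (ideally inside a virtualenv, so nothing leaks into the system Python):

pip install -r requirements.txt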

Our desired application behavior can be easily implemented with Flask by creating a file called main.py with this content:

from flask import Flask, jsonify

app = Flask(__name__)


@app.route('/')
def index():
    return jsonify('Hello World!')


if __name__ == '__main__':
    app.run(port=9000)

To locally spin up our application with uWSGI, we can execute:

uwsgi --socket 0.0.0.0:9000 --protocol http -w main:app
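
With uWSGI running, we can verify the endpoint from another terminal:

curl http://127.0.0.1:9000/

The response should be the JSON string “Hello World!”.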

Our application is ready. Now, we can containerize it by defining the following Dockerfile:

FROM python:2.7
EXPOSE 9000
# Install the dependencies first so Docker can cache this layer across code changes
COPY requirements.txt /
RUN ["pip", "install", "-r", "requirements.txt"]
# Copy the application code
COPY main.py /
CMD ["uwsgi", "--socket", "0.0.0.0:9000", "--protocol", "http", "-w", "main:app"]

To build it, simply execute the command "docker build -t server .". Docker will build an image called "server" with our code. Once the build is complete, we can start a container from the image by executing "docker run -d -p 9000:9000 --name server server". If we open a browser to http://127.0.0.1:9000/, we will see our “Hello World” response.

Next, we should add a second container that runs Nginx as a reverse proxy in front of our web server container running uWSGI. Start by creating this configuration file, called default.conf, inside another folder:

server {
  listen 80;
  location / {
    proxy_pass http://server:9000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}

This can now be packed into an image with this simple Dockerfile, placed in the same folder as default.conf:

FROM nginx:latest
# Replace the default site configuration with our reverse proxy rules
COPY default.conf /etc/nginx/conf.d/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

We can build the image for our proxy container by executing the command "docker build -t proxy .". Now, we are able to start a new container from our image by executing "docker run -d -p 8080:80 --link server:server --name proxy proxy". If we open our browser to http://127.0.0.1:8080/, we will see that our request is proxied by the Nginx container through to the application running uWSGI.

Now that we have achieved our goal locally, it’s time to replicate the same situation in a production environment.

Production environment

To start, we should store our images in a secure Docker registry. If we want a private and cost-effective registry within our AWS account, we can use AWS Elastic Container Registry (ECR). Otherwise, we can simply push our images to Docker Hub. Using ECR is simple and fast: we just need to log into the AWS console, select ECS, and then create two new repositories for our images. ECR will provide us with instructions for pushing our local images.
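
As a rough sketch of what those instructions look like, the commands below create the repositories and push the images. The account ID (123456789012) and region (us-east-1) are placeholders, and the login command is the AWS CLI v2 form; the console shows the exact commands for your account:

# Create one repository per image
aws ecr create-repository --repository-name server
aws ecr create-repository --repository-name proxy
# Authenticate Docker against the registry
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag the local images with the repository URI and push them
docker tag server 123456789012.dkr.ecr.us-east-1.amazonaws.com/server:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/server:latest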

Before going into production, the last thing we need to do is define a configuration file that tells Elastic Beanstalk how to use our images. This is done by creating a file called Dockerrun.aws.json:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "server",
      "image": "lzac/eb-docker-server",
      "essential": true,
      "memory": 200,
      "cpu": 1
    },
    {
      "name": "proxy",
      "image": "lzac/eb-docker-proxy",
      "essential": true,
      "memory": 200,
      "cpu": 1,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "links": [
        "server:server"
      ]
    }
  ]
}

As you can see, we are defining the same setup that we had locally: one container running the application server and another one running the proxy, linked together using the standard Docker linking pattern. For the purposes of this post, the images are publicly stored on Docker Hub. Modify the image URLs if you are following along and pushed them to ECR or a different Docker registry.

Everything we need to run our containers in production is ready. From the AWS console, go to Elastic Beanstalk and start creating a new application. The only required information that we need to provide is the following:

  • Application name: Create your own name.
  • Application type: Select “Multi-container Docker.”
  • Application code: Select “Upload your code” and upload the Dockerrun.aws.json file.

Next, click on “Configure more options” and modify the “Capacity” section by selecting “Load Balanced” as the environment type, using from two to four instances in any availability zone. (If you prefer a terminal, the EB CLI sketch below performs the same steps.)
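
This is a minimal EB CLI sketch, assuming the EB CLI is installed: the application and environment names are our own choices, and the exact platform identifier varies by CLI version (run "eb platform list" to see what is available):

# Run from the folder containing Dockerrun.aws.json
eb init my-docker-app --region us-east-1 --platform multi-container-docker
# Create a load-balanced environment starting with two instances
eb create my-docker-env --scale 2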

Now, click “Create app.” Elastic Beanstalk will start provisioning every resource needed to run our code within the AWS cloud. It will create:

  • S3 bucket
  • Security group
  • Load balancer
  • Auto Scaling group
  • CloudWatch metrics and alarms
  • EC2 instances
  • ECS cluster

Through these services, Elastic Beanstalk automatically configures many of the features that should always be part of best practices in any production environment. Provisioned EC2 instances come with Docker pre-installed. Once started, they pull both images from the registry and start the required containers, linked together as defined in our configuration file. Once deployed, we can see the output of our application by opening a browser to the public URL provided by AWS Elastic Beanstalk.

We can now make use of the powerful Elastic Beanstalk configuration panel to modify a variety of settings for our application in just a few minutes. Elastic Beanstalk will transparently apply them for us.

AWS Elastic Beanstalk console

Let’s take a quick look at each section of the Elastic Beanstalk console to see what we can do:

  • Scaling: We can switch our environment to a single instance, or easily modify the number of instances needed to run our application and the triggers used to scale the number of instances up or down. This section lets us easily set up the horizontal scaling we want.
  • Instances: We can modify the type of the instances, the key pair used to SSH into them, and the IAM role they assume. This is vertical scaling made simple!
  • Notifications: A single field: an email address that will receive Elastic Beanstalk event notifications.
  • Software configuration: We can select the instance profile for our application, enable S3 log file rotation, and configure AWS CloudWatch alarms for our application. In the last part of the page, we can add environment variables that will be passed to our containers; this is a good place for secrets and credentials that should not be stored in our source code (see the CLI sketch after this list).
  • Updates and deployments: This section gives us the chance to define how new deployments should be managed by Elastic Beanstalk. We really have a lot of options here. If you would like to go deeper into this topic, check out the official AWS documentation here.
  • Health: AWS Elastic Beanstalk makes use of a health check URL to find out if the application is running well on each instance. This is useful for stopping instances that are not working as expected and starting new ones if needed. We can set this URL in this section, define a service role for the monitoring system, and modify the health reporting for our environment.
  • Managed updates: If we want periodic platform updates, this is the section to use. We can define when and at which level updates should be applied to our system.
  • Load balancing: We can modify the load balancer of our stack in this section. For example, if we want to set up HTTPS for our application, we can easily do so here.
  • VPC: In this section, we can easily define the availability zones where our resources should be placed. We can also define whether or not a public IP address should be used and the visibility of our load balancer.
  • Data tier: If the application makes use of an RDS database, we can use this section to define it.
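
As mentioned in the “Software configuration” item above, environment variables can also be managed from the EB CLI. A minimal sketch, with placeholder keys and values:

# Keys and values are placeholders; real secrets should never live in source control
eb setenv SECRET_KEY=changeme DB_PASSWORD=changeme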

To sum up, we’ve taken a look at how to deploy Docker containers on AWS Elastic Beanstalk. As you can see, AWS Elastic Beanstalk is very easy to use, and it is an excellent solution for deploying Docker containers on the AWS cloud. Developers are not forced to learn any new technologies, and deployments can be made without deep operations knowledge or experience.
