Technology keeps moving. In just a few years, we’ve gone from servers running on dedicated hardware, through virtualization, to cloud computing. And now we’ve reached the container age. As we will see, the Amazon EC2 Container Service (ECS) has made containers a major element of the AWS deployment family. We’ll soon discuss three ways to run Docker containers on AWS, but first a few words about why you would want to.
Traditional servers living in data centers typically did one thing at a time: one server for Active Directory, another for Exchange, and a third running a database. It was common for any or all of these servers to have excess processing capacity going to waste. Virtualization was developed to make use of that extra capacity. Through virtualization, a single physical server could host multiple virtual servers, allowing for far greater scalability and more efficient resource utilization.
But even virtualization required significant up-front hardware (and software license) investments and couldn’t completely remove the risks of excess capacity. That’s where cloud computing – with its multi-tenancy models, on-demand pricing, and dynamic scalability – proved so powerful.
But modern DevOps practices demand the ability to quickly build servers and ship code to be run in different environments. Welcome to the world of containers: extremely lightweight, abstracted user-space instances that can be easily launched on any compatible server and reliably provide a predictable experience. Containers – of which the best-known example is Docker – are able to achieve all this by capitalizing on the ability to share the Linux kernel of any host.
The DevOps development process depends, in large part, on portability. Even a slight difference between your development, test, and production environments may completely break your application. Traditional development models follow a change management process to solve these kinds of problems, but that process doesn’t fit today’s rapid build-and-deploy cycles. Docker streamlines and automates the software development process: you simply package an application, along with references to all its dependencies, into standardized containers (defined by plain-text Dockerfiles) that include everything needed to run your application wherever you end up shipping it.
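To make that concrete, here’s a minimal sketch of a Dockerfile (the base image and package are illustrative; they mirror the Apache container we’ll build by hand in the next section):
# Start from the public Ubuntu base image
FROM ubuntu:14.04
# Install the Apache web server inside the image
RUN apt-get update && apt-get install -y apache2
# Document the port the web server listens on
EXPOSE 80
# Run Apache in the foreground so the container stays alive
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]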
Docker uses a client-server architecture: the Docker client talks to a daemon on the host, which both builds and runs your containers.
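You can see both halves of this architecture with a single command; docker version prints separate Client and Server (daemon) sections:
docker version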
In this post, we won’t talk all that much about running Docker natively. Instead, we will concentrate on AWS deployments. We’ll explore three ways to do that:
- Deploying Docker containers directly on an EC2 instance.
- Using Docker containers on Elastic Beanstalk.
- Docker cluster management using the AWS EC2 Container Service.
Deploying Docker containers on an EC2 instance
Docker can run anywhere: on a racked server, an old laptop, and perhaps, if you worked at it hard enough, even a smartphone. Since it requires only a Linux kernel to run, there’s definitely no reason why it shouldn’t work on a Linux-powered Amazon EC2 instance. Let’s see how we can install and run Docker on Amazon Linux.
- Launch an Amazon Linux instance.
- SSH into your EC2 instance and install the Docker engine.
sudo yum install -y docker
- Start the Docker service.
sudo service docker start
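If you’d like to confirm the daemon is up before going further, a quick sanity check (not part of the original steps) is:
sudo docker info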
- Sign up for a Docker Hub Account.
- Search for the Ubuntu image on Docker Hub.
sudo docker search ubuntu
- Download the ubuntu image.
sudo docker pull ubuntu
- Start the container. The -t and -i flags allocate a pseudo-TTY and keep STDIN open.
sudo docker run -t -i ubuntu /bin/bash
- From the container’s shell (your prompt will have changed to something like root@<container-id>), install the Apache web server.
apt-get update && apt-get install -y apache2
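Before committing, you’ll need the container’s ID. As an illustrative aside (not one of the original steps): detach from the container with Ctrl-P followed by Ctrl-Q, or simply type exit, and then list containers from the host to find the ID:
sudo docker ps -a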
- Save the container. We need to commit these changes using the container ID (the ID below is from our session; yours will differ) and an image name.
docker commit a2d424f5655ea nitheesh86/apache
- Push the image to Docker Hub (you may need to authenticate first with docker login).
docker push nitheesh86/apache
At this point, sharing the container with others is easy. Just run this command on any other server, anywhere on the Internet:
docker run -d -p 80:80 nitheesh86/apache /usr/sbin/apache2ctl -D FOREGROUND
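To sanity-check the deployment (an illustrative aside; the URL assumes the port mapping above and that you’re on the host itself), confirm Apache answers on port 80:
curl -I http://localhost/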
Using Docker containers on Elastic Beanstalk
Elastic Beanstalk is AWS’s Platform as a Service (PaaS) offering. All you have to do is upload your application code, and Elastic Beanstalk takes care of deployment, load balancing, and capacity provisioning. With a single click, you can start all the necessary application servers running. There is no charge for the Elastic Beanstalk service itself, just for the AWS resources you actually use.
Since AWS integrated Docker into Elastic Beanstalk, you can build and test your application on your local workstation and then deploy it directly to Elastic Beanstalk. As always, you install all the required software and dependencies through the Dockerfile. This file specifies the image to be used (Ubuntu, CentOS, etc.) and the volumes to be mapped. Any new EC2 instance Elastic Beanstalk launches to run your application will be configured based on the commands in your Dockerfile.
Let’s deploy a sample application in a Docker container.
- Create a Dockerrun.aws.json file to deploy an existing Docker image. This file describes how to deploy a Docker container as an Elastic Beanstalk application. Your Dockerrun.aws.json file should look something like this (the image name, volume paths, and log directory below are illustrative, reusing the Apache image we pushed earlier):
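{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "nitheesh86/apache",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/var/app/current",
      "ContainerDirectory": "/var/www/html"
    }
  ],
  "Logging": "/var/log/apache2"
}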
Let me explain this, line by line.
- AWSEBDockerrunVersion – Specifies the version number as the value “1” for Single Container Docker environments.
- Image – Specifies the Docker base image on an existing Docker repository from which you’re building your Docker container.
- Ports – Lists the ports to expose on the Docker container.
- Volumes – Maps volumes from an EC2 instance to your Docker container.
- Logging – Maps the log directory inside the container.
- Create a zip file containing your Dockerfile or Dockerrun.aws.json and any application files, then upload it to Elastic Beanstalk.
- From there, you can follow this guide to deploy your application.
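As a concrete sketch (the file name is illustrative): when you’re deploying an existing image via Dockerrun.aws.json alone, the bundle can be as small as that one file:
zip myapp.zip Dockerrun.aws.json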
Docker cluster management using the EC2 Container Service (ECS)
When your containers grow from one to many, maintenance can become a significant burden. Managing containers at scale requires proper cluster management software. Since “Amazon” and “scale” practically go hand in hand, it wasn’t much of a surprise when they created the AWS EC2 Container Service to handle the installation, operation, scaling, and general cluster management for you.
Container instances communicate with the ECS service through an “ECS agent.” When you launch instances into a new cluster from the Amazon ECS-optimized AMI, the agent comes preinstalled and registers them with your cluster.
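If you’re curious whether the agent is running on a container instance, it exposes a local introspection endpoint you can query from the instance itself (port 51678 is the agent’s default):
curl http://localhost:51678/v1/metadata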
EC2 Container Service Benefits
- Simplified Cluster Management
ECS creates and manages clusters of Docker containers. It can scale from one to thousands of containers, spread across multiple Availability Zones.
- Easy Scheduling
ECS provides its own scheduler, which spreads containers across your cluster to balance availability and utilization.
- Portability
ECS uses the standard Docker daemon, so you can easily move your on-premises application to the AWS cloud and vice versa.
- Resource Utilization
A containerized application can make very efficient use of resources. You can choose to run multiple containers on the same EC2 instance.
- Integrated with AWS services
You can access most other AWS features, such as Elastic IP addresses, VPC, ELB, and EBS.
Defining Amazon EC2 Container Service Building Blocks
- Cluster
A group of EC2 instances managed by ECS that lives inside your VPC. One cluster can contain multiple instance types and sizes and can be stretched across multiple Availability Zones.
- Scheduler
An integral part of each cluster. The scheduler is responsible for assigning containers to instances.
- Container Instances
The virtual machines that execute your application. You can run any number of containers in a single cluster.
- Task Definition
Blueprints for your application that define the way your containers will work.
- ECS AMI
An Amazon Machine Image that includes the ECS Agent.
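To make the “blueprint” idea concrete, here’s a minimal sketch of a task definition (the family, container name, and CPU/memory values are illustrative; the image reuses the one we pushed to Docker Hub earlier):
{
  "family": "sample-webapp",
  "containerDefinitions": [
    {
      "name": "apache",
      "image": "nitheesh86/apache",
      "cpu": 256,
      "memory": 512,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ]
    }
  ]
}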
Launch a simple EC2 Container Service container
Create a cluster.
Create task definitions. To get used to the process the first time through, I would suggest you stick with the sample task definition that’s provided.
Create a task definition from the sample. You have the option of modifying its parameters (such as CPU resources or port mappings).
You can schedule a task to run once (which is ideal for batch jobs) or create a service to launch and maintain a specified number of copies of the task definition in your cluster. So that our sample application will run continuously, let’s choose “Create a Service.”
Here’s the final step to launch our containers. The more instances you have in your cluster, the more tasks you can run on them. Click Review and Launch.
Click on your cluster, and at the bottom you’ll see an external link for your application.
When you click on that link you should be able to see your sample application running in the browser.
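If you’d rather script these steps than click through the console, the AWS CLI exposes equivalent operations. A rough sketch, assuming the CLI is installed and configured, and that the task definition JSON above is saved as task-definition.json (the cluster and service names are illustrative):
aws ecs create-cluster --cluster-name demo-cluster
aws ecs register-task-definition --cli-input-json file://task-definition.json
aws ecs create-service --cluster demo-cluster --service-name demo-service --task-definition sample-webapp --desired-count 2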