Amazon EC2 Container Service and Elastic Beanstalk: Docker on AWS

Technology keeps moving. In just a few years, we’ve gone from servers running on dedicated hardware, through virtualization, and on to cloud computing. And now we’ve reached the container age. As we will see, with the Amazon EC2 Container Service (ECS), AWS has made containers a major element of its deployment family. We’ll soon discuss three ways to run Docker containers on AWS, but first a few words about why you would want to.

Traditional servers living in data centers could do one thing at a time. There would often be one server for Active Directory, another for Exchange, and a third running a database, and it was common for any or all of these servers to have excess processing capacity going to waste. Virtualization was developed to put that spare capacity to use: a single physical server could host multiple virtual servers, allowing for far greater scalability and more efficient resource utilization.

But even virtualization required significant up-front hardware (and software license) investments and couldn’t completely remove the risks of excess capacity. That’s where cloud computing – with its multi-tenancy models, on-demand pricing, and dynamic scalability – proved so powerful.

But modern DevOps practices demand the ability to quickly build servers and ship code to be run in different environments. Welcome to the world of containers: extremely lightweight, abstracted user-space instances that can be easily launched on any compatible server and reliably provide a predictable experience. Containers – of which the best-known example is Docker – are able to achieve all this by capitalizing on the ability to share the Linux kernel of any host.

Docker

The DevOps development process depends, in large part, on portability. Even a slight difference between your development, test, and production environments may completely break your application. Traditional development models follow a change management process to solve these kinds of problems, but that process can’t keep up with today’s rapid build-and-deploy cycles. Docker streamlines and automates the software development process: you simply package an application, along with all its dependencies, into a standardized container (defined by a plain-text Dockerfile) that includes everything needed to run your application wherever you end up shipping it.
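To make that concrete, here is a minimal sketch of what such a Dockerfile might look like. The base image, package, and paths are illustrative assumptions, not taken from a real project:

  # Start from a public base image
  FROM ubuntu:16.04

  # Install the web server the application needs
  RUN apt-get update && apt-get install -y apache2

  # Copy the application's files into the image (assumes a local ./site directory)
  COPY ./site /var/www/html

  # Document the port the application listens on
  EXPOSE 80

  # Run Apache in the foreground when the container starts
  CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]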

Docker uses a client-server architecture: the Docker client talks to a daemon on the host, which builds images and then runs the containers.
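For example, both of the following commands are issued by the client but carried out by the daemon (the image name myapp is just a placeholder):

  # The client sends the build context to the daemon, which builds an image
  # from the Dockerfile in the current directory
  docker build -t myapp .

  # The daemon then creates and starts a container from that image
  docker run -d myapp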

In this post, we won’t talk all that much about running Docker natively. Instead, we will concentrate on AWS deployments. We’ll explore three ways to do that:

  1. Deploying Docker containers directly on an EC2 instance.
  2. Using Docker containers on Elastic Beanstalk.
  3. Managing Docker clusters with the Amazon EC2 Container Service (ECS).

Deploying Docker containers on an EC2 instance

Docker can run anywhere: on a racked server, an old laptop, and perhaps, if you worked at it hard enough, even on a smartphone. Since it requires only a Linux kernel, there’s no reason why it shouldn’t work on a Linux-powered Amazon EC2 instance. Let’s see how to install and run Docker on Amazon Linux.

    1. Launch an Amazon Linux Instance.
    2. SSH into your EC2 instance and install the Docker engine.
      sudo yum install -y docker
    3. Start the Docker service.
      sudo service docker start
    4. Sign up for a Docker Hub Account.
    5. Search for Ubuntu OS in Docker Hub.
      sudo docker search ubuntu
    6. Download the ubuntu image.
      sudo docker pull ubuntu
    7. Start the container. The -t and -i flags allocate a pseudo-TTY and keep STDIN open.
      sudo docker run -t -i ubuntu /bin/bash
    8. Inside the container’s shell (not on the host), install the Apache web server.
      apt-get update && apt-get install apache2
    9. Save your work as a new image. From the host, commit the changes using the container ID and an image name (the note after this list shows how to find the ID).
      docker commit a2d424f5655ea nitheesh86/apache
    10. Push the new image to Docker Hub.
      docker push nitheesh86/apache
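A quick note on step 9: the container ID shown there is just an example from the author’s session. You can look up your own container’s ID from the host, either in a second SSH session while the container is running or, after you’ve exited it, with the -a flag:

  # Running containers; the CONTAINER ID column holds the value to pass to "docker commit"
  sudo docker ps

  # Include stopped containers as well
  sudo docker ps -a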

At this point, sharing the image with others is easy. Just run this command on any other Docker-enabled server, anywhere on the Internet:

docker run -d -p 80:80 nitheesh86/apache /usr/sbin/apache2ctl -D FOREGROUND
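To confirm the container is actually serving traffic, you can hit the mapped port (80 in the command above) on the host; Apache’s default page should come back with an HTTP 200:

  curl -I http://localhost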

Using Docker containers on Elastic Beanstalk

Elastic Beanstalk is the AWS Platform as a Service (PaaS) offering. All you have to do is upload your application code, and Elastic Beanstalk takes care of deployment, load balancing, and capacity provisioning. With a single click, you can start all the necessary application servers running. There is no charge for the Elastic Beanstalk service itself, just for the AWS resources you actually use.


Since AWS integrated Docker into Elastic Beanstalk, you can build and test your application on your local workstation and then deploy it directly to Elastic Beanstalk. As always, you install all the required software and dependencies through the Dockerfile, which specifies the base image to use (Ubuntu, CentOS, etc.) and the volumes to map. Any new EC2 instance that Elastic Beanstalk launches to run your application will be configured according to the instructions in your Dockerfile.

Let’s deploy a sample application in a Docker container:

  1. Create a Dockerrun.aws.json file to deploy an existing Docker image. This file describes how to deploy a Docker container as an Elastic Beanstalk application; a sketch of what it might look like follows this list.

    Let me explain the file’s keys, one by one.

    • AWSEBDockerrunVersion – Specifies the version number as the value “1” for Single Container Docker environments.
    • Image – Specifies the Docker base image on an existing Docker repository from which you’re building your Docker container.
    • Ports – Lists the ports to expose on the Docker container.
    • Volumes – Maps volumes from an EC2 instance to your Docker container.
    • Logging – Maps the log directory inside the container.
  2. Create a zip file containing the Dockerfile or Dockerrun.aws.json plus any application files, then upload it to Elastic Beanstalk.
  3. From there, you can follow this guide to deploy your application.
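As mentioned in step 1, here is a minimal sketch of a single-container Dockerrun.aws.json. It reuses the nitheesh86/apache image pushed in the previous section; the volume paths and log directory are illustrative assumptions rather than values from the original post:

  {
    "AWSEBDockerrunVersion": "1",
    "Image": {
      "Name": "nitheesh86/apache",
      "Update": "true"
    },
    "Ports": [
      {
        "ContainerPort": "80"
      }
    ],
    "Volumes": [
      {
        "HostDirectory": "/var/app/mydata",
        "ContainerDirectory": "/var/www/html"
      }
    ],
    "Logging": "/var/log/apache2"
  }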

Docker cluster management using the EC2 Container Service (ECS)

When your containers grow from one to many, maintenance can become a significant burden, and managing containers at scale requires proper cluster management software. Since “Amazon” and “scale” are like two sides of the same coin, it wasn’t much of a surprise when AWS created the EC2 Container Service to handle installation, operation, scaling, and general cluster management for you.

Container instances communicate with the ECS service through an “ECS agent.” When you create a new cluster, its container instances are launched from an Amazon ECS-enabled AMI that includes this agent.
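If you SSH into one of your container instances, you can check that the agent is present. This is just a sketch; the container name filter and the introspection port below reflect the agent’s defaults:

  # The ECS agent itself runs as a container on every container instance
  sudo docker ps --filter "name=ecs-agent"

  # The agent also serves a local introspection endpoint with cluster metadata
  curl -s http://localhost:51678/v1/metadata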

EC2 Container Service Benefits

  • Simplified Cluster Management
    ECS creates and manages clusters of Docker containers. It can scale from one to thousands of containers spread across multiple Availability Zones.
  • Easy Scheduling
    ECS includes its own scheduler, which spreads containers across your cluster to balance availability and utilization.
  • Portability
    ECS uses the standard Docker daemon so that you can easily move your on-premises application to the AWS cloud and vice versa.
  • Resource Utilization
    A containerized application can make very efficient use of resources. You can choose to run multiple containers on the same EC2 instance.
  • Integrated with AWS services
    You can access most AWS features, such as Elastic IP addresses, VPC, ELB, and EBS.

Defining Amazon EC2 Container Service Building Blocks

  • Cluster
    A group of EC2 instances managed by ECS that lives inside your VPC. One cluster can contain multiple instance types and sizes and can be stretched across multiple Availability Zones.
  • Scheduler
    An integral part of each cluster. The scheduler is responsible for assigning containers to instances.
  • Container
    The lightweight user-space instances that execute your application. You can run any number of containers in a single cluster.
  • Task Definition
    Blueprints for your application that define the way your containers will work (a sketch of one follows this list).
  • ECS AMI
    An Amazon Machine Image that includes the ECS Agent.
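As referenced above, here is a minimal sketch of a task definition that could run the nitheesh86/apache image built earlier. The family name, CPU and memory values, and port mapping are illustrative choices, not values from the original walkthrough:

  {
    "family": "sample-apache",
    "containerDefinitions": [
      {
        "name": "apache",
        "image": "nitheesh86/apache",
        "cpu": 10,
        "memory": 300,
        "essential": true,
        "portMappings": [
          {
            "containerPort": 80,
            "hostPort": 80
          }
        ],
        "command": ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
      }
    ]
  }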

Launch a simple EC2 Container Service container

  1. Create a cluster.
  2. Create a task definition. To get used to the process the first time through, I would suggest you stick with the sample task definition that’s provided; you have the option of modifying its parameters (CPU resources or port mappings) before you move on.
  3. Choose how to run it. You can schedule a task to run once (which is ideal for batch jobs) or create a service to launch and maintain a specified number of copies of the task definition in your cluster. Since we want the sample application to run continuously, let’s choose “Create a Service.”
  4. Launch the containers. The more instances you have in your cluster, the more tasks it can run. Click Review and Launch.
  5. Click on your cluster, and at the bottom you’ll see an external link for your application. Click that link, and you should see the sample application running in your browser.
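For those who prefer the command line, the console walkthrough above can be approximated with the AWS CLI. This is only a sketch: it assumes the CLI is installed and configured, that a task definition like the one sketched earlier is saved locally as task-def.json, and that the cluster and service names are arbitrary choices.

  # Create an empty cluster
  aws ecs create-cluster --cluster-name sample-cluster

  # Register the task definition from a local JSON file
  aws ecs register-task-definition --cli-input-json file://task-def.json

  # Create a service that keeps two copies of the task running
  # (container instances launched from the ECS-enabled AMI must already be
  # registered to the cluster before tasks can be placed)
  aws ecs create-service --cluster sample-cluster \
    --service-name sample-apache-service \
    --task-definition sample-apache \
    --desired-count 2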


Written by

Nitheesh Poojary

My professional IT career began nine years back when I was just out of my college. I worked with a great team as an infrastructure management engineer, managing hundreds of enterprise application servers. I found my passion when I got the opportunity to work with Cloud technologies: I'm addicted to AWS Cloud Services, DevOps engineering, and all the cloud tools and technologies that make engineers' lives easier. Currently, I am working as a Solution Architect in SixNines IT. We are an experienced team of engineers that have helped hundreds of customers move to the cloud responsibly. I have achieved 5 AWS certifications, happily helping fellow engineers across the globe through my blogs and answering questions in various forums.

