EC2 Container Registry/Container Service (ECR/ECS)

Overview
Difficulty: Beginner
Duration: 55m
Students: 5633
Description

Compute Fundamentals for AWS offers an updated introduction to AWS's cornerstone compute services and provides a foundation as you build your compute skills for AWS. It includes coverage of:

- Amazon Elastic Compute Cloud (EC2)
- Elastic Load Balancers (ELBs)
- Auto Scaling
- Amazon EC2 Container Registry and Container Service (ECR and ECS)
- AWS Elastic Beanstalk
- AWS Lambda


Transcript

The Amazon EC2 Container Service (ECS) is a highly scalable, fast container management service that makes it easy to run Docker containers on a cluster of EC2 instances. To follow along with this lesson, in addition to a computer with internet access and access to an AWS account, you will also need the AWS CLI and the Docker CLI installed. For instructions on installing Docker on your machine, please refer to the Docker website. ECS made its debut at AWS re:Invent 2014, and since then it has undergone a number of changes and been deployed in more regions, the latest being Singapore and Frankfurt. This introduction will explain what containers are and how containers can be created, managed, and configured. There is no cost for using the EC2 Container Service itself, but you will incur charges for EC2 instances and other services, such as ELB.

Before we dive into containers, let's look at what Docker is. Docker is a platform to build, deploy, and run applications using Linux containers (LXC). It offers a consistent runtime experience, whether running on a developer machine or in a production environment. The Docker engine runs on top of your operating system, which acts as the host operating system, and the engine is responsible for ensuring that the running containers use the underlying shared resources efficiently.

Containers are made up of images. An image in Docker represents a read-only layer, and you can add layers on top of other layers in order to build the system you need. For example, you may have a base image running Debian; you want to run an Apache web server, so you add an Apache layer. Each layer is mounted read-only and never changes. Any changes, such as edits to the Apache configuration files, are recorded to a writable layer that rests on top of the read-only image layers. After you have defined your container, you can launch multiple instances of it at one time, or move your container to another system and launch multiple instances there. All the while, the application experience remains exactly the same.

The EC2 Container Service is Docker compatible, and every EC2 instance that runs your containers hosts the Docker daemon and must also have the ECS agent installed, which is already the case for the ECS-optimized AMIs. This means you can take advantage of Docker repositories, whether the public Docker Hub, private repositories, or the Amazon EC2 Container Registry, by simply adding a reference to the repository in your task definition. No configuration changes are required on your part to get Docker images to work.

You will have noticed that I mentioned one of the Docker repositories was the Amazon EC2 Container Registry (ECR), which was launched at AWS re:Invent 2015. ECR is a fully managed Docker container registry that makes it easy for developers to store, manage, and deploy Docker container images without having to worry about managing the underlying infrastructure. ECR is highly secure: all images are transferred to the registry over HTTPS and are automatically encrypted at rest. It is currently available in US East 1, US West 1, and EU West 1.

Now that we have an understanding of what Docker is and of the EC2 Container and Registry services, let's do a walkthrough and create our registry and our first container.
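To make the layering model described above concrete, here is a minimal, hypothetical Dockerfile of the kind this walkthrough could use. It is a sketch, not the exact file from the demo: the Debian base tag and landing-page content are illustrative assumptions. Each instruction produces a new read-only layer on top of the one before it.

```dockerfile
# Hypothetical Dockerfile for the walkthrough; base image and content
# are illustrative assumptions, not the exact file used in the demo.

# Base layer: a minimal Debian image pulled from a Docker registry.
FROM debian:jessie

# New layer: install the Apache web server on top of the base image.
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*

# New layer: add a simple landing page for the web server to serve.
RUN echo "Hello from a container" > /var/www/html/index.html

# Metadata: the container listens on port 80 (HTTP).
EXPOSE 80

# Run Apache in the foreground so the container keeps running.
CMD ["apache2ctl", "-D", "FOREGROUND"]
```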
Before we begin, make sure that you have selected one of the regions that currently supports Amazon ECR; we will go through the first-run experience, which will create the necessary ECS instance and service roles. First, I will quickly run through the configuration I have, which will be required to complete the following steps. As this is my development machine, I already have the AWS CLI and the Docker CLI installed on my workstation. If you don't have these installed, you will need to install them before you can follow along. I've also created a Dockerfile that we will use for the walkthrough.

Now that we have everything we need to start, from the Amazon EC2 Container Service dashboard, click on the blue Get Started button. If you selected a region that supports ECR, you will have two options: one to deploy a sample application onto an ECS cluster, and a second to store container images in ECR. The first thing we want to do is create an ECR repository and push an image to it, which we will use later, so uncheck the top checkbox and click Continue. On the Configure Repository page, we need to enter a unique name for our repository and click Next Step.

The Build, Tag, and Push Docker Image page notifies us that the repository has been successfully created and gives us a series of steps to push our Docker image. First, we need to retrieve the docker login command to authenticate our Docker client to the registry, by pasting the `aws ecr get-login` command from the console into our terminal window. This returns the command we need to enter, so copy that output and paste it onto the command line. This provides us with an authorization token that is valid for 12 hours. We then need to build our Docker image, making sure that we are in the same directory as our Dockerfile. This step takes a few minutes, so I'll pause the video and come back when it is completed. That has completed successfully, so the next step is to tag our image, and then we push our newly tagged image to our Amazon ECR repository and click Done to finish.

I'm then taken back to the Repositories page, where I can manage my images. At this point, before I can use my image, I need to set permissions; ECR uses AWS Identity and Access Management (IAM) to control who can access your container images. To set permissions, click on the Permissions tab, and then click Add. From here you can specify the permissions; for demo purposes I will set this to allow everyone and grant pull-only actions. To recap, so far we have created the repository, pushed the image, and set permissions.

At this point we are ready to deploy the image to ECS, but before we can do that we need a container instance. To launch the instance, we will switch back to the AWS Management Console, and from the EC2 dashboard select Launch Instance, then from the left-hand menu select Community AMIs. In the search field, enter "amazon-ecs-optimized", which will display a list of preconfigured ECS-optimized AMIs. Click Select on the top listed AMI, choose a t2.micro, and then click Next: Configure Instance Details. For the IAM role, select the ECS instance role from the drop-down menu, make sure that you have a security group rule that opens the ports the container will use, in this case HTTP port 80, and complete the steps to launch your instance.
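For reference, the console's build, tag, and push steps correspond to a short CLI sequence along these lines. The account ID, region, and repository name below are hypothetical placeholders; substitute the values shown in your own console. Note that `aws ecr get-login` was the authentication command at the time of this course; newer AWS CLI versions replace it with `aws ecr get-login-password`.

```sh
# 1. Retrieve and run the docker login command (the token is valid for 12 hours).
$(aws ecr get-login --region us-east-1)

# 2. Build the image from the Dockerfile in the current directory.
docker build -t demo-web .

# 3. Tag the image with the full ECR repository URI (placeholder account ID).
docker tag demo-web:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest

# 4. Push the tagged image to the ECR repository.
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest
```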
We will now create a task definition, which we need before we can run any task. For this, go to the EC2 Container Service dashboard, select Task Definitions, and then click Create New Task Definition. For the name, we will call it "Demo-Web", and then we will add our container definition by clicking on Add Container. We will enter a container name of "Demo-Web-Container"; in more complex environments, this name can be used to link containers together.

Next, in the Image field, we specify that we want to pull this image from our Amazon ECR repository. If you forget the full name, there are a couple of ways to retrieve it. From the terminal, press the Up key to see the last command you ran. If you've closed the terminal, you can use the CLI and type `aws ecr get-login --region` followed by the region name. Or, from the AWS console, click on Repositories, where the repository URI is listed. Once you have entered this, enter "300" for the maximum memory, and for the port mappings set both the host and container ports to port 80. We will then specify that we want to use 50 CPU units; note that there are 1,024 CPU units per core. You can adjust these values accordingly, but you need to take into account the size of the EC2 instance you have deployed. Set Essential to true, which means that if this container fails, it will stop the entire task. Click Add, and then Create. There are a number of other configuration settings, and these will be covered in the intermediate through advanced courses.

If we click on the Actions drop-down, we have the option to run a task or create a service. Which one to use depends on the function of the container we have created: running tasks is ideally suited to processes such as batch jobs that perform work and then stop, whereas a service is suited to long-running stateless services and applications. For the purpose of this demonstration, we are going to run this as a task, so we will click on Run Task. Here you can specify how many tasks you want to run; we will leave it as a single task and click Run Task again. The status of the task will change from pending to running, and if we then click on the task link and expand the task name under Containers, you will see the IP address and port that we mapped during the task definition. When we click on that, it will take us to the default landing page for the web server.

You have now seen how to use the EC2 Container Service and the EC2 Container Registry. There is a lot more to cover, which will be included in the intermediate and advanced courses that show detailed configurations for Docker, as well as how to extend the service to take advantage of third-party schedulers such as Mesos.
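The console settings above roughly correspond to a task definition like the following. This is a sketch, not the exact JSON the console generates: the image URI is a hypothetical placeholder, and the values (300 MB memory, 50 CPU units, port 80 mapped host-to-container, essential set to true) mirror the walkthrough. It could be registered from the CLI with `aws ecs register-task-definition --cli-input-json file://demo-web.json`.

```json
{
  "family": "Demo-Web",
  "containerDefinitions": [
    {
      "name": "Demo-Web-Container",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest",
      "memory": 300,
      "cpu": 50,
      "essential": true,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  ]
}
```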

About the Author
Students: 17723
Courses: 1
Learning Paths: 2

David's acknowledged hands-on experience in the IT industry has seen him speak at international conferences, operate in presales environments, and conduct design and delivery services.

David also has extensive experience in delivery operations. He has worked in the financial, mining, state government, federal government, and public sectors across Asia Pacific and the US.