Running containers on ECS Fargate
This lesson on ECS Fargate aims to simplify your view of container deployments and help you see containers as tools that give your application flexibility and predictability.
- Networking considerations prior to deploying ECS containers
- IAM security requirements for your application
- How to configure your deployment using Task definitions
- How to manage secrets, parameters, and capacity
DevOps Engineers and System Administrators
- General understanding of Docker
- Basic knowledge of AWS services, such as EC2, S3, and EFS
Let's go ahead and take a look at some of the requirements that you're going to have prior to deploying containers in AWS. I have a diagram here, so let's talk about it. Essentially, the very first thing you're going to need is a VPC. You're going to need to define your own Virtual Private Cloud, and within that VPC you're going to have to start deciding whether you'll have public subnets, whether you'll have private subnets and, if so, how many.
For example, in this diagram we have a single public subnet, which we're using for, let's say in this case, a NAT gateway, so that we can route outbound traffic from our containers as needed for patching, accessing third-party APIs, and so on. And of course, you have the NAT gateway there providing that service. Now, in this case, both of these availability zones are hosting some of your containers, and those containers are probably going to need access to AWS services as well. For example, you have an SSM endpoint.
This is a Systems Manager endpoint, for the purpose of allowing you to connect to your container host for troubleshooting. You could also use Parameter Store: you can access environment variables stored in Parameter Store directly from your containers, assuming of course they have the necessary permissions. And you also see ECR and S3 endpoints, in case you need to download certain objects or, more specifically, download images of your containers from the ECR repository.
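As a quick reference for the endpoints the diagram describes, here is a minimal sketch of the VPC endpoint service names that tasks in private subnets typically need. The region `us-east-1` is a placeholder assumption, not from the lesson.

```python
# Hedged sketch: VPC endpoint service names commonly needed so containers
# in private subnets can reach AWS services without public Internet access.
# The region "us-east-1" is an illustrative assumption.
REGION = "us-east-1"

ENDPOINTS = {
    # Systems Manager (Session Manager access, Parameter Store)
    "ssm":         f"com.amazonaws.{REGION}.ssm",
    "ssmmessages": f"com.amazonaws.{REGION}.ssmmessages",
    # Pulling container images from ECR
    "ecr_api":     f"com.amazonaws.{REGION}.ecr.api",
    "ecr_docker":  f"com.amazonaws.{REGION}.ecr.dkr",
    # ECR stores image layers in S3; this one is a gateway endpoint
    "s3":          f"com.amazonaws.{REGION}.s3",
    # Shipping container logs to CloudWatch Logs
    "logs":        f"com.amazonaws.{REGION}.logs",
}

for service_name in ENDPOINTS.values():
    print(service_name)
```

Note that the S3 endpoint is a gateway endpoint attached to the route table, while the others are interface endpoints placed in your private subnets.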
So, this is more or less an ideal environment, so be sure to define something like this. Like I said, you will ideally need private subnets. You don't want containers that are directly exposed to the Internet, where they can be attacked easily. So, you want your container hosts running in a private subnet. If you need to expose them to the outside world, you can always set up a load balancer in a public subnet and then route the traffic back to the containers running in private subnets.
Something we don't talk about very much is subnet capacity. If you're going to be running large auto-scaling groups, also consider the size of your subnets. Let me show you what I mean. I'll switch over to the console, click on VPC and then on Subnets. I'm going to select this one here that says ECS demo /24. This one has 250 available IP addresses at the moment. So, this is something to keep in mind if you're going to be deploying many hosts that require IP addresses in a specific private subnet: be sure you have enough available IP addresses, with room for expansion in the future. Don't lock yourself into a very small subnet, only to then say, whoops, what are we going to do now? We need to grow this somehow. At that point you'd have to rework your networking before you can even consider expansion. Let's switch over to ECR. I'll type ECR right here.
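To make the capacity math concrete: AWS reserves five addresses in every subnet, so a /24 never yields a full 256 usable hosts. This small sketch computes usable capacity for a couple of example CIDR blocks (the CIDR ranges are illustrative assumptions).

```python
import ipaddress

# AWS reserves 5 addresses per subnet: network address, VPC router,
# DNS, one "future use" address, and the broadcast address.
AWS_RESERVED = 5

def usable_ips(cidr: str) -> int:
    """Return the number of IP addresses AWS leaves usable in a subnet."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED

print(usable_ips("10.0.1.0/24"))  # 251 usable; tasks and endpoints consume these
print(usable_ips("10.0.0.0/20"))  # 4091 usable; roomier if you expect to scale
```

A /24 subnet starts with 251 usable addresses, which matches the roughly 250 shown in the console once a resource or two is already using an IP.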
So, as I mentioned, you're probably not going to be running public containers that you download off the Internet. You're probably going to be writing a Dockerfile that pulls those public images and builds your own containers on top, hosting your own custom application. And of course, you need a place to store those generated artifacts, and that place is going to be the Amazon Elastic Container Registry. For example, I have this repository called ci-cd-demo, and there's an image here with the tag latest. As you can see, I have several copies because I've been generating a few of these. So, you have this one tag that's latest now, meaning that when you do your deployment, you're going to be getting this particular copy identified as the latest build. There are a lot of features here within the Elastic Container Registry.
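Here is a minimal sketch of the kind of Dockerfile described above: it pulls a public base image and layers a custom application on top. The base image tag and file names are assumptions for illustration, not from the lesson.

```dockerfile
# Hedged sketch: start from a public base image and add your own app.
# The Python base image and file names are illustrative assumptions.
FROM public.ecr.aws/docker/library/python:3.12-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and define how the container starts
COPY . .
CMD ["python", "app.py"]
```

After `docker build`, you would tag the resulting image with your ECR repository URI (for a repository like ci-cd-demo) and push it; the `latest` tag then points at that build, which is the copy your deployment pulls.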
For example, you can run vulnerability scans on your containers. You can set lifecycle policies, for example if you want to delete old copies after a while. But beyond those features, what really matters is that this is a safe and secure place for your own custom containers that is not exposed to the Internet in any way. There's one last consideration for you before moving on, and we're going to switch over to IAM for that. We're going to click on Roles within IAM, and I'm going to show you one that I have right here.
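As a sketch of the lifecycle policies just mentioned, here is one rule that expires old untagged image copies. The threshold of 5 is an arbitrary assumption for illustration; the structure follows ECR's lifecycle policy JSON schema.

```python
import json

# Hedged sketch: an ECR lifecycle policy that expires untagged image
# copies once more than 5 of them accumulate. The count of 5 is an
# illustrative assumption.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Keep only the 5 most recent untagged images",
            "selection": {
                "tagStatus": "untagged",
                "countType": "imageCountMoreThan",
                "countNumber": 5,
            },
            "action": {"type": "expire"},
        }
    ]
}

# This JSON text is what you would supply to ECR, e.g. in the console's
# lifecycle policy editor.
print(json.dumps(lifecycle_policy, indent=2))
```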
Let me see if I can find it. I think it's called Task. There we go. This is the ecsTaskExecutionRole. Right now it's quite simple: it just has the AmazonECSTaskExecutionRolePolicy and CloudWatchLogsFullAccess attached. That's all I have at the moment. But my point is, you're going to be assigning this particular IAM role to your task definition, and the task definition is essentially the configuration to deploy your containers, and this is where you're going to add the necessary permissions.
So, for example, if you want your containers to access Secrets Manager, you will attach the proper policy here so that your containers can actually reach Secrets Manager. The same goes for the Systems Manager Parameter Store, or perhaps some other AWS service like S3 that you want to access from your containers: it would need the proper permissions, and you can grant those by defining the proper authorizations right here in the ecsTaskExecutionRole. (Strictly speaking, ECS distinguishes the task execution role, used to pull images and inject secrets at container startup, from the task role your application code uses at runtime; both are set on the task definition.) That's just the name; you can name it whatever is more appropriate for your application and your naming conventions, but I decided to call it ecsTaskExecutionRole because I reuse it across several projects. That's it, really. That's all you need to know in terms of prior requirements before you go ahead and deploy your first cluster. Remember, you'll need a VPC with some subnets, ideally private subnets.
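To make the role setup concrete, here is a sketch of the trust policy that lets ECS tasks assume a role like the ecsTaskExecutionRole above, plus the managed-policy ARNs matching the two attachments from the lesson. The Secrets Manager policy is an example of an extra permission you might attach, not something the lesson's role actually has.

```python
import json

# Hedged sketch: the trust policy for a role that ECS tasks can assume.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Managed policies mirroring the lesson's role, plus one illustrative
# addition for containers that read secrets at startup.
attached_policies = [
    "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy",
    "arn:aws:iam::aws:policy/CloudWatchLogsFullAccess",
    # Example addition (assumption, not attached in the lesson):
    "arn:aws:iam::aws:policy/SecretsManagerReadWrite",
]

print(json.dumps(trust_policy, indent=2))
```

The trust policy controls *who* may assume the role (ECS tasks); the attached policies control *what* the role is allowed to do once assumed.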
You may need to define endpoints, or some other way to access your AWS services; in this case, Systems Manager, ECR, S3, or whatever your application may need. Initially, of course, if you're just learning, you don't need to define access to additional services. You just want to deploy something and get your hands dirty, which is what we're going to be doing up next.
Software Development has been my craft for over 2 decades. In recent years, I was introduced to the world of "Infrastructure as Code" and Cloud Computing.
I loved it! -- it re-sparked my interest in staying on the cutting edge of technology.
Colleagues regard me as a mentor and leader in my areas of expertise and also as the person to call when production servers crash and we need the App back online quickly.
My primary skills are:
★ Software Development ( Java, PHP, Python and others )
★ Cloud Computing Design and Implementation
★ DevOps: Continuous Delivery and Integration