This course will enable you to:
- Understand the principles and patterns associated with microservices
- Understand the principles and patterns associated with RESTful APIs
- Understand the important requirements to consider when migrating a monolithic application to a microservices architecture
- Understand the benefits of using microservices, and the associated software patterns and tools, to build microservice-based applications at speed and scale
- Understand the trade-offs between different architectural approaches
- Become familiar and comfortable with modern open-source technologies such as .NET Core, Docker, Docker Compose, Linux, Terraform, Swagger, and React
- Become familiar with Docker and with container orchestration runtimes used to host and run containers, such as Docker Compose, Amazon ECS using Fargate, and Amazon EKS
Prerequisites
- A basic understanding of software development
- A basic understanding of the software development life cycle
- A basic understanding of DevOps and CI/CD practices
- Familiarity with .NET and C#
- Familiarity with AWS
Intended Audience
- Software developers and architects
- DevOps practitioners interested in CI/CD implementation
- Anyone interested in understanding and adopting microservices and RESTful APIs within their own organisation
- Anyone interested in modernising an existing application
- Anyone interested in Docker, and containers in general
- Anyone interested in container orchestration runtimes such as Kubernetes
Welcome back. In this lecture, we'll continue using Terraform to build out our ECS Fargate cluster, which is going to host our microservices application. In the previous lecture, we set up Terraform and then used it to create the underlying networking components, such as the VPC, public subnets, private subnets, route tables, internet gateway, NAT instance, and security groups. In this lecture, we'll focus on creating the remaining AWS components in our design, such as the application load balancer and the ECS Fargate cluster, including service discovery and task definitions for each of our four Docker images. Once all the components of our design have been Terraformed, we'll be able to use our browser and navigate directly to the application load balancer, bringing up our Store2018 user interface. Okay, let's jump back into Visual Studio Code and continue building our Terraform templates.
So the next section we're going to add in is the application load balancer code. We're going to create an application load balancer. It's going to be deployed into our public subnets, and it's going to have the load balancing security group attached to it to allow incoming traffic on port 80 or port 443. Next, we create a target group. And on that target group, we specify a health_check with the path set to /healthcheck. Jumping back into the source code of our Store2018 front-end, if we look under Controllers, we have a controller named HealthCheckController. Now, all this is gonna do is return the message OK whenever it gets called. Meaning the response for the health check will be an HTTP 200, with the body set to the message OK.
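As a rough sketch, the load balancer and target group described above might look something like the following. Note this is illustrative only: the resource names and references (aws_security_group.lb, aws_subnet.public, aws_vpc.main) are assumptions, not necessarily those used in the course's actual templates.

```hcl
# Illustrative sketch only: resource names and references are assumptions.
resource "aws_lb" "alb" {
  name            = "store2018-alb"
  internal        = false
  security_groups = [aws_security_group.lb.id] # allows inbound 80/443
  subnets         = aws_subnet.public.*.id     # deployed into the public subnets
}

resource "aws_lb_target_group" "store2018" {
  name        = "store2018"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = aws_vpc.main.id
  target_type = "ip" # required when the targets are Fargate tasks (awsvpc mode)

  health_check {
    path    = "/healthcheck" # served by HealthCheckController, which returns OK
    matcher = "200"
  }
}
```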
Next, we create a front-end listener for port 80 and protocol HTTP, where the default action is to forward down to our target group. Next, we're gonna create a Route 53 A record, which maps to an alias, in this case, the application load balancer that we've just created. The name that we use is going to be conditional upon the Terraform workspace we're working in. So if it's prod, we're going to use the variable dns_prod_subdomain; otherwise, we'll use dns_staging_subdomain. If we have a look in our variables, we can see that dns_prod_subdomain is set to store2018, and dns_staging_subdomain is set to store2018test. Okay, that completes the application load balancer design. Now, let's add in the ECS and service discovery components. The first resource we're setting up here is service discovery. And in particular, we're creating a private DNS namespace inside Route 53. Now, the point of doing this is to allow our tasks, as they spin up, to register their private IP addresses within Route 53. This allows us to call a private DNS name later on to connect with any of those particular tasks, rather than having to know the private IP address ahead of time.
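The listener, the workspace-conditional A record, and the private DNS namespace just described can be sketched as follows. Again, this is illustrative: the zone variable (var.public_zone_id) and resource names are assumptions.

```hcl
# Illustrative sketch only: names, zone references, and variables are assumptions.
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.store2018.arn
  }
}

# A record aliased to the ALB; subdomain chosen per Terraform workspace
resource "aws_route53_record" "store2018" {
  zone_id = var.public_zone_id
  type    = "A"
  name    = terraform.workspace == "prod" ? var.dns_prod_subdomain : var.dns_staging_subdomain

  alias {
    name                   = aws_lb.alb.dns_name
    zone_id                = aws_lb.alb.zone_id
    evaluate_target_health = true
  }
}

# Private DNS namespace for service discovery
resource "aws_service_discovery_private_dns_namespace" "microservices" {
  name = "microservices.private"
  vpc  = aws_vpc.main.id
}
```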
The second resource is the ECS cluster itself. Later on, when we add in our task definitions, the tasks that are spun up will run within this ECS cluster. Next, we're gonna add in the first of our four microservice components. The first one we're adding in is the AccountService. For each of our microservice components, we're gonna create three resources. The first is the ECS task definition. The second is a service discovery service. And the third is an ECS service. So let's take a closer look at each of these resources. For the AccountService, we're creating an ECS task definition. It will require any containers that launch using this task definition to use Fargate. And therefore, we need to manage the Fargate CPU and memory, which we specify here and here. Inside our task definition, we specify the container definition itself. Now, for the image, we leverage the variable app_image_accountservice. If you recall, over in variables, we set this up, so this is the variable here. And it defaults to this tag name, which is the tag name in our Docker Hub jeremycookdev repository. We specify the amount of memory we want.
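The cluster and the AccountService task definition described so far might be sketched like this. The family name, cluster name, and the specific CPU/memory values are assumptions; only the Fargate requirement, the app_image_accountservice variable, and the port 80 mapping come from the lecture.

```hcl
# Illustrative sketch only: family/name and sizing values are assumptions.
resource "aws_ecs_cluster" "main" {
  name = "store2018-cluster"
}

resource "aws_ecs_task_definition" "accountservice" {
  family                   = "accountservice"
  requires_compatibilities = ["FARGATE"] # containers must launch on Fargate
  network_mode             = "awsvpc"
  cpu                      = "256" # Fargate CPU units
  memory                   = "512" # Fargate memory (MiB)

  container_definitions = <<DEFINITION
[
  {
    "name": "accountservice",
    "image": "${var.app_image_accountservice}",
    "memory": 512,
    "portMappings": [
      { "containerPort": 80, "hostPort": 80 }
    ]
  }
]
DEFINITION
}
```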
We give it a name. We set the network mode. In this case, we're using awsvpc, which is a new networking mode that assigns an elastic network interface to our container for networking purposes; the container will be assigned a private IP address from the subnet that it launches in. Next, we specify the port mappings. Now, recall that the latest version of our Docker image is built with an NGINX reverse proxy listening on port 80. So the container port here will be port 80. We're also gonna use the same port for our host. Next, within our service discovery, we're gonna register a new A record, api-account, in our service discovery private DNS namespace. What this means is that we're gonna end up with a new Route 53 record that looks like this. And what this means at runtime is that anyone who makes a request to this service can address it using this DNS name. Anyone who wants to use this service can resolve this address and get a private IP address to the task that is running behind it. Finally, we create an ECS service for our microservice. Here, we're defining the name of the service, followed by the cluster that it will run in, followed by the task definition for the container, followed by the number of tasks that we want to run, and, importantly, setting the launch type to Fargate. With ECS Fargate, AWS is gonna manage the underlying ECS cluster instances for us, which offloads a lot of administrative requirements. We specify the network configuration for our ECS service, in this case specifying both the security groups and the subnets that the service will run in.
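The service discovery service and the ECS service just walked through can be sketched as follows. The resource names and references (aws_security_group.ecs, aws_subnet.private) are assumptions; the api-account name, the Fargate launch type, and the desired count of two tasks come from the lecture.

```hcl
# Illustrative sketch only: resource names and references are assumptions.
resource "aws_service_discovery_service" "account" {
  name = "api-account" # resolves as api-account.microservices.private

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.microservices.id

    dns_records {
      ttl  = 10
      type = "A" # each task registers its private IP as an A record
    }
  }

  health_check_custom_config {
    failure_threshold = 1
  }
}

resource "aws_ecs_service" "account" {
  name            = "accountservice"
  cluster         = aws_ecs_cluster.main.id
  task_definition = aws_ecs_task_definition.accountservice.arn
  desired_count   = 2
  launch_type     = "FARGATE" # AWS manages the underlying cluster instances

  network_configuration {
    security_groups = [aws_security_group.ecs.id]
    subnets         = aws_subnet.private.*.id
  }

  # Map the service back to service discovery so tasks are findable via DNS
  service_registries {
    registry_arn = aws_service_discovery_service.account.arn
  }
}
```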
Finally, we map this service back to our service discovery, so that when tasks within the service spin up, they can be found by using DNS. Okay, that completes the setup for the AccountService. So I'm now gonna add in the InventoryService resources, followed by the ShoppingService resources. Finally, we add in the Store2018 presentation layer microservice. Now, this follows the same pattern as the previous three microservices, except for one point of difference. And that is, under the container definitions section, we need to specify our three environment variables. The first of which is the base path to the AccountService. The second is the base path to the InventoryService. And the last is the base path to the ShoppingService. The host portion of each of these URLs uses the DNS name that is automatically registered as part of service discovery in Route 53. Now, the final piece of configuration for the Store2018 microservice is within the service itself.
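The Store2018 container definition with its three environment variables might look something like the fragment below. Note that the environment variable keys and the api-inventory/api-shopping hostnames are assumptions; the lecture only shows the api-account record explicitly.

```hcl
# Illustrative sketch only: the environment variable keys and the
# api-inventory / api-shopping hostnames are assumptions.
container_definitions = <<DEFINITION
[
  {
    "name": "store2018",
    "image": "${var.app_image_store2018}",
    "memory": 512,
    "portMappings": [
      { "containerPort": 80, "hostPort": 80 }
    ],
    "environment": [
      { "name": "AccountServiceBasePath",   "value": "http://api-account.microservices.private" },
      { "name": "InventoryServiceBasePath", "value": "http://api-inventory.microservices.private" },
      { "name": "ShoppingServiceBasePath",  "value": "http://api-shopping.microservices.private" }
    ]
  }
]
DEFINITION
```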
Here, we're specifying that these service tasks, when they launch, will be automatically registered into the target group that sits behind our application load balancer. Okay, with everything in place, go ahead and save this file. And then we'll jump into our terminal and run a terraform plan, followed by a terraform apply, to create and provision the remaining infrastructure. Okay, back within our terminal, we run terraform plan. Again, we do so to validate the actions that we're about to perform without actually performing them. So we'll speed this up and check what we get at the end of it. Okay, so the plan has completed. As can be seen, we're going to add 18 new resources. We don't need to change any, and we're not destroying any. Okay, so that is as expected. Next, we'll run terraform apply. And this will actually now go ahead and create those 18 resources within AWS. We agree to the application of these changes by typing yes and pressing Enter. And now, Terraform is gonna go ahead and apply these 18 changes. We'll speed this up and wait to see what we get at the other end. Okay, excellent. So our Terraforming has completed. We've added 18 new resources.
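For reference, the target group registration mentioned at the start of this step (the one point of difference in the Store2018 ECS service) can be sketched as the following addition; the resource and container names are illustrative.

```hcl
# Illustrative sketch only: resource and container names are assumptions.
resource "aws_ecs_service" "store2018" {
  # ...name, cluster, task_definition, desired_count, launch_type,
  # network_configuration, and service_registries as for the API services...

  # Register launching tasks into the target group behind the ALB
  load_balancer {
    target_group_arn = aws_lb_target_group.store2018.arn
    container_name   = "store2018"
    container_port   = 80
  }
}
```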
Okay, let's now jump into our AWS console and see exactly what resources have just been created. We'll start off in the Elastic Container Service. Here, we can see our production ECS cluster with four services and eight running tasks. So let's jump into the cluster. We can see that we have four services, one for each of our service APIs and one for our Store2018 presentation layer. Each service has two running tasks. And all of them have been launched using Fargate to manage the ECS cluster instances. Okay, let's now go and have a look at Route 53 and check out our service discovery. Clicking on Hosted Zones, we can see we now have a microservices.private zone. Drilling into it, we can see that for each service we have two A records registered, one for each task that has spun up, tracking the private IP address that has been assigned to the elastic network interface bound to each task. Okay, let's go back to the ECS cluster dashboard. And on Services, let's update the Store2018 service. This time, we'll double the number of tasks from two to four. Clicking Next Step. Next Step again. Next Step again. And finally, Update Service. Okay, we'll go back and view the service.
And what we should see is that an additional two tasks are spun up. Clicking on Tasks, we reload. And here, we have two additional tasks in provisioning status. If we reload again, we now have all four tasks in running status. Going back to Route 53, take note, before we do a refresh, that the app-store2018 A record is currently tracking two private IP addresses. Now, refreshing, you can see that service discovery has added an additional two records, one for each of the two new tasks, tracking the two new private IP addresses. So here, you can see the power of service discovery and how it can be used quite effectively within your designs. Finally, back within our browser, let's attempt to navigate to the front-end. We do so by typing http://store2018.democloudinc.com, which should resolve to our application load balancer. And this is the correct result. Here, we can see that we're navigating to our microservices architecture, which is now fully hosted on AWS using ECS Fargate and service discovery for the back-end. Okay, that completes this lecture. Go ahead and close it, and we'll see you shortly in the next one.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).