The course is part of this learning path
Store2018 Build and Deployment
This course will enable you to:
Understand the principles and patterns associated with microservices
Understand the principles and patterns associated with RESTful APIs
Understand important requirements to consider when migrating a monolithic application into a microservices architecture
Understand the benefits of using microservices and associated software patterns and tools to build microservice based applications at speed and scale
Understand tradeoffs between different architectural approaches
Become familiar and comfortable with modern open source technologies such as .NET Core, Docker, Docker Compose, Linux, Terraform, Swagger, and React
Become familiar with Docker and container orchestration runtimes used to host and run containers, such as Docker Compose, Amazon ECS using Fargate, and Amazon EKS
A basic understanding of software development
A basic understanding of the software development life cycle
A basic understanding of DevOps and CI/CD practices
Familiarity with .NET and C#
Familiarity with AWS
Software Developers and Architects
DevOps practitioners interested in CI/CD implementation
Anyone interested in understanding and adopting microservices and RESTful APIs within their own organisation
Anyone interested in modernising an existing application
Anyone interested in Docker, and Containers in general
Anyone interested in container orchestration runtimes such as Kubernetes
- Welcome back! In this lecture we are going to perform the next evolution of our microservices architecture, whereby we uplift it and host it on cloud infrastructure. In this case, we are going to go with AWS as our cloud provider, and host it in the Elastic Container Service and, in particular, use Fargate as a managed service that will maintain and run our cluster instances for us. Additionally, we will use Service Discovery, which is a new feature that comes with ECS. Service Discovery allows us to find and discover ECS tasks as they spin up. In this case, we are able to use private DNS names that will resolve to the IP addresses assigned to the elastic network interfaces which are bound to each of the tasks.
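To make the Service Discovery idea concrete, here is a minimal Terraform sketch of a private DNS namespace plus one discovery service. The namespace name `store2018.local`, the resource labels, and the `aws_vpc.main` reference are illustrative assumptions, not the course's exact source:

```hcl
# Sketch only: a private Route 53 namespace for ECS Service Discovery.
# The namespace name and resource labels are assumed for illustration.
resource "aws_service_discovery_private_dns_namespace" "store2018" {
  name        = "store2018.local"
  description = "Private namespace for discovering ECS tasks"
  vpc         = aws_vpc.main.id   # assumes a VPC resource named "main"
}

resource "aws_service_discovery_service" "accountservice" {
  name = "accountservice"   # resolvable as accountservice.store2018.local

  dns_config {
    namespace_id = aws_service_discovery_private_dns_namespace.store2018.id
    dns_records {
      ttl  = 10
      type = "A"   # A records resolve to the private IPs of each task's ENI
    }
  }
}
```

With this in place, other services inside the VPC can reach the account service at a stable private DNS name, while ECS keeps the underlying task IPs registered as they come and go.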
At the front end of our AWS ECS cluster, we'll provision an Application Load Balancer. This will allow us to route and load balance traffic across our ECS cluster tasks. For each component in our design, we'll spin up two tasks. Therefore, all calls coming in will be round-robined over those two tasks. Also in this lecture, we're going to leverage Terraform. If you are unfamiliar with Terraform, it is an infrastructure-as-code, or programmable infrastructure, tool that allows you to declaratively specify your infrastructure. So let's get started. This time we will use Visual Studio Code to create our Terraform templates. Okay, back within our terminal in the project root folder, we will create a new directory called "terraform". We will navigate into this directory and then we will start up Visual Studio Code within it. Okay, from here we will add three files. This one will be called "main.tf". The second one will be called "outputs.tf". And the third one will be called "variables.tf".
Okay. So to keep the demonstration flowing along as quickly as possible, I'll paste in blocks of code and then discuss what each block accomplishes. Starting with the variables.tf file, we'll paste in all of the variables that will be used by Terraform, in particular within the main.tf file. Let's cover a few of the more important variables. Firstly, we have aws_service_discovery_namespace_name. This tracks the name of a private zone that will be created within Route 53, and into which the private IPs allocated to the ENIs assigned to any of the tasks that are provisioned will be registered. Now, towards the bottom of this file, we track four variables, one for each of the Docker images that we previously built and registered within Docker Hub. So the first one we have is for the account service, and we see the name of the Docker image. Likewise for the inventory service, shopping service, and finally the store2018 frontend Docker image.
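A hedged sketch of what variables.tf might look like for the variables just described; the variable names follow the narration, but the default values and image names are placeholders, not the course's actual values:

```hcl
# Sketch of variables.tf; defaults are illustrative placeholders.
variable "aws_service_discovery_namespace_name" {
  description = "Name of the private Route 53 zone used for service discovery"
  default     = "store2018.local"   # assumed value
}

# One variable per Docker image previously pushed to Docker Hub.
variable "account_service_docker_image" {
  description = "Docker Hub image for the account microservice"
  default     = "example/store2018.accountservice:latest"   # placeholder
}

variable "inventory_service_docker_image" {
  description = "Docker Hub image for the inventory microservice"
  default     = "example/store2018.inventoryservice:latest" # placeholder
}
```

The shopping service and store2018 frontend images would follow the same pattern, giving main.tf a single place to look up each image when registering task definitions.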
As you will later see, we reference these variables in the main.tf file. In particular, we reference these variables when we go to register our task definitions within ECS. Okay, now let's jump into our main.tf file and begin to declare our AWS infrastructure. The first thing we need to do is specify how Terraform itself will track state in terms of what operations have completed, what infrastructure exists, etc. To do so, we configure Terraform with a backend of S3 and a bucket called terraform.state.microservices.net. So we need to go and create this bucket. Jumping over to S3, we specify the bucket name and then click create to create the bucket. Okay, that's completed successfully. Following on from this, we will paste in a large section of Terraform configuration that will create our networking resources within AWS. We won't go into this in too much detail, but for summary purposes, what we are doing is setting up a VPC. Inside the VPC, we create some subnets, private and public. We specify an internet gateway. We set up a route. We provision an elastic IP address, which gets assigned to the NAT gateway. We spin up a NAT gateway. We establish a private route table.
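The backend declaration described above might look roughly like this; the bucket name comes from the narration, while the `key` and `region` values are assumptions:

```hcl
# Sketch of the S3 backend block. Bucket name is from the lecture;
# key and region are assumed for illustration.
terraform {
  backend "s3" {
    bucket = "terraform.state.microservices.net"
    key    = "state.tfstate"
    region = "us-east-1"   # assumed region
  }
}
```

When workspaces are in use, Terraform stores each environment's state under its own prefix within this bucket, which is why a prod and a staging sub-folder show up later in the demo.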
And finally we specify security group rules. Okay, at this stage, let's save main.tf, and then we will jump into our terminal, run Terraform, and provision our network. Back within our terminal, the first thing I'll do is establish the AWS credentials that Terraform will use to give it permissions to launch infrastructure within AWS. The way I like to work is to establish shell environment variables. As you can see here, these variables will track the access key, the secret key, and the default region for AWS. I won't show you the ones I am using here, for security purposes, but in the background these will have been established. The next thing I will do is run terraform init to initialize the current directory as a Terraform directory.
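The environment variables mentioned above take this shape; the values below are placeholders only, never real credentials:

```shell
# Placeholder values for illustration -- never commit real credentials.
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_DEFAULT_REGION="us-east-1"
```

Terraform's AWS provider picks these up automatically, so no credentials need to appear in the .tf files themselves.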
This will also set up our S3 backend, which will track our Terraform state, and additionally will download any third-party providers that Terraform will use to provision our AWS infrastructure. Okay, there it is, initialized successfully. Next, we will take a look at the Terraform workspace configuration. Terraform workspaces allow you to set up different environments on AWS, so we can reuse the same Terraform templates but instead spin up a staging environment and/or a production environment and/or a test environment. So if we do terraform workspace list, this will show us the currently configured workspaces. Okay, next we'll again run terraform workspace. This time we will create a new workspace called "prod". This workspace will represent a production environment that we spin up on AWS. And likewise, we will repeat the command and create a second workspace called "staging", which will represent a staging environment on AWS.
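One way the same templates can serve both workspaces is to reference `terraform.workspace` inside the configuration. This is a sketch of the idea only; the lecture's variables.tf uses per-environment variables such as vpc_cidr_prod, whereas the map below is an alternative pattern shown for illustration:

```hcl
# Sketch: selecting per-environment values from the active workspace.
variable "vpc_cidr" {
  type = map(string)
  default = {
    prod    = "192.168.0.0/16"   # matches the CIDR seen later in the demo
    staging = "172.16.0.0/16"    # assumed value for staging
  }
}

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr[terraform.workspace]

  tags = {
    Name = terraform.workspace   # e.g. the VPC appears as "prod" in the console
  }
}
```

Because the workspace name drives both the CIDR lookup and the Name tag, switching workspaces and re-applying produces a parallel, independently-tracked environment.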
Then finally, we'll go back and select "prod" as the current workspace. So in this case, when we do a terraform apply, the infrastructure that gets created will be created within this workspace, representing the production environment. Okay. Now we do a terraform plan, which will tell us what will be created without actually creating it. This gives us a chance to validate what we are about to do without doing it. This will report to us all of the resources that Terraform will provision if we then go ahead and do a terraform apply. So as can be seen here, the plan shows that we're about to add seventeen resources. Okay, let's now go ahead and create these resources on AWS by running terraform apply. Okay. We type "yes" to proceed.
And at this stage, Terraform will begin provisioning resources on AWS. We'll speed this up and see what we get at the other end. So at this stage, we have successfully created a VPC in AWS. Let's now drop into the console and have a look at what was created. So in the VPC dashboard, if we click on VPCs, we can see, for example, that a VPC named "prod" has been created with a CIDR block of 192.168.0.0/16. This corresponds to the CIDR block range as specified in our variable vpc_cidr_prod. Finally, let's jump back into S3 and take a look at what's in our state bucket. So navigating down to terraform.state.microservices.net, we can see that we have a prod sub-folder and a staging sub-folder.
If we navigate into the prod folder, we can see that we have a state.tfstate file. This file tracks the current state of everything that we have provisioned within AWS to date, so that the next time we run terraform apply, Terraform only applies any changes that we've introduced. Okay. That completes this lecture. Go ahead and close it, and we will see you shortly in the next one, where we continue to build out our ECS cluster.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.