AWS EKS and Kubernetes Cluster Deployment
Difficulty: Advanced
Duration: 2h 11m
Students: 1995
Rating: 4/5
Description
In this advanced course, we take a legacy monolithic .NET application and re-architect it to use a combination of cloud services to increase scalability, performance, and manageability.
 
Learning Objectives

This course will enable you to:
  • Understand the principles and patterns associated with microservices
  • Understand the principles and patterns associated with RESTful APIs
  • Understand important requirements to consider when migrating a monolithic application into a microservices architecture
  • Understand the benefits of using microservices and associated software patterns and tools to build microservice-based applications at speed and scale
  • Understand the tradeoffs between different architectural approaches
  • Become familiar and comfortable with modern open-source technologies such as .NET Core, Docker, Docker Compose, Linux, Terraform, Swagger, and React
  • Become familiar with Docker and container orchestration runtimes used to host and run containers, such as Docker Compose, Amazon ECS using Fargate, and Amazon EKS

Prerequisites

  • A basic understanding of software development
  • A basic understanding of the software development life cycle
  • A basic understanding of DevOps and CI/CD practices
  • Familiarity with .NET and C#
  • Familiarity with AWS
Intended audience
  • Software Developers and Architects
  • DevOps Practitioners interested in CI/CD implementation
  • Anyone interested in understanding and adopting microservices and RESTful APIs within their own organisation
  • Anyone interested in modernising an existing application
  • Anyone interested in Docker and containers in general
  • Anyone interested in container orchestration runtimes such as Kubernetes

Source Code

Transcript

Okay, welcome back. In this demonstration, we'll set up our tooling to allow us to communicate with and create our EKS clusters. There are three tools that we're going to install: the first is kubectl, the second is the AWS IAM Authenticator, and the third is a helpful utility called eksctl. Let's now discuss each of these three tools. For those unfamiliar with kubectl: kubectl is a command-line interface for running commands against Kubernetes clusters. The AWS IAM Authenticator is a tool that allows you to use AWS IAM credentials to authenticate against Kubernetes clusters. And eksctl provides a nice abstraction for creating clusters; it's a command-line tool that provides a very simple method for bringing clusters up.

As we'll see later on, all you need to do to bring up an EKS cluster is run eksctl create cluster, and underneath, it will take care of all of the wiring up of the various individual components. Okay, let's begin the installation. The first thing we'll do is navigate to the Getting Started with Amazon EKS guide that AWS provides. Navigating down, we'll jump to the instructions for installing kubectl. As you can see, Amazon has provided operating system-specific binaries for kubectl; since I'm running on a Mac, I can use the macOS version, or simply copy this command here. We'll jump into the terminal. I'll quickly do a directory listing, then paste the command that I just copied, and this will download the version specific to macOS. Okay, that's finished. We'll do a directory listing again, and you can see that we've downloaded the binary. Next, we'll add execute permissions to it; we do so by running chmod +x on the binary.

Do a directory listing again, and you can see now that it's got the execute permissions. From here we want to move, or copy, the kubectl binary into our /usr/local/bin/ directory. And to ensure that we run this particular version of kubectl, we need to update our PATH variable, so we'll prepend the directory that we just copied it into to the PATH environment variable. Then we can simply run the command and query its version; we only want the client version portion, and here we see we've got version 1.10.3. Now, it's very important that you run with version 1.10 or greater of kubectl, as this version is designed to integrate with the AWS IAM Authenticator, which we'll install next. So that's a good start. Okay, we'll jump back to the AWS documentation, and the next thing we'll do is install the AWS IAM Authenticator. Again, AWS provides operating system-specific versions of this binary, and because we're running on macOS, I'll just copy this command here. I'll jump back into my terminal, paste the command, and this will download the AWS IAM Authenticator locally.
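For reference, the kubectl and AWS IAM Authenticator steps from this demo look roughly like the following on macOS; the download URLs and the 1.10.3 version string reflect the 2018-era EKS guide, so treat them as illustrative:

    # Download the EKS-vended kubectl for macOS (URL/version from the 2018 guide; illustrative)
    curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/darwin/amd64/kubectl
    chmod +x ./kubectl                       # add execute permissions
    cp ./kubectl /usr/local/bin/kubectl      # copy into /usr/local/bin
    export PATH=/usr/local/bin:$PATH         # prepend so this version is found first
    kubectl version --client                 # expect client version 1.10 or greater

    # Same pattern for the AWS IAM Authenticator
    curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.10.3/2018-07-26/bin/darwin/amd64/aws-iam-authenticator
    chmod +x ./aws-iam-authenticator
    cp ./aws-iam-authenticator /usr/local/bin/
    aws-iam-authenticator help               # confirm it runs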

I'll do a directory listing, and we can see we've downloaded the AWS IAM Authenticator. I'll add execute permissions to it; again, that's now executable. We'll again copy it into /usr/local/bin/, and because we updated the PATH environment variable previously, we know this will run when we call it. So now we can test it by running aws-iam-authenticator with the help parameter, and here you can see that this particular binary has run successfully. Finally, to install the eksctl utility, we can do brew install weaveworks/tap/eksctl. Here you can see that I'm being informed that I've already got this installed, as expected, but I can upgrade it by running brew upgrade eksctl, which I'll do. Okay, that's completed successfully, so we can now test out eksctl by running it with the help parameter, and again, you can see that that's completed, so the binary has been successfully installed. Okay, the next prerequisite that we need to ensure has been installed is the AWS CLI, so we can simply run aws iam get-user to see whether we have the AWS CLI set up. It's installed on this machine, but I haven't yet configured the credentials that it operates under, so I'll do that in the background: here I'll run export AWS_PROFILE, set to the profile that I've got configured. If I rerun that command, I can see now that I'm authenticated against my AWS account, so everything is configured locally. I can now go back to the eksctl command and run help again; I can see that there's a get subcommand to check for particular resources. So one final thing we'll do in this demonstration is to run eksctl get clusters, and this should return back empty because I don't have any EKS clusters set up.
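In shell form, the eksctl install and the AWS CLI checks look like this; the profile name is a placeholder for whatever you have configured in ~/.aws/credentials:

    brew install weaveworks/tap/eksctl   # or `brew upgrade eksctl` if already installed
    eksctl help                          # confirm the binary works
    aws iam get-user                     # fails until credentials are configured
    export AWS_PROFILE=my-profile        # placeholder; use your own named profile
    aws iam get-user                     # now returns your IAM user details
    eksctl get clusters                  # expect: "No clusters found"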

Okay, so that succeeded, and as expected it's come back with no clusters found. We're going to use the eksctl tool, which we installed in the previous demonstration, to do all of the heavy lifting for us. So before we start, let's just quickly review how eksctl is used to create clusters. On their website, the parameters that can be used are very well documented. You can create a cluster simply by running eksctl create cluster, and that will kick off with a number of defaults: it will provision 2x m5.large nodes for the workers, it will use the official AWS AMI image, and placement will be into the us-west-2 Oregon region. Beyond that, you can customize the provisioning process for your cluster further; for example, you can specify a custom name for your cluster and the number of worker nodes that you want. Another interesting thing you can do is use auto scaling for the worker nodes: in that case, you set --nodes-min to, say, three, and at the other end you set --nodes-max to five, and that will create an auto scaling group for the worker nodes that scales in and out between three and five.
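As a sketch, the default and auto-scaling variants look like this:

    # All defaults: 2x m5.large workers, official EKS AMI, us-west-2
    eksctl create cluster

    # Auto scaling worker group that scales between three and five nodes
    eksctl create cluster --nodes-min 3 --nodes-max 5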

Okay, let's jump into the terminal and we'll begin the process. We'll type eksctl create cluster, we'll give it a custom name (we'll call ours cloudacademy-k8s), we'll put it in the Oregon region, we'll specify the SSH key that we'll use to SSH onto the worker nodes, and finally we'll specify the number of worker nodes that we want, in this case four, and that the worker node type will be m5.large. Okay, so kicking that off, in the background eksctl will start provisioning the Kubernetes cluster, and straight away we start to get some feedback. A couple of interesting things you'll notice: eksctl is using CloudFormation, and specifically it's going to launch two CloudFormation stacks. Right now it's launching the first of the two stacks, and this is for the provisioning of the AWS Kubernetes managed service control plane, which will contain the Kubernetes master nodes. Once this CloudFormation stack completes, eksctl will then kick off the provisioning, using CloudFormation again, to create the worker nodes and join them into our cluster. So we'll let that bake; it will take about ten minutes.
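The full command used here looks roughly like this; the key pair name is a placeholder for an EC2 key pair already created in the region:

    # --ssh-public-key names an existing EC2 key pair (placeholder)
    eksctl create cluster \
      --name cloudacademy-k8s \
      --region us-west-2 \
      --ssh-public-key my-keypair \
      --nodes 4 \
      --node-type m5.large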

Okay, excellent, that has fully completed, and we now have an AWS managed service Kubernetes cluster. Looking at the timings, you can see that we started roughly at 13:30 and we finished at 13:45, so that's about 15 minutes for the end-to-end process to complete. It's not instantaneous but, having said that, to have a fully working Kubernetes cluster created in 15 minutes is still something to be very happy about. So again, reviewing the output, a couple of things that we should take note of: this particular cluster stack creates the managed service control plane into which the Kubernetes master nodes are provisioned.

The second stack is the worker node stack, into which our four worker nodes will be created and provisioned. Down here you can see each of the four nodes, and also that eksctl has updated our kube config file with the connection information for our cluster, so let's take a look at this file. You can see here that we have a cluster, the certificate authority data has been pasted in, and we've got the server endpoint. At this stage we can now simply run kubectl, and we could do get services; kubectl will have been configured to use the cluster that we've just provisioned, and here you can see that we've got output from our AWS managed service Kubernetes cluster, which is an excellent outcome. Again, we can re-run the same command, and this time we'll add in --all-namespaces, and here we can see a couple of services that run as part of the cluster. Okay, let's jump over into the AWS console, and the first thing we'll do is take a look at CloudFormation. In here we should see the two CloudFormation stacks that were created, and indeed we do: the first one, again, is for the control plane into which the master nodes are provisioned, and the second one creates the worker nodes that are then joined into the cluster. Okay, let's now take a look at the EKS console. We navigate into it, click on Clusters, and here we can see the cloudacademy-k8s cluster that we just created, so we'll click it. Here we can see all of the specific settings for the cluster itself; in particular, we've got the API server endpoint and the certificate authority. Now, jumping back into the terminal, if we have a look at the .kube/config file again, you'll see that the certificate authority data here is the exact piece of data that is represented there, and likewise with the API server endpoint. This is the beauty of the eksctl tool: it performs all this wiring and plumbing for us, so that we don't have to manually configure the config file.
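For reference, the verification commands are:

    kubectl get services                   # default namespace; confirms connectivity
    kubectl get services --all-namespaces  # includes the cluster's own services (e.g. kube-dns)
    cat ~/.kube/config                     # endpoint + certificate-authority-data written by eksctl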

The end result is that this information is used to perform both the connection and the authentication to the Kubernetes cluster. Here we can see the cluster name and the user, where this user, under the users section, uses the AWS IAM Authenticator, and in doing so is able to establish authentication against the Kubernetes cluster; once that is complete we can then perform kubectl commands against it. Jumping back into the AWS console, let's now go to the EC2 service, where we'll be able to see our worker nodes. If we order by name, you can see here that we've got our four worker nodes, that they are m5.large, and that they're distributed across each of the availability zones in the VPC. Now, the VPC that hosts these worker nodes was created as part of the eksctl create cluster command. Selecting the first worker node and taking a closer look at it, we can see that it has a private IP of 192.168.149.200 and has been provisioned with many secondary private IPs, all of which are bound to the first ethernet interface, eth0. All of these secondary private IPs will be used by the AWS Kubernetes CNI plugin, and they will be allocated to each of the pods that spin up on this particular worker node. Jumping back to the terminal, let's take a look at the resources that were created for each stack. We need to give it the stack name, so here we're using the AWS CLI, and the stack name we can retrieve from the output of the create cluster command. We'll take the first one, pipe it out to jq, and we also need to specify a region. Here you can see all of the resources that were created: an AWS EKS cluster, some security groups, an internet gateway, an IAM policy, a route, a route table, subnet route table associations, some subnets, and the VPC that hosts all of them.
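The stack query follows this shape; the stack name shown here just follows eksctl's eksctl-&lt;cluster&gt;-cluster naming convention, so treat it as illustrative and use the name from your own eksctl output:

    aws cloudformation describe-stack-resources \
      --stack-name eksctl-cloudacademy-k8s-cluster \
      --region us-west-2 \
      | jq '.StackResources[] | .ResourceType'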

So in the second one, which is our node group, let's take a look: we take the name of the node group stack, enter, and this time we're creating some security group egress rules, ingress rules, an auto scaling group for the nodes, an instance profile, a launch configuration, an IAM policy, and then a security group. That gives you some background as to what the eksctl create cluster command actually does and how it does it. Before we do the deployment, let's go over the internals of the architecture. As seen here, the microservices-based architecture is composed of four containers: three backend service containers, each exposing a RESTful API (the accounts service, the inventory service, and the shopping service), and a frontend presentation layer container. Within the same project we have an additional method for doing the frontend presentation rendering, in this case using React: navigating to the same host, but with the path /home/react, we'll use React on the frontend, meaning the browser will make AJAX calls directly back to our service components, and we'll also demonstrate this. Here we can see that our store2018 microservices application is declared among a number of Kubernetes YAML files, so let's open this up in Visual Studio Code. We'll start off in the deployment folder and look at our backend services, in this case the accounts service. Here we can see that we've declared our accounts service as a Deployment kind. It's given a name, in this case store2018-accountservice; it's given an app label set to accountservice; we're requesting three replicas matching on the app label set to accountservice; and then the particular image that we're going to launch is declared here, together with the container port set to port 80. So, let's jump back into the terminal, and we'll launch this as a deployment on our EKS Kubernetes cluster, as sketched below. We'll jump into the deployment folder and run the command kubectl apply -f, for file, store2018.accountservice.yaml. Now if we run kubectl get deployments, we should see our accountservice deployment; here we've set the desired count to three, and currently we're running three. If we do a get pods, you can see the three pods that make up this particular deployment, so that's a great result. Next, we'll launch the other two backend services: again, kubectl apply -f for the inventory service, followed by the shopping service, and then finally the presentation layer frontend service. At this stage, we've launched four deployments, and each deployment requests three pods, so if we run get deployments again, you'll see that we've got our four deployments, each with three pods. So we're looking good. The next thing we'll do is jump back into Visual Studio Code, and this time we'll open the store2018.yaml file.
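Here is a minimal sketch of the accounts service Deployment described above, applied from a heredoc; the image name is illustrative, since the course pulls from its own registry:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: store2018-accountservice
      labels:
        app: accountservice
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: accountservice
      template:
        metadata:
          labels:
            app: accountservice
        spec:
          containers:
          - name: accountservice
            image: cloudacademy/store2018-accountservice:latest  # illustrative image
            ports:
            - containerPort: 80
    EOF
    kubectl get deployments   # expect desired 3 / current 3
    kubectl get pods          # the three accountservice pods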

The reason being, we want to focus on understanding how the store2018 presentation layer knows how to contact the downstream service components. The presentation layer makes synchronous calls to our downstream services based on three environment variables that we're passing in: ACCOUNT_SERVICE_API_BASE, INVENTORY_SERVICE_API_BASE, and SHOPPING_SERVICE_API_BASE. So when the store2018 pods launch, they'll make downstream synchronous calls to each of these service pods using the environment values declared here. Now, there's nothing special in that, apart from the fact that the host name is actually a service discovery name managed by Kubernetes, and that is the point of interest here. For the ACCOUNT_SERVICE_API_BASE, it's making a call to store2018-accountservice, which is a service-discovery-managed endpoint. If we copy this value here and go back to the equivalent account service YAML file, you'll notice that it is this name here; likewise, the store2018 inventory service name comes from the equivalent inventory service YAML file.
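The wiring looks roughly like this inside the frontend Deployment (the values are assumed), and you can verify that the Service name resolves in-cluster with a throwaway pod:

    # Env vars as they would appear in the store2018 Deployment spec (assumed values):
    #   env:
    #   - name: ACCOUNT_SERVICE_API_BASE
    #     value: http://store2018-accountservice/api
    # Verify Kubernetes service discovery resolves that name inside the cluster:
    kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup store2018-accountservice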

Okay, next, let's take a look at the way we declare our Kubernetes services under the service folder. A Service within Kubernetes is used to expose the backend deployment as a publicly accessible endpoint. We're exposing port 80, which is then forwarded on to the container port, port 80, and we use a type of LoadBalancer. With this particular declaration, in the background EKS will launch a classic ELB, to which we can form connections. Okay, let's jump back into the terminal and launch our services: we navigate into our service folder and then launch each of our services by running kubectl apply -f, once for each of the service files. This will result in four services being launched on Kubernetes. Now if we run kubectl get services, we should see each of the services that we've just launched, where each is of type LoadBalancer.
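A sketch of what one of those Service manifests plausibly looks like, based on the description above:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: store2018-accountservice
    spec:
      type: LoadBalancer      # EKS provisions a classic ELB behind the scenes
      selector:
        app: accountservice
      ports:
      - port: 80              # ELB-facing port
        targetPort: 80        # forwarded to the container port
    EOF
    kubectl get services      # each service shows type LoadBalancer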

Now, if we rerun the command with the -o wide parameter, we'll be able to see the full external IP address. In this case, the external IP is represented by the DNS name generated by AWS, as highlighted here for the store2018 service, which also has an internal cluster IP address of 10.100.108.130. Let's jump over into AWS and take a look at the load balancers. Clicking on Load Balancers, here we can see that AWS in the background, as part of the EKS managed service, has launched our four ELBs. Let's jump back into the terminal, and at this stage let's do a curl against the accounts service: we'll copy this portion, and we'll go to /api/consumers/5 to get the fifth consumer. Excellent, we've got our result, so what we're doing here is simulating a call to the accounts service. Next we'll confirm the inventory service is up and running: again we run curl against the /api endpoint, and this time we call products, and here we're getting a list of the products that we sell on our e-commerce website. Let's rerun that same command but pipe it out to jq to get some formatting on it; here we can see the result, much clearer now, when we call the inventory API. And finally, let's confirm the shopping service deployment ELB: curl, http, /api/cart, and we'll go for 5. Excellent, so we got our results, returned as JSON, for the shopping cart API. Again, if we pipe this out to jq, this time adding the silent flag, you can see the result of calling this particular API: the shopping cart API shows us the items that the user has put into their shopping cart. So everything is looking good.
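The curl checks follow this pattern; the ELB hostnames are placeholders for the EXTERNAL-IP values reported by kubectl get services -o wide:

    curl http://<accounts-elb-dns>/api/consumers/5        # accounts service
    curl http://<inventory-elb-dns>/api/products | jq .   # inventory service, formatted
    curl -s http://<shopping-elb-dns>/api/cart/5 | jq .   # shopping cart, silent flag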

The final service that we need to test is a call to the frontend. Let's take the DNS name for the presentation container, as exposed by an ELB, jump into our browser, and simply paste it here. If everything goes well we should get our presentation layer, which we do, which is an absolutely fantastic result. Here the presentation layer has been server-side rendered using ASP.NET Core, in which backend calls have been made synchronously to the backend microservice components that we just tested in the terminal. Altogether, this is an absolutely fantastic result, and shows the real power of building a microservices architecture leveraging containers. Jumping back into our architecture diagrams, this particular one is the deployment that we've just done on EKS, where we've got four containers: we have the presentation layer container, and that makes synchronous calls to our backend services. If you recall, we mentioned that we also had a secondary implementation for the presentation layer, one which uses a React-based model, whereby the frontend, as rendered within the browser, makes direct AJAX calls back to our service components. So let's take a look at this. This particular implementation exists on the path /home/react, so we'll go back to our frontend, and this time we navigate to /home/react. This is the result the first time we make a call to it: we're missing a couple of things, namely our products and the logged-on consumer. So why is that? Let's jump into our developer tools to examine the network calls, and we'll do a reload. Here we can see that the frontend React AJAX-generated network calls have failed, and the likely issue, shown here, is that the DNS name has not resolved. Now, in this particular setup, I'm using Route 53, and I have these DNS names already registered; I need to go into Route 53 and update them as pointers to the ELBs that the EKS managed service created for us. So let's jump back into our terminal and get the address for the accounts service. This is the ELB address; we'll copy that and jump into AWS Route 53.
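The record updates that follow are done in the console; with the Route 53 CLI they would look roughly like this, where the hosted zone ID, record name, and ELB hostname are placeholders:

    aws route53 change-resource-record-sets \
      --hosted-zone-id Z1EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "api-accounts.democloudinc.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "<accounts-elb-dns>"}]
          }
        }]
      }'
    dig api-accounts.democloudinc.com   # once the old record's TTL expires, expect the ELB name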

On Hosted zones we'll go to democloudinc.com. This is the particular Route 53 record that we need to update, so in the value box we'll paste in the ELB address and click Save Record Set. Then we need to repeat this for the inventory service: again, we'll go back to the terminal, copy the ELB name that EKS generated for us for the inventory service, go back to Route 53, update the api-products DNS record, and save the record set. Now we need to give this a little bit of time for the previous records to expire. We'll take a copy of one of our records, jump into our terminal, and test to see whether it resolves to our new ELB. Here in the answer section it's resolving to afbf, and if we scroll up, our account one should be going to afbfe, so it looks good. Likewise, we'll now repeat this for the products inventory endpoint, and this one resolves to a03011, and that looks good. Okay, so we can now test the full implementation: curl api-accounts.democloudinc.com/api/consumers/5, with the silent flag, piped out to jq, and that has worked, so that is a good result. We'll also test it for products, remembering that the products path is /products. Excellent, that is all working, so we'll jump back into our browser, go to our frontend, and do a reload. Again, when we examine our AJAX calls, we can see that something's not quite right: we've managed to resolve the names, and we know that's working based on the tests that we just did in the terminal, but for some reason we're not getting our result back. If we click on one of these links and look at the headers, the reason is that we're actually making our requests to an SSL, or HTTPS, endpoint, and we haven't actually set up HTTPS on our ELBs, so we need to do that to fix this last problem. Let's jump back into the terminal and run kubectl get services again, with -o wide. For the accounts service, we'll copy this part and go back to EC2. Under Load Balancers, we'll search for this particular load balancer; this is our account service load balancer. We'll go to Listeners, and we can see here that we're quite clearly only set up to listen on port 80 for HTTP calls, so let's edit this. We'll use SSL (Secure TCP), and the reason we need to do this is that the instance protocol has to be TCP on the backend. We use the same instance port, which EKS manages for us on our cluster, and we'll pick an SSL certificate that we've already provisioned with ACM, in this case our wildcard certificate. Okay, we'll save that and close. We'll repeat this also for the inventory service: we'll copy this string here, and this is our inventory ELB; edit, add, choose SSL, 443, TCP on the backend, same port, and the same wildcard ACM certificate; save, close. And then finally, for each ELB, we need to update the corresponding security group on it to allow 443 incoming connections: edit, add rule, HTTPS from everywhere, save; and likewise for the other ELB: inbound, edit, add, HTTPS, 443, from everywhere, save.
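For reference, the same listener and security group changes can be scripted; the ELB name, instance port, certificate ARN, and security group ID are placeholders:

    # Add an SSL listener on 443 that forwards to TCP on the backend instance port
    aws elb create-load-balancer-listeners \
      --load-balancer-name <account-service-elb-name> \
      --listeners "Protocol=SSL,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=<instance-port>,SSLCertificateId=<acm-wildcard-cert-arn>"

    # Allow inbound 443 on the ELB's security group
    aws ec2 authorize-security-group-ingress \
      --group-id <elb-security-group-id> \
      --protocol tcp --port 443 --cidr 0.0.0.0/0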

Okay, so if we jump back to the terminal, this time we'll go to HTTPS, enter. Excellent, so now we're able to make HTTPS connections via the ELB to our backend services. We'll test it for the products endpoint as well, and again, that has worked. So, finally, we'll jump back into our frontend, back into Network, we'll clear, and we'll do a reload, and this time we see that our AJAX calls return a 200 on both of them. If we drill in, we can see the response. Excellent, we'll close that down. If we look at the user interface, in the top right-hand corner we have the user, which is provided by the accounts service, and we have our products for sale, which come from our inventory service, well, the products API endpoint. So again, this is a great result, and shows that we've launched a microservices application using two different user interface rendering techniques: one using server-side ASP.NET Core rendering, and the other using client-side React rendering. Jumping back into our architecture diagrams, what we've just completed configuring is this particular version of our store2018 application, where the frontend is using React to make client-side AJAX calls, via the AWS EKS-created ELBs, all the way through to our backend microservice components as launched and deployed on our EKS cluster. Okay, that completes this demonstration. Go ahead and close it, and we'll see you shortly in the next one.

About the Author
Students: 142970
Labs: 69
Courses: 109
Learning Paths: 209

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).