Deploying a Microservices Application into EKS

The course is part of this learning path

DevOps Engineer – Professional Certification Preparation for AWS
Overview
Difficulty: Beginner
Duration: 58m
Students: 744
Rating: 4.4/5

Description

The Introduction to AWS EKS course is designed to equip those with a basic understanding of web-based software development to quickly launch a new EKS Kubernetes cluster and then deploy, manage, and measure applications running on it.

In this course, you will learn a range of new skills: from understanding how EKS implements and deploys clusters into a VPC and leverages ELBs to expose Kubernetes services, to gaining the ability to use, control, manage, and measure an application deployed on an EKS Kubernetes cluster.

This course is made up of four in-depth demonstrations that, by the end of the course, will enable you to deploy an end-to-end microservices web-based application into an EKS Kubernetes cluster.

Learning Objectives

  • Understand the basic principles involved with launching an EKS Kubernetes cluster.
  • Set up the EKS client-side tooling required to launch and administer an EKS Kubernetes cluster.
  • Learn how to use the eksctl tool to create, query, and delete an EKS Kubernetes cluster.
  • Follow basic kubectl commands to create, query, and delete Kubernetes Pods and Services.
  • Explain how EKS implements and deploys clusters into a VPC and leverages ELBs to expose Kubernetes services.
  • Learn how to author and structure K8s definition files using YAML.
  • Gain experience in how to deploy an end-to-end microservices based web application into an EKS Kubernetes cluster.
  • Be able to use, control, manage, and measure an application deployed on an EKS Kubernetes cluster.

Prerequisites

  • High-level understanding of web-based software development.
  • Knowledge of Docker and Containers.
  • Prior experience in microservice architectures.

Intended Audience

  • Software Developers.
  • Container and Microservices Administrators and Developers.
  • Cloud System Administrators and/or Operations.

Source Code: Store2018 Microservices

 

Source Code: Store2018 EKS Kubernetes Deployment Files

 


Transcript

- [Narrator] Okay, welcome back. In this demonstration, we're going to use an existing microservices project. This time we're going to take the same architecture, and launch it on the EKS cluster that we built in the previous demonstration. The sample microservices architecture presents an e-commerce store. So before we do the deployment, let's go over the internals of the architecture. As seen here, the microservices-based architecture is composed of four containers: three backend service containers, each exposing a RESTful API, and a front end presentation layer container. The three backend services are the account service, the inventory service, and the shopping service. Each of the three backend service components has an Nginx reverse proxy embedded, so calls to the RESTful API are made to port 80, which is then proxied to an internal port, port 8000. The front end Store2018 container implements server-side rendering, using .NET Core, and in particular, ASP.NET Razor templates.
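The embedded reverse proxy arrangement just described might look something like the following Nginx sketch. Only the two ports come from the narration; the internal bind address and forwarded headers are assumptions:

```nginx
# Hypothetical sketch of the embedded reverse proxy described above.
# The service process listens internally on port 8000; Nginx accepts
# API traffic on port 80 and forwards it to the internal port.
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```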

So here, when a user makes a URL call to store2018.democloudinc.com, the request goes to the Store2018 UI presentation layer component, which in turn makes synchronous calls to the backend service components. Within the same project, we have an additional method for doing the front end presentation rendering. In this case we're using React. So navigating to the same host, but with the path /home/react, will use React on the front end, meaning the browser will make AJAX calls directly back to our service components. We'll also demonstrate this. Finally, before we begin with our deployment into EKS, all of the Docker container images have already been prebuilt and uploaded into Docker Hub, which acts as our container registry. Okay, from here let's jump into the terminal, and begin our deployment into our EKS cluster. So here we can see our Store2018 microservices application is declared amongst a number of Kubernetes YAML files.

So let's open this up in Visual Studio Code. We'll start off in the deployment folder, and we'll look at our backend services. In this case, we're looking at the account service. Here we can see that we have declared our account service as a Deployment kind. It's given a name, in this case store2018-accountservice. It's given an app label, set to accountservice. We're requesting three replicas, matching on the app label, set to accountservice. And then the particular image that we're going to launch is declared here, together with the containerPort set to port 80. So, let's jump back into the terminal, and we'll launch this as a deployment on our EKS Kubernetes cluster. We'll jump into the deployment folder. And then we'll run the command kubectl apply -f, for file, store2018.accountservice.yaml. Enter. So now if we run the command kubectl get deployments, we should see our accountservice deployment. Here we've set the desired count to three, and currently we're running three. If we do a get pods, you can see the three pods that make up this particular deployment. So that's a great result. Next, we'll launch the other two backend services. So again, kubectl apply -f, we'll do the inventoryservice, followed by the shoppingservice. And then finally, the presentation layer front end service. So at this stage we've launched four deployments, and each deployment requests three pods.
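Putting the fields called out in the walkthrough together, the account service Deployment would look roughly like this sketch. The name, app label, replica count, and container port come from the demonstration; the image name and apiVersion are illustrative placeholders:

```yaml
# Sketch of the account service Deployment described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store2018-accountservice
  labels:
    app: accountservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: accountservice
  template:
    metadata:
      labels:
        app: accountservice
    spec:
      containers:
      - name: accountservice
        image: example/store2018-accountservice:latest  # hypothetical image
        ports:
        - containerPort: 80
```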

So again if we run get deployments, you'll see that we've got our four deployments, each with three pods. So we're looking good. The next thing we'll do is jump back into Visual Studio Code, and this time we'll open up the store2018.yaml file. The reason is that we want to focus on understanding how the front end knows to contact the downstream service components. So the store2018 presentation layer is making synchronous calls, and is contacting our downstream services, based on three environment variables that we're passing in: the account_service_api_base, the inventory_service_api_base, and the shopping_service_api_base. So when the store2018 pods launch, they'll make downstream synchronous calls to each of these service pods, using the environment values declared here. Now there's nothing special in that, apart from the fact that the host name is actually a service discovery name, managed by Kubernetes. That is the point of interest here.
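The environment section of the store2018 container spec might look like the following sketch. The variable names come from the walkthrough (shown uppercase here by convention); the exact values are assumptions, pointing at the Kubernetes service-discovery names of the backend Services:

```yaml
# Sketch of the env block inside the store2018 Deployment's container spec.
env:
- name: ACCOUNT_SERVICE_API_BASE
  value: "http://store2018-accountservice"    # Kubernetes service-discovery name
- name: INVENTORY_SERVICE_API_BASE
  value: "http://store2018-inventoryservice"  # hypothetical exact value
- name: SHOPPING_SERVICE_API_BASE
  value: "http://store2018-shoppingservice"   # hypothetical exact value
```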

So for the account_service_api_base, it's making a call to store2018-accountservice, which is a service discovery managed endpoint. If we copy this value here, and we go back to the equivalent accountservice yaml file, you'll notice that that is this name here. Likewise, the store2018 inventoryservice comes from this name within the equivalent inventoryservice yaml file. Okay next, let's take a look at the way we declare our Kubernetes services, under the service folder. So a service within Kubernetes is used to expose the backend deployment as a publicly accessible endpoint. We're exposing port 80, which is then forwarded on to the container port, port 80, and uses a type of LoadBalancer. So with this particular declaration, in the backend, EKS will launch a classic ELB, to which we can then form connections. Okay, let's jump back into the terminal, and launch our services. So, navigate into our service folder. And then we'll launch each of our services. Do so by running kubectl apply -f, and we do this once for each of these service files. Okay, so this will result in four services being launched on Kubernetes. Now if we run kubectl get services, we should see each of our services that we've just launched, where each is of type LoadBalancer. Now, if we rerun the command, but with the -o wide parameter, we'll be able to see the full external IP address.
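The Service declaration just walked through could be sketched as follows. The type, ports, and service name come from the narration; the selector is an assumption matching the Deployment's app label:

```yaml
# Sketch of a Service of type LoadBalancer as described above:
# port 80 exposed, forwarded to container port 80. The metadata name
# doubles as the service-discovery name referenced by the front end.
apiVersion: v1
kind: Service
metadata:
  name: store2018-accountservice
spec:
  type: LoadBalancer
  selector:
    app: accountservice
  ports:
  - port: 80
    targetPort: 80
```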

So in this case, the external IP, as represented by the DNS name generated by AWS, is highlighted here for the store2018 service. It has an internal cluster IP address of 10.100.108.130. Let's jump over into AWS, and we'll take a look at the load balancers. Clicking on Load Balancers. Here we can see that AWS, in the background, as part of the EKS managed service, has launched our four ELBs. Let's jump back into the terminal. And at this stage, let's now do a curl against the accountservice. So we'll copy this portion. We'll go to API. And consumers/5, to get the fifth consumer. Excellent. So we've got our result. So what we're doing here is simulating a call to the accountservice. Next we'll confirm that the inventoryservice is up and running. So again we go curl http, API, and this time we call products. And here we're getting a list of products that we sell on our e-commerce website. Let's rerun that same command, but we'll pipe it out to jq, to get some formatting on it. So here we can see the result much more clearly now, when we call the inventory API. And finally, let's confirm the shoppingservice deployment ELB. Curl http, /API/cart, and go for five. Excellent. So we got our results, returned as JSON for the shopping cart API. Again if we pipe this out to jq, this time we'll add in the silent flag, and again you can see the result of calling this particular API. So calling the shopping cart API shows us the items that this user has put into their shopping cart. So everything is looking good. The final service that we need to test is a call to the front end. So, let's take the DNS name for the presentation container, as exposed by an ELB, and we'll jump into our browser. And we'll simply paste it here. And if everything goes well, we should get our presentation layer, which we do. Which is an absolutely fantastic result.
So here the presentation layer has been server-side rendered using ASP.NET Core, in which backend calls have been made synchronously to our backend microservice components that we had just tested in the terminal.

So, altogether, this is an absolutely fantastic result, and shows the real power of building a microservices architecture leveraging containers. So jumping back into our architecture diagrams, this particular one is the deployment that we've just done on EKS, where we've got four containers. We have the presentation layer container, and that makes synchronous calls to our backend services. If you recall, we mentioned that we also had a secondary implementation for the presentation layer, one which uses a React-based model, whereby the front end, as rendered within the browser, makes direct AJAX calls back to our service components. So let's take a look at this. So this particular implementation exists on the path /home/react. So we'll go back to our front end, and this time we'll navigate to home/react. So this is the result, the first time we make a call to this. So here we're missing a couple of things. We're missing our products, and we're missing the logged on consumer.

So why is that? So let's jump into our Developer Tools, to examine the network calls. So we'll do a reload. So here we can see that the front end React, AJAX generated network calls have failed, and that the likely issue, as shown here, is the DNS name has not resolved. Now, in this particular setup, I'm using Route 53, and I have these DNS names already registered. I need to go into Route 53, and update them as pointers to the ELBs that the EKS managed service created for us. So let's jump back into our terminal, and we'll get the address for the accountservice. So this is the ELB address. We'll copy that. We'll jump into AWS Route 53. On hosted zones, we'll go to democloudinc.com. So this is the particular Route 53 record that we need to update. So in the Value box, we'll paste in the ELB address. Clicking Save Record Set. And then we need to repeat this for the inventoryservice. So again we'll go back to the terminal. We'll copy the ELB name that EKS generated for us for the inventoryservice. Back to Route 53. And then updating the API-products DNS record. Save Record Set. Now we need to give this a little bit of time for the previous records to expire. We'll take a copy of one of our records, and we'll jump into our terminal.
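The manual Route 53 edit just performed could also be captured declaratively. Here is a hedged CloudFormation-style sketch of one such record: the hosted zone and record name follow the demonstration, while the ELB DNS value and TTL are illustrative placeholders:

```yaml
# CloudFormation sketch of pointing a custom API hostname at the
# ELB DNS name that EKS generated (values are illustrative).
ApiAccountsRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: democloudinc.com.
    Name: api-accounts.democloudinc.com.
    Type: CNAME
    TTL: "300"
    ResourceRecords:
      - afbfexample-123456789.us-west-2.elb.amazonaws.com  # placeholder ELB name
```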

And we'll test to see whether it resolves to our new ELB. So here in the answer section, it's resolving to afbf, and if we scroll up, our account one should be going to afbfe. So that looks good. Likewise, we'll now repeat this for the products inventory endpoint. And this one resolves to a03011. And, that looks good. Okay. So we can now test the full implementation. API-accounts.democloudinc.com/api/consumers/5. Silent flag. Pipe it out to jq. And that has worked. So that is a good result. And we'll also test it for products.

Remembering the products path is /products. Excellent. So that is all working. So we'll jump back into our browser. We'll go to our front end. And we'll do a reload. So again, when we examine our AJAX calls, we can see something's not quite right. We've managed to resolve the names, and we know that's working, based on the test that we just did in the terminal. But, for some reason, we're not getting our result back. So if we click on one of these links, and we look at the headers, the reason is that we're actually making our request to an SSL, or HTTPS, endpoint. Now we haven't actually set up HTTPS on our ELB, so we need to do that to fix this last problem.
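One way to avoid editing ELB listeners by hand in the console is to declare SSL termination on the Service itself; the in-tree AWS cloud provider supports annotations for this on a Service of type LoadBalancer. A hedged sketch, with a placeholder ACM certificate ARN (the transcript instead performs these steps manually in the EC2 console):

```yaml
# Declarative alternative to the manual listener edits: terminate SSL
# on port 443 at the ELB using an ACM certificate, with TCP to the
# backend instance port, mirroring the console configuration.
apiVersion: v1
kind: Service
metadata:
  name: store2018-accountservice
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:123456789012:certificate/EXAMPLE  # placeholder ARN
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: accountservice
  ports:
  - port: 443
    targetPort: 80
```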

So let's jump back into the terminal, and we'll run kubectl get services again, -o wide. And then for the accountservice, we'll copy this part. We'll go back to EC2. So under Load Balancers, we'll search for this particular load balancer. So this is our accountservice load balancer. We'll go to Listeners. We can see here that we're quite clearly only set up to listen on port 80, for HTTP calls. So let's edit this. We'll use SSL Secure TCP. And the reason we need to do this is that the instance protocol has to be TCP on the backend. We use the same instance port, which EKS manages for us on our cluster. And we'll pick an SSL certificate that we've already provisioned with ACM, in this case our wildcard certificate. Okay, we'll save that. And we'll go close. We'll repeat this also for the inventoryservice. So in this case we'll copy this string here. And this is our inventory ELB. Edit, add, choose SSL, 443, TCP on the backend. Same port.

And same wildcard ACM certificate. And save. Close. And then finally for each ELB, we need to update the corresponding security group on it, to allow incoming 443 connections. Edit, add rule. HTTPS, from everywhere, save. And likewise for the other ELB. Inbound, edit, add. HTTPS, 443, from everywhere, save. Okay, so if we jump back to the terminal, and this time we'll go to https. Enter. Excellent, so now we're able to make HTTPS connections via the ELB, to our backend services. We'll test it for the products endpoint as well. And again, that has worked. So, finally, we'll jump back into our front end. Jump back to Network, all clear. And we'll do a reload. And this time we see that our AJAX calls have a 200 on both of them. If we drill in, we can see the response. Excellent. We'll close that down. So if we look at the user interface, in the top right hand corner, we have the user, which is provided from the accountservice, and we have our products for sale, which comes from the inventoryservice, or the products API endpoint. So again, this is a great result, and shows that we've launched a microservices application using two different user interface rendering techniques.

One using server-side ASP.NET Core rendering, and the other using client-side React rendering. So again, jumping back into our architecture diagrams, what we've just completed configuring is this particular version of our Store2018 application. So again here, the front end is using React to make client-side AJAX calls, via the AWS EKS created ELBs, all the way through to our backend microservice components, as launched and deployed on our EKS cluster. Okay, that completes this demonstration. Go ahead and close it, and we'll see you shortly in the next one.

About the Author


Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.