Performance Testing, Teardown, and Cleanup

This course is part of the learning path: DevOps Engineer – Professional Certification Preparation for AWS

Overview
Difficulty: Beginner
Duration: 58m

Description

The Introduction to AWS EKS course is designed to equip those with a basic understanding of web-based software development to quickly launch a new EKS Kubernetes cluster and to deploy, manage, and measure its attributes.

In this course, you will learn a range of new skills: from understanding how EKS implements and deploys clusters into a VPC and leverages ELBs to expose Kubernetes services, to being able to use, control, manage, and measure an application deployed into an EKS Kubernetes cluster.

This course includes 4 in-depth demonstrations that, by the end of the course, will enable you to deploy an end-to-end microservices web-based application into an EKS Kubernetes cluster.

Learning Objectives

  • Understand the basic principles involved with launching an EKS Kubernetes cluster.
  • Learn how to set up the EKS client-side tooling required to launch and administer an EKS Kubernetes cluster.
  • Learn how to use the eksctl tool to create, query, and delete an EKS Kubernetes cluster.
  • Follow basic kubectl commands to create, query, and delete Kubernetes Pods and Services.
  • Explain how EKS implements and deploys clusters into a VPC and leverages ELBs to expose Kubernetes services.
  • Learn how to author and structure K8s definition files using YAML.
  • Gain experience in deploying an end-to-end microservices-based web application into an EKS Kubernetes cluster.
  • Be able to use, control, manage, and measure an application deployed into an EKS Kubernetes cluster.

Prerequisites

  • High-level understanding of web-based software development.
  • Knowledge of Docker and Containers.
  • Prior experience in microservice architectures.

Intended Audience

  • Software Developers.
  • Container and Microservices Administrators and Developers.
  • Cloud System Administrators and/or Operations.

Source Code: Store2018 Microservices

Source Code: Store2018 EKS Kubernetes Deployment Files

AWS Credential Management 

The terminal-based demonstrations provided within this course use the AWS_PROFILE environment variable to specify a named profile for AWS authentication. For more information regarding how this is set up and managed, read the following documentation:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
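
As a minimal sketch (the profile name below is purely illustrative), a named profile lives in ~/.aws/credentials and is selected for the session via the AWS_PROFILE environment variable:

    # ~/.aws/credentials -- define a named profile (name is illustrative)
    [eks-demo]
    aws_access_key_id = <your-access-key-id>
    aws_secret_access_key = <your-secret-access-key>

    # Select the profile for subsequent AWS CLI, eksctl, and kubectl calls
    export AWS_PROFILE=eks-demo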

Transcript

- [Instructor] Okay, welcome back. In this demonstration, we'll do a few things before we eventually tear down our EKS cluster and perform a number of cleanup tasks. The first thing we'll do is perform some port forwarding to one of our particular pods. Jumping into the terminal, we'll first run the command kubectl get pods to get a listing of our pod names. Next, we'll copy the name of the first accounts service pod. We'll do kubectl port-forward with the pod name, and we'll forward local port 8000 to port 80, which is the container port. Okay, so you can see that port forwarding has activated. With this in place, we'll split the screen vertically, and then we'll run curl against localhost on port 8000 with the path api/consumers/5, and there you can see that we've got our response back, even though we went to localhost.
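
The commands for this port-forwarding step look roughly like the following; the pod name is a placeholder copied from the kubectl get pods output, so yours will differ:

    # List the pods to find the accounts service pod name
    kubectl get pods

    # Forward local port 8000 to container port 80 on that pod
    kubectl port-forward <accounts-service-pod-name> 8000:80

    # In a second terminal, hit the forwarded local port
    curl localhost:8000/api/consumers/5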

And the reason this works is because we've set up a rule to forward port 8000 through to port 80, and kubectl does this for us behind the scenes. So this is a very useful command when you want to debug and understand the behavior of particular pods, even if you don't have direct network access to them. We'll close that, and we'll shut down the port forwarding with a Control + C. The next thing we'll do is run kubectl get services -o wide. And then from here we'll run kubectl exec -ti, and this time we'll take the presentation layer pod name. Paste, and then we'll call the command /bin/bash. What this will do is jump us onto this particular pod and give us a bash terminal. Excellent. So we're now logged onto this particular pod, and we can perform our directory listing.
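
A sketch of this exec step, assuming a presentation layer pod name taken from the earlier pod listing (the -- separator is the idiomatic way to pass the command):

    # Show the services, including cluster IPs and ports
    kubectl get services -o wide

    # Open an interactive bash shell inside the presentation layer pod
    kubectl exec -ti <presentation-layer-pod-name> -- /bin/bash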

We can take a look at the nginx config file. And here you can see it's been set up to listen on both port 80 and port 443, and that it's got an SSL certificate already wired into it. What we'll do next is perform a tail on the /var/log/nginx/access.log file. From here we'll split the screen again. We'll run a curl -I, and we'll copy the DNS name for the ELB that fronts this particular service. We'll paste it over here, run it, and we get our 200 back, and we can see that the curl request has hit our nginx access log. So that's great. The next thing we'll do is run Apache Benchmark, and we'll hit it 100 times with a concurrency of 10. And again we use the ELB DNS name. So we'll kick this off, and it won't work. Why is that? Remembering that Apache Benchmark needs a full path to test, we need an extra slash. Okay, so that's working. And then over in the tail we're getting all of our GET requests coming through from Apache Bench, and that's completed. And as just seen, the performance of EKS, even with elastic load balancers fronting our Kubernetes services, is still pretty good. Let's rerun the Apache Benchmark. This time we'll do 1000 requests, 20 at a time. We'll kick it off, and our tail is updating already. We've completed 100 requests, 200 requests, 300 requests. So we're getting through it pretty quickly. And the load that we're putting on our EKS cluster has now completed.
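
Approximately, the commands in this load-testing segment are as follows; the ELB hostname is a placeholder, and note the trailing slash that Apache Benchmark (ab) requires as a full path:

    # Inside the pod: watch requests arrive in real time
    tail -f /var/log/nginx/access.log

    # From the local machine: confirm the ELB responds with a 200
    curl -I http://<elb-dns-name>/

    # 100 requests at a concurrency of 10
    ab -n 100 -c 10 http://<elb-dns-name>/

    # Heavier run: 1000 requests, 20 at a time
    ab -n 1000 -c 20 http://<elb-dns-name>/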

So this time let's take a closer look at the metrics that came back as part of our benchmarking. Here you can see that we've done 1000 requests with a concurrency level of 20, and that the EKS cluster performed exceedingly well: the mean time per request is less than a second, at 875 milliseconds. Okay, let's close down this window. We'll exit out of our tail, and we'll exit out of the pod. At this stage, we've done everything we want to with our EKS cluster, and we'll begin the teardown process. Before we do that, let's just see what the status is for our deployments. Again we'll run kubectl get deployments. As expected, we have our four deployments that we rolled out in the previous demo. So let's now delete all of these with kubectl delete deployments --all.
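
The deployment cleanup amounts to the following sketch:

    # Confirm the four deployments are still in place
    kubectl get deployments

    # Delete them all; Kubernetes also terminates their pods
    kubectl delete deployments --all

    # Verify that no pods remain
    kubectl get pods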

So in the background, Kubernetes will delete these deployments, and if we take a look at the pods, we should have no more pods. Excellent. Okay, let's now have a look at our services. They will still exist, as they exist outside of the deployments. To remove all of these services, we run kubectl delete services --all, and again Kubernetes will delete all of the services. This will also result in each of the four ELBs being deleted. So if we jump into the AWS console and go to our load balancers, we can see that these are starting to be removed. Okay, excellent. All of our ELBs have now been deleted as a consequence of running the command kubectl delete services --all. Okay, so at this stage what we've got left to delete is the EKS cluster and the worker nodes. So the next thing we'll do is run eksctl get cluster, followed by eksctl delete cluster with the cluster name, which is returned here. We've only got one cluster to delete. In the background, this command will kick off two CloudFormation stack deletes.
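
As a sketch, the service and cluster teardown looks like this (the cluster name is whatever eksctl get cluster returns; in this demo it is cloudacademy.k8s):

    # Delete all services; the four ELBs are removed as a side effect
    kubectl delete services --all

    # Find the cluster name, then tear the cluster down
    eksctl get cluster
    eksctl delete cluster --name=<cluster-name>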

The first one is for the worker nodes, or the node group, and the second one is for the EKS cluster itself, which will delete the control plane hosting the master nodes. While we're waiting for this, let's jump into CloudFormation, and we'll also take a look at the EC2 instances view. And here we can see that all of the EKS worker nodes have indeed been terminated. We'll jump back to the console, and we can see that this has already finished. So if we jump over into CloudFormation and do a refresh, we can see that the EKS cluster stack is in a delete-in-progress status. Let's now take a look at the EKS console view. And here we can see that our cloudacademy.k8s cluster is in a deleting status as well. This will typically take about five to 10 minutes.

So we'll give it some time before we confirm that everything has indeed gone. Okay, excellent, the EKS cluster has been successfully removed. We'll take a final look at CloudFormation. We can now see that all CloudFormation stack deletes have completed successfully. So we'll jump back into our terminal and run eksctl get cluster. We expect this to return no clusters, and it does. So we've completed our teardown and everything is now clean. One last piece of housekeeping remains. If you recall, we've got a .kube/config file which contains our connection and authentication data. Since we have torn down our cluster, this is no longer required, and we can simply remove it. Okay, so that completes this demonstration. Go ahead and close it, and we'll see you shortly in the next one.
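
The final verification and housekeeping, sketched:

    # Confirm no clusters remain
    eksctl get cluster

    # Remove the now-stale connection and authentication data
    rm ~/.kube/config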

About the Author

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.