Performance Testing, Teardown, and CleanUp
Difficulty
Advanced
Duration
1h 27m
Students
62
Ratings
3/5
Description

This course covers the core learning objectives needed to meet the requirements of the 'Designing Compute solutions in AWS - Level 3' skill.

Learning Objectives:

  • Evaluate and enforce secure communications when using AWS elastic load balancers using HTTPS/SSL listeners
  • Evaluate when to use a serverless architecture compared to Amazon EC2 based upon workload performance optimization
  • Evaluate how to implement fully managed container orchestration services to deploy, manage, and scale containerized applications
Transcript

- [Instructor] Okay, welcome back. In this demonstration, we'll do a few things before we eventually tear down our EKS cluster and perform a number of cleanup tasks. The first thing we'll do is set up port forwarding to one of our pods. So jumping into the terminal, we'll first run kubectl get pods to get a listing of our pod names. Next, we'll copy the name of the first accounts service pod. We'll run kubectl port-forward with that pod name, forwarding local port 8000 to port 80, which is the container port. Okay, you can see that port forwarding has activated. With this in place, we'll split the screen vertically, run curl against localhost on port 8000, and request api/consumers/5, and there you can see that we've got our response back, even though we went to localhost.
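For reference, here's a rough sketch of the commands used in this step. The pod name is a placeholder; copy the actual accounts service pod name from the kubectl get pods output.

```bash
# List the pods and copy the name of an accounts service pod
kubectl get pods

# Forward local port 8000 to the pod's container port 80
# (the pod name below is a placeholder)
kubectl port-forward accounts-service-5d8f7c9b4-abcde 8000:80

# In a second terminal, hit the forwarded port
curl localhost:8000/api/consumers/5
```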

And the reason this works is because we've set up a rule to forward port 8000 through to port 80, and kubectl does this for us behind the scenes. So this is a very useful command when you want to debug and understand the behavior of particular pods, even if you don't have direct network access to them. We'll close that and shut down the port forwarding with Control + C. The next thing we'll do is run kubectl get services -o wide. And then from here we'll run kubectl exec -ti, and this time we'll take the presentation layer pod name, paste it in, and then call the command /bin/bash. What this will do is jump us onto this particular pod and give us a bash terminal. Excellent. We're now logged onto this particular pod, so we can perform our directory listing.
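As a sketch of this step, with a placeholder name for the presentation layer pod:

```bash
# List the services, including their external (ELB) addresses
kubectl get services -o wide

# Open an interactive bash session inside the presentation layer pod
# (the pod name below is a placeholder)
kubectl exec -ti presentation-6c7d9f5b8-xyz12 -- /bin/bash

# Once inside the pod, list the directory contents
ls -l
```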

We can take a look at the nginx config file. And here you can see it's been set up to listen on both port 80 and port 443, and that it's got an SSL certificate already wired into it. What we'll do next is perform a tail on the /var/log/nginx/access.log file. From here we'll split the screen again. We'll run a curl -I against the DNS name for the ELB that fronts this particular service. We'll paste it over here, run it, and we get our 200 back, and we can see that the curl request has shown up in our nginx access log. So that's great. The next thing we'll do is run Apache Benchmark, and we'll hit it 100 times with a concurrency of 10, again using the ELB's DNS name. So we'll kick this off, and it won't work. Why is that? Remember that Apache Benchmark needs a full path to test, so we need a trailing slash. Okay, so that's working. Over in the tail we're getting all of our GET requests coming through from Apache Bench, and that's completed. And as we've just seen, the performance of EKS, even with Elastic Load Balancers fronting our Kubernetes services, is still pretty good. Let's rerun the Apache Benchmark, this time with 1000 requests, 20 at a time. We'll kick it off, and our tail is updating already. We've completed 100 requests, 200 requests, 300 requests, so we're getting through it pretty quickly. And that's the load we're putting on our EKS cluster completed.
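Here's a sketch of the commands from this step. The ELB DNS name is a placeholder; use the one returned by kubectl get services -o wide, and note that the nginx config path may differ in your image.

```bash
# Inside the pod: inspect the nginx config and follow the access log
cat /etc/nginx/conf.d/default.conf   # config path is an assumption; may differ in your image
tail -f /var/log/nginx/access.log

# In a second terminal: confirm the service responds through its ELB
curl -I http://example-elb-1234567890.us-west-2.elb.amazonaws.com/

# Apache Benchmark: 100 requests, concurrency of 10
# Note the trailing slash -- ab requires a full path
ab -n 100 -c 10 http://example-elb-1234567890.us-west-2.elb.amazonaws.com/

# Re-run with 1000 requests, 20 at a time
ab -n 1000 -c 20 http://example-elb-1234567890.us-west-2.elb.amazonaws.com/
```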

This time let's take a closer look at the metrics that came back as part of our benchmarking. Here you can see that we've done 1000 requests with a concurrency level of 20, and that the EKS cluster performed exceedingly well, with a mean time per request of well under a second, 875 milliseconds here. Okay, let's close down this window. We'll exit out of our tail, and we'll exit out of the pod. At this stage, we've done everything we want to with our EKS cluster, and we'll begin the teardown process. Before we do that, let's just see what the status is for our deployments. So again we'll run kubectl get deployments. As expected, we have the four deployments that we rolled out in the previous demo. So let's now delete all of these with kubectl delete deployments --all.
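A sketch of the teardown commands for the deployments:

```bash
# Confirm the four deployments from the previous demo are still present
kubectl get deployments

# Delete every deployment in the current namespace
kubectl delete deployments --all

# The pods managed by those deployments should terminate shortly after
kubectl get pods
```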

So in the background, Kubernetes will delete these deployments, and if we take a look at the pods, we should have no more pods. Excellent. Okay, let's now have a look at our services. They will still exist, as they exist outside of the deployments. To remove all of these services, we run kubectl delete services --all, and again Kubernetes will delete all of the services. This will also result in each of the four ELBs being deleted. So if we jump into the AWS console and go to our load balancers, we can see that these are starting to be removed. Okay, excellent. All of our ELBs have now been deleted as a consequence of us running kubectl delete services --all. So at this stage, what we've got left to delete is the EKS cluster and the worker nodes. The next thing we'll do is run eksctl get cluster, followed by eksctl delete cluster with the cluster name, which is returned here. We've only got one cluster to delete. In the background, this command will again kick off two CloudFormation stack deletes.
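And a sketch of the remaining teardown commands. The cluster name is the one used in this demo; substitute your own if it differs.

```bash
# Delete all services; Kubernetes will also remove the ELBs it provisioned for them
kubectl delete services --all

# Confirm the cluster name, then delete the cluster
# (this kicks off the CloudFormation stack deletes for the node group and control plane)
eksctl get cluster
eksctl delete cluster --name cloudacademy.k8s
```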

The first one is for the worker nodes, or the node group, and the second one is for the EKS cluster itself, which will delete the control plane hosting the master nodes. While we're waiting for this, let's jump into CloudFormation, and we'll also take a look at the EC2 instances view. And here we can see that all of the EKS worker nodes have indeed been terminated. We'll jump back to the console, and we can see that this is already finished. So if we jump over into CloudFormation and do a refresh, we can see that the EKS cluster stack is in a delete in progress status. Let's now take a look at the EKS console view. And here we can see that our cloudacademy.k8s cluster is indeed in a deleting status as well. This will typically take about five to 10 minutes.

So we'll give it some time before we confirm that everything has indeed gone. Okay, excellent, the EKS cluster has been successfully removed. We'll take a final look at CloudFormation, and we can now see that all of the CloudFormation stack deletes have completed successfully. So we'll jump back into our terminal and run eksctl get cluster. We expect this to return no clusters, and it does. So we've completed our teardown and everything is now clean. There's one last bit of housekeeping we can do. If you recall, we've got a .kube/config file which contains our connection and authentication data. Since we've torn down our cluster, this is no longer required, and we can simply remove it. Okay, so that completes this demonstration. Go ahead and close it, and we'll see you shortly in the next one.
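As a final housekeeping sketch, assuming the kubeconfig at the default ~/.kube/config location is only used for this cluster:

```bash
# Verify no clusters remain
eksctl get cluster

# Remove the kubeconfig holding the old connection and authentication data
rm ~/.kube/config
```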

About the Author
Carlos Rivas
Sr. AWS Content Creator
Students
820
Courses
15
Learning Paths
1

Software Development has been my craft for over 2 decades. In recent years, I was introduced to the world of "Infrastructure as Code" and Cloud Computing.
I loved it! -- it re-sparked my interest in staying on the cutting edge of technology.

Colleagues regard me as a mentor and leader in my areas of expertise and also as the person to call when production servers crash and we need the App back online quickly.

My primary skills are:
★ Software Development ( Java, PHP, Python and others )
★ Cloud Computing Design and Implementation
★ DevOps: Continuous Delivery and Integration