Creating an EKS Kubernetes Cluster
Difficulty: Intermediate
Duration: 58m
Students: 7186
Rating: 4.4/5
Description

The Introduction to AWS EKS course is designed to equip those with a basic understanding of web-based software development to quickly launch a new EKS Kubernetes cluster and then deploy, manage, and measure applications running on it.

In this course, you will learn a range of new skills: from understanding how EKS implements and deploys clusters into a VPC and leverages ELBs to expose Kubernetes services, to being able to use, control, manage, and measure an application deployed to an EKS Kubernetes cluster.

This course is made up of 4 in-depth demonstrations that, by the end of the course, will enable you to deploy an end-to-end microservices-based web application into an EKS Kubernetes cluster.

Learning Objectives

  • Understand the basic principles involved with launching an EKS Kubernetes cluster.
  • Set up the EKS client-side tooling required to launch and administer an EKS Kubernetes cluster.
  • Learn how to use the eksctl tool to create, query, and delete an EKS Kubernetes cluster.
  • Use basic kubectl commands to create, query, and delete Kubernetes Pods and Services.
  • Explain how EKS implements and deploys clusters into a VPC and leverages ELBs to expose Kubernetes services.
  • Learn how to author and structure K8s definition files using YAML.
  • Gain experience in how to deploy an end-to-end microservices-based web application into an EKS Kubernetes cluster.
  • Be able to use, control, manage, and measure an application deployed to an EKS Kubernetes cluster.

Prerequisites

  • High-level understanding of web-based software development.
  • Knowledge of Docker and Containers.
  • Prior experience in microservice architectures.

Intended Audience

  • Software Developers.
  • Container and Microservices Administrators and Developers.
  • Cloud System Administrators and/or Operations.

Source Code: Store2018 Microservices

 

Source Code: Store2018 EKS Kubernetes Deployment Files

 

AWS Credential Management 

The terminal-based demonstrations provided within this course use the AWS_PROFILE environment variable to specify a named profile for AWS authentication. For more information on how this is set up and managed, read the following documentation:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
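
As a quick illustration, the demonstrations assume something like the following has been run first (the profile name below is a placeholder for a named profile defined in your own ~/.aws/credentials file):

    $ export AWS_PROFILE=cloudacademy-demo   # named profile used by subsequent AWS CLI and eksctl calls
    $ aws sts get-caller-identity            # optional sanity check that the profile resolves to the expected identity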

 

Transcript

Okay, welcome back. In this demonstration, we're going to create our first AWS managed Kubernetes cluster. We're going to use the eksctl tool, which we installed in the previous demonstration, to do all the heavy lifting for us. So before we start, let's just quickly review how eksctl is used to create clusters. On the eksctl website, the available parameters are very well documented. You can simply create a cluster by running eksctl create cluster, and that cluster will kick off with a number of defaults: it will provision two m5.large nodes for the workers, it will use the official AWS EKS AMI, and the placement will be into the us-west-2 Oregon region.
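
For reference, a cluster with the defaults described above can be created with a single command. This is a sketch assuming a recent eksctl release; the exact defaults (node type, region) may differ between versions:

    # Launches a cluster with default settings: an auto-generated name,
    # two m5.large worker nodes built from the official EKS AMI, in us-west-2.
    $ eksctl create cluster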

Beyond that, you can further customize the provisioning process for your cluster. For example, you can specify a custom name for your cluster, and you can specify the number of worker nodes that you want. Another interesting thing you can do is enable auto scaling for the worker nodes. In that case you set --nodes-min to three, and at the other end, you set --nodes-max to five. That will create an auto scaling group for the worker nodes and will scale in and out between three and five. Okay, let's jump into the terminal and we'll begin the process. So we'll type eksctl create cluster. I'll give it a custom name; we'll call ours cloudacademy-k8s. We'll put it in the Oregon region, and we'll specify the SSH key that we'll use to SSH onto the worker nodes. And finally, we'll specify the number of worker nodes that we want. In this case we'll go with four, and we'll specify that the worker node type will be m5.large.
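
The full command used in the demonstration looks roughly like this; the cluster name and SSH key pair name are placeholders, not the exact values from the recording:

    $ eksctl create cluster \
        --name cloudacademy-k8s \
        --region us-west-2 \
        --ssh-access \
        --ssh-public-key my-ec2-keypair \
        --nodes 4 \
        --node-type m5.large

    # To create an auto scaling worker node group instead of a fixed size,
    # swap --nodes for:
    #   --nodes-min 3 --nodes-max 5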

Okay. So kicking that off, in the background eksctl will start provisioning the Kubernetes cluster, and straightaway we start to get some feedback. A couple of interesting things you'll notice: eksctl is using CloudFormation, and specifically it's going to launch two CloudFormation stacks. Right now it's launching the first of the two stacks, which provisions the AWS EKS managed service control plane containing the Kubernetes master nodes. Once this CloudFormation stack completes, eksctl will then kick off the provisioning, using CloudFormation again, to create the worker nodes and join them into our cluster. So we'll let that bake; it will take about 10 minutes. Okay, excellent. That has fully completed, and we now have an AWS managed service Kubernetes cluster. Looking at the timings, you can see that we started at roughly 13:30 and finished at 13:45, so that's about 15 minutes for the end-to-end process to complete.
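
If you want to watch the two stacks yourself while the cluster bakes, one way to do it with the standard AWS CLI (using the region from the demo) is:

    # Lists all CloudFormation stacks in the region along with their current status,
    # including the two stacks eksctl is creating.
    $ aws cloudformation describe-stacks \
        --region us-west-2 \
        --query 'Stacks[].{Name:StackName,Status:StackStatus}' \
        --output table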

So it's not instantaneous, but having said that, to have a fully working Kubernetes cluster created in 15 minutes is still something to be very happy about. Again, reviewing the output, there are a couple of things that we should take note of. This particular cluster stack creates the managed service control plane into which the Kubernetes master nodes are provisioned. The second stack is the worker node stack, into which our four worker nodes are created and provisioned. Down here you can see each of the four nodes, and also that eksctl has updated our .kube/config file with the connection information for our cluster. So let's take a look at this file. You can see here that we have a cluster, the certificate authority data has been pasted in, and we've got the server endpoint. At this stage we can simply run kubectl get services, and kubectl will have been configured to use the cluster that we've just provisioned. And here you can see that we've got output from our AWS managed service Kubernetes cluster, which is an excellent outcome. Again, we can rerun the same command, and this time we'll add --all-namespaces, and here we can see a couple of services that run as part of the cluster. Okay, let's jump over into the AWS console. The first thing we'll do is take a look at CloudFormation, and in here we should see our two CloudFormation stacks that were created, and indeed we do. The first one, again, is for the control plane into which the master nodes are provisioned, and the second one creates the worker nodes that are then joined into the cluster.
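
The kubectl commands from this part of the demo, plus one way to inspect the generated config, look roughly like this:

    $ kubectl get services                    # services in the default namespace
    $ kubectl get services --all-namespaces   # includes cluster system services such as kube-dns
    $ kubectl config view --minify            # shows the cluster and user entries eksctl wrote into ~/.kube/config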

Okay, let's now take a look at the EKS console. We navigate into it, we click on Clusters, and here we can see the cloudacademy-k8s cluster that we just created. So we'll click it. Here we can see all of the specific settings for the cluster itself; in particular we've got the API server endpoint and the certificate authority. Now, jumping back into the terminal, if we have a look at the .kube/config file again, you'll see that the certificate authority data here is the exact piece of data that is represented in the console, and likewise with the API server endpoint. And this is the beauty of the eksctl tool.
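
If you prefer the terminal to the console for this comparison, the same endpoint and certificate authority data can be pulled with the AWS CLI (the cluster name is a placeholder):

    $ aws eks describe-cluster \
        --name cloudacademy-k8s \
        --region us-west-2 \
        --query 'cluster.{endpoint: endpoint, certificateAuthorityData: certificateAuthority.data}'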

It performs all this wiring and plumbing for us, so that we don't have to manually configure the config file. The end result is that this information is used to perform both the connection and the authentication to the Kubernetes cluster. Here we can see the cluster name, and the user, where this user, under the users section, uses the aws-iam-authenticator, and in doing so is able to establish authentication against the Kubernetes cluster. Once that is complete, we can then issue kubectl commands against it. Jumping back into the AWS console, let's now go to the EC2 service, and here we'll be able to see our worker nodes. If we order by name, you can see that we've got our four worker nodes, that they are m5.large instances, and that they are distributed across the availability zones in the VPC. Now, the VPC that hosts these worker nodes was created as part of the eksctl create cluster command.
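
To see the token exchange in isolation, you can invoke the authenticator directly, which is roughly what kubectl does under the hood on every request (the cluster name is a placeholder):

    # Prints an ExecCredential JSON document containing a short-lived bearer token
    # derived from your AWS credentials (here, the AWS_PROFILE named profile).
    $ aws-iam-authenticator token -i cloudacademy-k8s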

Selecting the first worker node, if we take a closer look at it, we can see that it has a private IP of 192.168.149.200, and that it has been provisioned with many secondary private IPs, all of which are bound to the first Ethernet interface, eth0. All of these secondary private IPs will be used by the AWS Kubernetes CNI plugin, and they will be allocated to each of the pods that spin up on this particular worker node. Jumping back to the terminal, let's take a look at the resources that were created for each stack. We need to give it the stack name, so here we're using the AWS CLI, and the stack name we can retrieve from the output of the create cluster command. We'll take the first one and pipe the result out to jq. We also need to specify a region. So here you can see all of the resources that were created: an AWS EKS cluster, some security groups, an Internet gateway, an IAM policy, a route, a route table, a subnet route table association, some subnets, and the VPC that hosts all of them.
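
The command pattern used here is, approximately, the following; the stack name is whatever eksctl printed for the cluster stack in the create cluster output, so yours will differ:

    $ aws cloudformation describe-stack-resources \
        --stack-name eksctl-cloudacademy-k8s-cluster \
        --region us-west-2 \
        | jq '.StackResources[].ResourceType'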

Now for the second stack, which is our node group, let's take a look. We take the name of the node group stack and hit Enter. This time we're creating some security group egress rules and ingress rules, an auto scaling group for the nodes, an instance profile, a launch configuration, an IAM policy, and then a security group. So that gives you some background as to what the eksctl create cluster command actually does and how it does it. Okay, that completes this demonstration. Go ahead and close it, and I'll see you shortly in the next one, where we'll start using our Kubernetes cluster and start launching some resources into it.
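
The same pattern works for the node group stack, and, when you are eventually finished experimenting, eksctl can tear everything down again. Stack and cluster names are placeholders:

    $ aws cloudformation describe-stack-resources \
        --stack-name eksctl-cloudacademy-k8s-nodegroup-ng-1 \
        --region us-west-2 \
        | jq '.StackResources[].ResourceType'

    # Deletes the worker nodes, control plane, and the CloudFormation stacks behind them.
    $ eksctl delete cluster --name cloudacademy-k8s --region us-west-2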

About the Author
Students: 132607
Labs: 68
Courses: 112
Learning Paths: 183

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).