Elastic Kubernetes Service (EKS)
Elastic Load Balancing
Managing SSL certificates using AWS Certificate Manager
AWS EC2 Auto Scaling Lifecycle Hooks
The course is part of this learning path
This course covers the core learning objective to meet the requirements of the 'Designing Compute solutions in AWS - Level 3' skill
- Evaluate and enforce secure communications when using AWS elastic load balancers using HTTPS/SSL listeners
- Evaluate when to use a serverless architecture compared to Amazon EC2 based upon workload performance optimization
- Evaluate how to implement fully managed container orchestration services to deploy, manage, and scale containerized applications
Okay, welcome back. In this demonstration, we're going to create our first AWS managed Kubernetes cluster. We're going to use the eksctl tool, which we installed in the previous demonstration, to do all the heavy lifting for us. So before we start, let's quickly review how eksctl is used to create clusters. On the eksctl website, the parameters that can be used are very well documented. You can simply create a cluster by running eksctl create cluster, and that cluster will be built with a number of defaults: it will provision two m5.large nodes for the workers, it will use the official AWS EKS-optimized AMI, and it will be placed into the us-west-2 (Oregon) region.
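That zero-configuration invocation can be sketched as follows; everything about the resulting cluster comes from eksctl's built-in defaults, including an auto-generated cluster name:

```shell
# Create a cluster entirely from defaults. With no flags, eksctl will:
#   - auto-generate a cluster name
#   - provision 2x m5.large worker nodes
#   - use the official AWS EKS-optimized AMI
#   - place everything into the us-west-2 (Oregon) region
eksctl create cluster
```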
Beyond that, you can customize the provisioning process further. For example, you can specify a custom name for your cluster, and you can specify the number of worker nodes that you want. Another interesting thing you can do is enable auto scaling for the worker nodes. In this case you set --nodes-min to three and, at the other end, --nodes-max to five. That will create an auto scaling group for the worker nodes, which will scale in and out between three and five. Okay, let's jump into the terminal and we'll begin the process. So we'll type eksctl create cluster. I'll give it a custom name; we'll call ours cloudacademy-k8s. We'll put it in the Oregon region, and we'll specify the SSH key that we'll use to SSH onto the worker nodes. And finally, we'll specify the number of worker nodes that we want. In this case we'll go with four, and we'll specify that the worker node type will be m5.large.
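The command run in this demo, and the auto scaling variant just described, look roughly like this. The SSH key name is a placeholder you would substitute with your own:

```shell
# The demo invocation: a named 4-node cluster of m5.large workers in Oregon.
# <key-name> is a placeholder for an EC2 key pair in your account.
eksctl create cluster \
  --name cloudacademy-k8s \
  --region us-west-2 \
  --ssh-access --ssh-public-key <key-name> \
  --nodes 4 \
  --node-type m5.large

# Alternative: let eksctl create an auto scaling group for the workers
# that scales in and out between three and five nodes.
eksctl create cluster \
  --name cloudacademy-k8s \
  --region us-west-2 \
  --nodes-min 3 \
  --nodes-max 5
```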
Okay. So kicking that off, in the background eksctl will start provisioning the Kubernetes cluster, and straightaway we start to get some feedback. A couple of interesting things you'll notice: eksctl is using CloudFormation, and specifically it's going to launch two CloudFormation stacks. Right now it's launching the first of the two, and this will provision the AWS managed EKS control plane, which contains the Kubernetes master nodes. Once this CloudFormation stack completes, eksctl will then kick off the second stack, again using CloudFormation, to create the worker nodes and join them into our cluster. So we'll let that bake; it takes somewhere in the order of 10 to 15 minutes. Okay, excellent. That has fully completed, and we now have an AWS managed Kubernetes cluster. Looking at the timings, you can see that we started at roughly 13:30 and finished at 13:45, so that's about 15 minutes for the end-to-end process to complete.
So it's not instantaneous, but that said, having a fully working Kubernetes cluster created in 15 minutes is still something to be very happy about. So again, reviewing the output, there are a couple of things we should take note of. The first stack creates the managed control plane into which the Kubernetes master nodes are provisioned. The second stack is the worker node stack, into which our four worker nodes are created and provisioned. Down here you can see each of the four nodes, and also that eksctl has updated our .kube/config file with the connection information for our cluster. So let's take a look at this file. You can see here that we have a cluster, the certificate-authority-data has been pasted in, and we've got the server endpoint. At this stage we can simply run kubectl get services, and kubectl will have been configured to use the cluster we've just provisioned. And here you can see that we've got output from our AWS managed Kubernetes cluster, which is an excellent outcome. We can rerun the same command, and this time we'll add --all-namespaces, and here we can see a couple of services that run as part of the cluster. Okay, let's jump over into the AWS console. The first thing we'll do is take a look at CloudFormation, and in here we should see our two CloudFormation stacks that were created, and indeed we do. The first one, again, is for the control plane into which the master nodes are provisioned, and the second one creates the worker nodes that are then joined into the cluster.
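The terminal steps above boil down to three short commands:

```shell
# Inspect the kubeconfig that eksctl wrote for us
cat ~/.kube/config

# List services in the default namespace of the new cluster
kubectl get services

# List services across every namespace, including the cluster's own
# system services
kubectl get services --all-namespaces
```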
Okay, let's now take a look at the EKS console. We navigate into it, click on Clusters, and here we can see the cloudacademy-k8s cluster that we just created, so we'll click it. Here we can see all of the specific settings for the cluster itself; in particular, we've got the API server endpoint and the certificate authority. Now, jumping back into the terminal, if we have another look at the .kube/config file, you'll see that the certificate-authority-data here is exactly the data shown in the console, and likewise with the API server endpoint. And this is the beauty of the eksctl tool.
It performs all this wiring and plumbing for us, so that we don't have to manually configure the config file. The end result is that this information is used to perform both the connection and the authentication to the Kubernetes cluster. Here we can see the cluster name, and then the user, where the user section invokes the aws-iam-authenticator and in doing so is able to establish authentication against the Kubernetes cluster. Once that is complete, we can run kubectl commands against it. Jumping back into the AWS console, let's now go to the EC2 service, where we'll be able to see our worker nodes. If we order by name, you can see that we've got our four worker nodes, that they are m5.large instances, and that they are distributed across the availability zones of the VPC. Now, the VPC that hosts these worker nodes was created as part of the eksctl create cluster command.
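To make the wiring concrete, here is a minimal sketch of the kind of kubeconfig eksctl writes. The endpoint, certificate data, and authenticator arguments are illustrative placeholders, not values from this demo; the exact exec-plugin fields can vary between eksctl versions:

```shell
# Write a sample kubeconfig in the general shape eksctl produces.
# All values below are placeholders for illustration only.
cat > /tmp/sample-kubeconfig <<'EOF'
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTi...
    server: https://EXAMPLE1234.yl4.us-west-2.eks.amazonaws.com
  name: cloudacademy-k8s.us-west-2.eksctl.io
users:
- name: user@cloudacademy-k8s.us-west-2.eksctl.io
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args: ["token", "-i", "cloudacademy-k8s"]
EOF

# The user section is what hands authentication off to aws-iam-authenticator
grep -A3 'exec:' /tmp/sample-kubeconfig
```

The key design point is that the kubeconfig never stores long-lived credentials; kubectl shells out to aws-iam-authenticator, which mints a short-lived token from your IAM identity on every request.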
So, selecting the first worker node and taking a closer look at it, we can see that it has a private IP of 192.168.149.200, and that it has been provisioned with many secondary private IPs, all of which are bound to the first Ethernet interface, eth0. All of these secondary private IPs will be used by the AWS VPC CNI plugin, which allocates them to the pods that spin up on this particular worker node. Jumping back to the terminal, let's take a look at the resources that were created for each stack. Here we're using the AWS CLI, and we need to give it a stack name, which we can retrieve from the output of the create cluster command. So we'll take the first one and pipe the result through jq. We also need to specify a region. Here you can see all of the resources that were created: an AWS EKS cluster, some security groups, an Internet gateway, an IAM policy, a route, a route table, subnet route table associations, some subnets, and the VPC that hosts all of them.
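The queries just described look roughly like this. The stack names below follow eksctl's usual naming convention for our cluster, but your actual names come from the create cluster output, so treat these as assumptions:

```shell
# List the resource types in the control plane stack.
# Stack names follow eksctl's eksctl-<cluster>-cluster convention;
# substitute the names printed by your own create cluster run.
aws cloudformation describe-stack-resources \
  --stack-name eksctl-cloudacademy-k8s-cluster \
  --region us-west-2 | jq '.StackResources[].ResourceType'

# And the same for the worker node group stack
aws cloudformation describe-stack-resources \
  --stack-name eksctl-cloudacademy-k8s-nodegroup-ng-1 \
  --region us-west-2 | jq '.StackResources[].ResourceType'
```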
So on the second one, which is our node group stack, let's take a look. We take the name of the node group stack and press Enter. This time, the resources created are some security group egress rules and ingress rules, an auto scaling group for the nodes, an instance profile, a launch configuration, an IAM policy, and then a security group. So that gives you some background as to what the eksctl create cluster command actually does and how it does it. Okay, that completes this demonstration. Go ahead and close it, and I'll see you shortly in the next one, where we'll start using our Kubernetes cluster and launching some resources into it.
Software Development has been my craft for over 2 decades. In recent years, I was introduced to the world of "Infrastructure as Code" and Cloud Computing.
I loved it! -- it re-sparked my interest in staying on the cutting edge of technology.
Colleagues regard me as a mentor and leader in my areas of expertise and also as the person to call when production servers crash and we need the App back online quickly.
My primary skills are:
★ Software Development ( Java, PHP, Python and others )
★ Cloud Computing Design and Implementation
★ DevOps: Continuous Delivery and Integration