Deployment and Provisioning
In this group of lectures we run a hands-on deployment of the next iteration of the Pizza Time solution. The Pizza Time business has been a success: it needs to support more customers and wants to expand to serve a global market.
We define our new solution, then walk through a hands-on deployment that improves our scalability, availability, and fault tolerance.
Hi and welcome to this lecture.
In this lecture we will do the EC2 deployment. First we will define the goals, and then we will demo how to deploy the EC2 part: we will configure a few things on Route 53, create an ELB, configure a Launch configuration, create an Auto Scaling Group, configure our Scaling policies, and create some Scheduled actions for Auto Scaling.
So, talking about our application: it needs to be highly available, we want to have our application up and running all the time, so that is where we are focusing most of our efforts. We are going to have the Angular app running on a web distribution backed by an S3 bucket. Since S3 offers incredible durability and incredible availability, we don't need to worry about our Angular app.
Our main effort will be taking care of the EC2 instances that run our API backend. So, in this lecture we are going to deploy an Elastic Load Balancer, configure an Auto Scaling Group, and manage everything on the application side: everything that lives inside the EC2 instances, and the whole API part of our application.
So, as I said a few lectures ago, Pizza Time is a success. The business has been growing a lot, and we have franchises all over the American continent. We've noticed that we will need at least two instances in each region to handle the minimum traffic, so at least two instances must be running at all times. If the traffic grows, we can scale up and later scale back down, but we will never go below two running instances.
Also, every day between 11pm and 6am UTC, we experience high traffic. This is probably when people are getting out of work and arriving home, tired, not wanting to cook, so they order pizza. This is when we experience the most traffic and the most demand on our application.
So, we need to deal with that. We don't want just a simple scaling policy in this case, because we know in advance that this is going to happen. Instead, we are going to configure some scheduled actions.
So, every day at that time, we will add four instances in each region to handle the peak traffic. We will still configure scaling policies to handle our minimum traffic and to make sure that, if something fails, Auto Scaling will handle launching new instances in a different availability zone, and so on.
And we are also going to configure scheduled actions: we will configure Auto Scaling to have at least six instances running at our high-traffic times, just to make sure that people will receive their pizzas.
So let's now go to the AWS console and do the EC2 deployment. We can check here, in the Activity History, what has happened with our Auto Scaling Group, and we can see that we have a problem that wasn't really planned. Let me take a look at what is happening. Okay, I know what might be happening: when we created the Launch configuration, we created a new Security group, but we weren't able to specify the VPC ID associated with that Security group. A Security group must live inside a VPC. You can see here that you have the Security group ID and the VPC ID, so each Security group lives inside a VPC, and ours is currently not living in the right one, so we need to change that. I'll create a new Security group, which will be called "pizza time EC2 security group right one", and we need to provide a description: "security group for EC2". We now have the ability to select the right VPC, the Pizza Time VPC, and we can add some rules. I just want an HTTP rule allowing access to everybody.
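For reference, the console steps above map onto the API roughly like this. This is only a sketch: the group name, description, and resource IDs are placeholders standing in for the demo's values, and the comments name the boto3 EC2 client operations these parameters would be passed to.

```python
# Sketch of the console steps as API request parameters (names and IDs
# are placeholders, not real resources).

# Parameters for ec2_client.create_security_group(...). The VpcId field
# is exactly what the launch-configuration wizard didn't let us set.
create_sg_params = {
    "GroupName": "pizza-time-ec2-security-group-right-one",  # assumed slug
    "Description": "security group for EC2",
    "VpcId": "vpc-0abc1234",  # the Pizza Time VPC (placeholder ID)
}

# Parameters for ec2_client.authorize_security_group_ingress(...):
# one HTTP rule open to everybody, as in the demo.
http_rule = {
    "GroupId": "sg-0def5678",  # ID returned by create_security_group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
}
```

The key point is that the VPC is chosen at security group creation time and cannot be changed afterwards, which is why we have to create a new group rather than move the old one.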
We also need to change our Launch configuration. Again, this wasn't planned, but it makes for a good real-world example. One thing that might be a bit frustrating, although you can easily overcome it, is that you can't change a Launch configuration after it has been created.
What you can do instead is copy a particular Launch configuration, and you'll be taken to the same wizard that we went through the previous time, except you are forwarded straight to the review page, and from that page we can edit the details. So I will just rename it.
I will indicate again that this is the right one, and I also want to change the Security group of this Launch configuration, so we select the right Security group. Click on Review, Continue, and create the Launch configuration.
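Because launch configurations are immutable, "copying" one is really creating a new one with the fields you want changed. A sketch of the request parameters, with a placeholder AMI and instance type (the demo doesn't show the real ones), in the shape expected by boto3's Auto Scaling `create_launch_configuration` operation:

```python
# Parameters for autoscaling_client.create_launch_configuration(...).
# ImageId and InstanceType are placeholders; SecurityGroups points at the
# new security group created in the Pizza Time VPC.
new_launch_config = {
    "LaunchConfigurationName": "pizza-time-launch-config-right-one",  # assumed name
    "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
    "InstanceType": "t2.micro",          # placeholder type
    "SecurityGroups": ["sg-0def5678"],   # the corrected security group
}
```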
Now we close this and go to our Auto Scaling Groups. We select the Pizza Time Auto Scaling Group and edit it, and here, under Launch configuration, we can specify a new Launch configuration. So I select the right one, and I hope that things will start to get better now. Let's take a look at the Activity History and refresh it for a while.
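Pointing the Auto Scaling Group at the new launch configuration is a single update. A sketch (the group name is an assumption, not shown in the demo) of the parameters for boto3's Auto Scaling `update_auto_scaling_group` operation:

```python
# Parameters for autoscaling_client.update_auto_scaling_group(...):
# swap the group's launch configuration for the corrected copy. Existing
# instances keep their old settings; only newly launched ones use it.
asg_update = {
    "AutoScalingGroupName": "pizza-time-auto-scaling-group",  # assumed name
    "LaunchConfigurationName": "pizza-time-launch-config-right-one",
}
```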
So, okay, now we are launching new instances. We have two new instances, but they are not yet in service.
Another thing we can do here, in the Auto Scaling Group console, is define scaling policies and scheduled actions. Scaling policies are associated with a CloudWatch alarm, and you can scale up and down depending on the state of the alarm.
So, let's add a new policy. I will say that this is the Scale up policy, and we need to associate this policy with an alarm. But we don't have any alarms yet for this particular Auto Scaling Group, so we need to create a new one. I don't want to receive notifications for this alarm, and the metric that I want to evaluate is NetworkIn. I will say that when the average NetworkIn is above, for example, 11 thousand bytes for five consecutive periods, I want to scale up my Auto Scaling Group.
In this case, I will add one instance every time this threshold is met. So click on Create. I also want to add another policy, which will be the Scale down policy. Again, we need to create a new alarm, and I don't want to receive notifications. For the first alarm, we said the average had to be higher than 11 thousand bytes; this time I want the average to be less than, let's say, three thousand bytes, again for five consecutive periods. Create the alarm, and when this threshold is met, I want to remove one instance. So, every time we have fewer connections, we will remove one instance. But here, under Details, we specified that the minimum number of instances is two and the desired is two, but also that the maximum number of instances is two.
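Behind the console, this pair of policies and alarms comes down to two API operations. A sketch of the request parameters (group name, policy names, and alarm names are assumptions; the thresholds and period counts are the demo's values), in the shapes expected by boto3's Auto Scaling `put_scaling_policy` and CloudWatch `put_metric_alarm` operations:

```python
# Scale-up policy: add one instance (ChangeInCapacity = +1).
# Parameters for autoscaling_client.put_scaling_policy(...).
scale_up_policy = {
    "AutoScalingGroupName": "pizza-time-auto-scaling-group",  # assumed name
    "PolicyName": "scale-up",
    "AdjustmentType": "ChangeInCapacity",
    "ScalingAdjustment": 1,
}
# Scale-down policy: same shape, remove one instance.
scale_down_policy = dict(scale_up_policy, PolicyName="scale-down",
                         ScalingAdjustment=-1)

# High-traffic alarm: average NetworkIn above 11,000 bytes for five
# consecutive periods. Parameters for cloudwatch_client.put_metric_alarm(...).
high_traffic_alarm = {
    "AlarmName": "pizza-time-network-in-high",  # assumed name
    "Namespace": "AWS/EC2",
    "MetricName": "NetworkIn",
    "Statistic": "Average",
    "Period": 300,            # assumed 5-minute periods
    "EvaluationPeriods": 5,
    "Threshold": 11000,
    "ComparisonOperator": "GreaterThanThreshold",
    # AlarmActions would hold the PolicyARN returned by put_scaling_policy.
}
# Low-traffic alarm: same metric, below 3,000 bytes for five periods.
low_traffic_alarm = dict(high_traffic_alarm,
                         AlarmName="pizza-time-network-in-low",
                         Threshold=3000,
                         ComparisonOperator="LessThanThreshold")
```

Each alarm triggers its policy through the policy's ARN, which is how the "associate this policy with an alarm" step in the console is wired underneath.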
So, in order to scale up and down, we need to change these configurations. I still want the desired number to be two and the minimum to be two, but I'll set the maximum for this group to eight instances. So we can't launch more than eight instances with this Auto Scaling Group unless we change this configuration. And even if our NetworkIn stays below three thousand bytes the whole time, we will never have fewer than two instances running, because we set it that way.
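The capacity limits are the same `update_auto_scaling_group` call sketched earlier, this time with the size fields. The group name is an assumption; the numbers are the demo's:

```python
# Parameters for autoscaling_client.update_auto_scaling_group(...):
# keep the floor at two instances but let the scaling policies grow the
# group up to eight. MinSize/MaxSize bound everything the policies do.
capacity_limits = {
    "AutoScalingGroupName": "pizza-time-auto-scaling-group",  # assumed name
    "MinSize": 2,
    "MaxSize": 8,
    "DesiredCapacity": 2,
}
```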
So I can click on Save, and now our Scaling policies are configured. But remember that we have a peak traffic time, so we need to configure a few scheduled actions. To do that, with the Auto Scaling Group selected, we click on the Scheduled Actions tab and then on Create Scheduled Action.
That will be our schedule, and we'll say that the minimum capacity will be six, the maximum capacity will be eight, and the desired capacity will be six. And that will happen every day.
Now we need to specify a start date and an end time. I will set this rule to start tomorrow, and it will run every day at 11pm UTC. And it will end, let's say, in October. In October we can reevaluate this rule, or we can change it before then, but for now I will say that this rule is valid until October.
So we want our instances scaling up at 11pm UTC and scaling down at 6am UTC. I can create the schedule, and now we have a new schedule. Every day these scheduled actions will run, increasing the number of instances and then decreasing the number of instances at the specified times.
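The pair of recurring actions can be sketched as parameters for boto3's Auto Scaling `put_scheduled_update_group_action` operation, where the recurrence is a cron expression in UTC. Group and action names are assumptions; the capacities and times are the demo's:

```python
# Parameters for autoscaling_client.put_scheduled_update_group_action(...).
# Recurrence is cron syntax evaluated in UTC.
scale_up_schedule = {
    "AutoScalingGroupName": "pizza-time-auto-scaling-group",  # assumed name
    "ScheduledActionName": "nightly-scale-up",                # assumed name
    "Recurrence": "0 23 * * *",  # every day at 11pm UTC
    "MinSize": 6,
    "MaxSize": 8,
    "DesiredCapacity": 6,
    # An EndTime (e.g. a date in October, as in the demo) would stop the
    # recurrence after that date.
}
# The companion action drops the floor back to two every morning.
scale_down_schedule = dict(scale_up_schedule,
                           ScheduledActionName="morning-scale-down",
                           Recurrence="0 6 * * *",  # every day at 6am UTC
                           MinSize=2,
                           DesiredCapacity=2)
```

Note that a scheduled action sets the group's min/max/desired values; the regular scaling policies keep operating within whatever bounds are in force at the time.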
These scheduled actions are also great for marketing campaigns. Imagine that we reduce the price of our pizzas: for the next weekend, all pizzas will be half price, and we expect a lot of traffic on our website. To anticipate that traffic, we can create a scheduled action that will increase the number of instances during that weekend, and then decrease the number of instances again.
That's very useful, and it's better than relying on a generic scaling policy because, even with scaling policies configured, it takes some time to launch all the instances we need. If we already know the number of instances we need, we can specify that right away using scheduled actions.
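A one-off campaign like that half-price weekend can be sketched with the same `put_scheduled_update_group_action` parameters, but using a one-time `StartTime` instead of a cron `Recurrence`. Everything here is illustrative: the dates, names, and capacities are invented for the example.

```python
from datetime import datetime, timezone

# One-time scheduled action: with a StartTime and no Recurrence, Auto
# Scaling applies the new sizes once, at that time. Dates are invented.
promo_scale_up = {
    "AutoScalingGroupName": "pizza-time-auto-scaling-group",  # assumed name
    "ScheduledActionName": "half-price-weekend-start",        # assumed name
    "StartTime": datetime(2016, 7, 1, 18, 0, tzinfo=timezone.utc),
    "MinSize": 6,
    "MaxSize": 8,
    "DesiredCapacity": 6,
}
# A second one-time action returns the group to normal after the weekend.
promo_scale_down = dict(promo_scale_up,
                        ScheduledActionName="half-price-weekend-end",
                        StartTime=datetime(2016, 7, 4, 6, 0, tzinfo=timezone.utc),
                        MinSize=2,
                        DesiredCapacity=2)
```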
Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company, developing pieces of automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.