
How we can increase the Availability and Scalability of Pizza Time


Welcome to Pizza Time!
Deploying the First Iteration of Our Business Application
Adding Analysis, Monitoring and Cost Management
Duration: 1h 44m


In this group of lectures we will introduce you to the Pizza Time business and system requirements. Pizza Time requires a simple ordering solution that can be implemented quickly and with minimal cost, so we will do a hands-on deployment of version 1.0 of our solution. To achieve that, we are going to use a single region and deploy an application using AWS Elastic Beanstalk. We will implement a simple database layer capable of delivering on these initial requirements for Pizza Time.

It turns out that the Pizza Time business is a success and the business now wants to target a global audience. We discuss and design how we can increase the availability of our initial Pizza Time application, then begin to deploy V2 of Pizza Time - a highly available, fault-tolerant business application. 



Hi, and welcome to this lecture.

In this lecture we're going to implement high availability in our Pizza Time application. We're going to have a very quick planning overview using slides, then we'll go to the AWS console and have a hands-on demonstration of how to implement high availability in our Pizza Time application.

So, this is how our Pizza Time application is set up. Right now we have a single instance running our application and a single database instance storing our data. That's really not scalable, it's not fault-tolerant, it's not highly available, and we need to fix that.

Imagine that we have been using this architecture for a while and we have experienced problems with availability, as well as a few bottlenecks in our application. Also, our application is not really elastic: our bills are always roughly the same, even in a month when nobody is using our delivery system.

And we don't want that. We want an application that is elastic enough to handle our traffic, that is fault-tolerant and highly available, but that is also cost-effective for us. So we need to change a few things. We want an application that can handle outages in a single availability zone. That's very important for us: we don't want to see our application going down if the availability zone hosting it goes down.

We are prepared to accept the fact that our application will go down if the entire region fails. That's okay for us at this point. We don't want to create a cross-region deployment yet; we're going to see that later on.

So at this point we want our application to only go down when the entire region fails. By that we mean that our architecture will handle EC2 instance failures and also RDS instance failures, and that should happen automatically; we don't want to have to go to the console and fix problems by hand every time they appear. So, after putting some thought into the issue, we decided to go with this approach.

We're going to move away from a single-instance application, and we are going to use auto-scaling to help us scale the number of instances up and down.

And we're going to use a multi-AZ database. But to use auto-scaling we need to configure an elastic load balancer to handle how traffic will be forwarded to our instances. So that's our deployment.

We only have one consideration to make in regards to this implementation: we want the minimum downtime possible. We don't want to stop the application for a long period; ideally, we would have zero downtime in this case.

That said, our company is prepared to accept a few hours of downtime if necessary. So what we will do is implement high availability using Elastic Beanstalk, while still using RDS.

So let's first handle our RDS database. I'll go to the RDS console, and if I select the instance that we have, under Instance Actions I can click on “Modify”. The only thing we need to change is to specify that it should be a multi-AZ deployment, so we change it to Yes. If we continue and let AWS handle this, AWS will apply the change in the next maintenance window, but we want to apply it immediately, so we need to select this checkbox, click on Continue and modify the instance. Now AWS will handle the multi-AZ deployment for us; we don't need to do anything else.

You can see that the status has changed to modifying, but if we go to our application we can see that it is still available and we can still make database calls. Our RDS database is still working, and we can still read and write data in it, because we are not unplugging the instance or changing its size. What is happening is that AWS is creating a standby replica in another availability zone and syncing the two databases: one will be the primary database, the other the standby. In the meantime we can still read and write data, because the primary database is still working and still receiving connections. So we don't really need to do much on RDS, and there is no downtime in this part of our implementation.
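The same Multi-AZ change can also be scripted with the AWS SDK. Here is a hedged sketch using boto3 parameters; the instance identifier is hypothetical, and the actual API call is shown in a comment since it needs live credentials:

```python
# Sketch of the same Multi-AZ change via boto3 (the AWS SDK for Python).
# The instance identifier below is hypothetical -- substitute your own.
modify_params = {
    "DBInstanceIdentifier": "pizza-time-db",  # hypothetical name
    "MultiAZ": True,                # provision a standby in another AZ
    "ApplyImmediately": True,       # don't wait for the maintenance window
}

# With credentials configured, the actual call would be:
#   import boto3
#   boto3.client("rds").modify_db_instance(**modify_params)
```

With `ApplyImmediately` left as `False` (the default), RDS would defer the change to the next maintenance window, matching the console behavior described above.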

Let's now go to the Elastic Beanstalk console. In here we have our Pizza Time application, with our Pizza Time environment inside it. We could click on the environment, change its configuration, and move away from a single-instance environment type by selecting load balancing and auto-scaling. That would rebuild our environment using elastic load balancing and auto-scaling, but it would also cause downtime, and we want to avoid downtime during this implementation. So we have a few options here. We could clone this environment, but that would create another single-instance environment that we would then have to change to a load-balancing and auto-scaling environment, which would take more time.

Instead, I want to do the following: I will simply create a new environment, selecting Web Server environment as I did last time. Same thing here: Python, version 2.7. And this time I want elastic load balancing and auto-scaling in my application.

So I will select Next, and the thing here is that we can use the existing application version. When we upload an application version to Elastic Beanstalk, we are not uploading it to a single environment inside Elastic Beanstalk; we are uploading it to the whole application, so that version is available across all the environments we have.

That's very handy in situations like this, because by having the application version in here, and since we created the configuration documents inside our application version, we don't really need to do much to deploy our application in another environment.

So I will use the existing application version. And now we need to select a few things in here.

We need to specify a deployment policy. I want to use “Rolling” because I want to control how many instances an update will roll through at a time in this environment. So I will select “Rolling”.

And in here we need to select a healthy threshold. We don't need to worry too much about this, but what it means is that we can specify when Elastic Beanstalk will move on to the next instance. The healthy threshold that we want is Ok, and we don't want to ignore health checks: we want Elastic Beanstalk to deploy our application version to all the instances in our environment, moving on to the next instance only once it receives an Ok status from the current one.

And I don't want to use a percentage here; what I want is to have Elastic Beanstalk deploy this application to one instance at a time. This is not a really big environment, and we are only going to use a few instances to run it from now on, so one instance at a time will be enough for us.
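The rolling-deployment choices above correspond to Elastic Beanstalk option settings in the `aws:elasticbeanstalk:command` namespace. A sketch of the equivalent settings, matching the console choices (Rolling policy, fixed batch size of one, health checks not ignored):

```python
# Elastic Beanstalk option settings equivalent to the console choices:
# rolling deployments, one instance at a time, respecting health checks.
rolling_policy = [
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "DeploymentPolicy", "Value": "Rolling"},
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "BatchSizeType", "Value": "Fixed"},   # count, not percentage
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "BatchSize", "Value": "1"},           # one instance at a time
    {"Namespace": "aws:elasticbeanstalk:command",
     "OptionName": "IgnoreHealthCheck", "Value": "false"},
]
```

These settings could be passed as `OptionSettings` when creating or updating an environment, or placed in an `.ebextensions` configuration file inside the application version.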

I'll click on Next, and we need to specify an environment name. This time I will say that this is the highly available Pizza Time environment. And I'll click on Next.

We already have an RDS database, and we don't care where AWS will launch this application, so we will not create this environment inside a VPC; we just want to launch it in the default VPC. So we can leave both of these checkboxes unmarked.

And in here we select the instance type that we want. I will stick with “m3.medium” and with our “pizza-time” key pair, and we don't need to worry about these other settings.

Health reporting we already talked about in the first video of this course, and the root volume type as well, so I'll click on Next.

We don't want to add any environment tags and we will continue using the same instance profile and the same service role.

And now we can review our deployment, but I will simply click on Launch. That will take some time to deploy, so I will stop the recording and get back once it's done.
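The console steps above map roughly onto a single Elastic Beanstalk API call. A hedged boto3 sketch follows; the application name, environment name, version label, and solution stack string are illustrative assumptions, not the exact values used in the demo:

```python
# Sketch of creating the new load-balanced environment via boto3.
# All names and the solution stack string are illustrative assumptions.
create_params = {
    "ApplicationName": "pizza-time",                    # assumed app name
    "EnvironmentName": "pizza-time-ha",                 # assumed env name
    "SolutionStackName": "64bit Amazon Linux running Python 2.7",  # illustrative
    "VersionLabel": "v1",        # reuse the existing application version
    "OptionSettings": [
        # switch from SingleInstance to a load-balanced, auto-scaled type
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:autoscaling:launchconfiguration",
         "OptionName": "InstanceType", "Value": "m3.medium"},
        {"Namespace": "aws:autoscaling:launchconfiguration",
         "OptionName": "EC2KeyName", "Value": "pizza-time"},
    ],
}
# boto3.client("elasticbeanstalk").create_environment(**create_params)
```

Because the application version already contains the configuration documents, reusing it via `VersionLabel` is what lets the new environment come up without re-uploading anything.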

Okay, so our new environment has been deployed, and we can check the results at this URL. We can log in to the application, see the orders, see the details for the orders, and even create a new order, just as we would in the main application.

And why do I say “main application”? Because we are using the Elastic Beanstalk URL, not our domain URL. Right now Route 53 is not sending requests to this environment; it's sending them to the URL of the other environment, the first one we created, which is running our application on a single instance.

To help us achieve what we want, Elastic Beanstalk has a very cool feature: we can swap the URLs between environments. So we can click in here and select the other environment that we have, and this will exchange the environment URLs between the two environments. So we click on Swap, and it will take some time until Elastic Beanstalk changes the DNS entries for these two environments.
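The swap is also a single API operation, `SwapEnvironmentCNAMEs`. A minimal boto3 sketch, with both environment names assumed:

```python
# Sketch of the CNAME swap via boto3; both environment names are assumed.
swap_params = {
    "SourceEnvironmentName": "pizza-time",          # old single-instance env
    "DestinationEnvironmentName": "pizza-time-ha",  # new load-balanced env
}
# boto3.client("elasticbeanstalk").swap_environment_cnames(**swap_params)
```

After the call, each environment answers on the other's `*.elasticbeanstalk.com` CNAME, which is exactly the behavior demonstrated in the console.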

Wow, that was fast! Now we can check in here, if we go to this URL, that it is working and we are actually accessing the highly available Pizza Time environment, not the old one. So we can simply go in here and terminate the old environment. But in a production situation, you need to make sure you configure the “time-to-live” (TTL) entry in your Route 53 domain to the minimum possible, just to make sure that people won't get forwarded to the old environment instead of the new one.
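Lowering the TTL before a cutover like this is itself a Route 53 record change. A hedged sketch of the change-batch parameters; the domain, record value, and hosted zone are all hypothetical:

```python
# Sketch of lowering a record's TTL in Route 53 ahead of a DNS cutover.
# The domain name and record value below are hypothetical examples.
change_batch = {
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "pizzatime.example.com.",   # hypothetical domain
            "Type": "CNAME",
            "TTL": 60,                          # short TTL so clients re-resolve quickly
            "ResourceRecords": [
                {"Value": "pizza.elasticbeanstalk.com"}  # assumed EB CNAME
            ],
        },
    }]
}
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="<your zone id>", ChangeBatch=change_batch)
```

Once the cutover has settled, the TTL can be raised again with the same `UPSERT` pattern to reduce resolver traffic.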

So, for our purposes I will just terminate this environment. And while it's terminating, we can go to the main domain, because at this point we are forwarding all requests to the highly available Pizza Time environment, which is using the old URL that we specified.

Remember that we are using “pizza” as our environment URL. That's what's happening here: we are sending requests to the same URL, but this URL now points to the new environment.
That might be a bit confusing, but the truth is you don't really need to master Elastic Beanstalk in order to succeed in this exam. I'm showing you a bit more than you need to know for the exam, but, come on, everything we can do with Elastic Beanstalk is super cool.

We didn't have to configure an elastic load balancer or an auto-scaling rule in order to launch this new environment; Elastic Beanstalk did everything for us.

And we can customize the configuration a little if we go here to “Configuration”. In here we can select whether we want at least one instance running at a time, or say we want at least two instances. We can also select the availability zones that we want. I'll click on Apply.

About the Author

Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. He also plays a key role in the company developing automation using tools such as Ansible, Chef, Packer, Jenkins and Docker.