Welcome to Pizza Time!
Deploying the First Iteration of Our Business Application
Adding Analysis, Monitoring and Cost Management
Pizza Time is Going Global!
In this group of lectures we will introduce you to the Pizza Time business and system requirements. Pizza Time requires a simple ordering solution that can be implemented quickly and with minimal cost, so we will do a hands-on deployment of version 1.0 of our solution. To achieve that, we are going to use a single region and deploy an application using AWS Elastic Beanstalk. We will implement a simple database layer that will be capable of delivering on these initial requirements for Pizza Time.
It turns out that the Pizza Time business is a success and the business now wants to target a global audience. We discuss and design how we can increase the availability of our initial Pizza Time application, then begin to deploy V2 of Pizza Time - a highly available, fault-tolerant business application.
Hi and welcome to this lecture.
In this lecture we are going to discuss the new business scenario for Pizza Time. Then we are going to do a quick analysis on our current infrastructure to identify a few issues, and then we are going to discuss the next steps for this course.
So first, Pizza Time is a success.
People enjoy the application, and they enjoy the pizza even more. Pizza Time started by opening a new franchise in America, but they are experiencing so much success that they have decided to open franchises all over the continent. So they are opening franchises in Canada, Mexico, Venezuela and Brazil.
And what is happening is that they are starting to experience some problems with the application. The main issue is that a single deployment serves everybody, and although we configured an Auto Scaling group for our application to handle traffic, that's really not enough for an application this big. So we need to change this infrastructure.
Currently, the Pizza Time domain sends all requests to a single load balancer, behind which we have a few instances running in an Auto Scaling group. Although the group can scale up and down, that's still not enough: the "m3.medium" instances are undersized for an application this large, and the same applies to the database, which is also an "m3.medium" instance on RDS.
So that is an issue, because we don't have enough power to serve all the requests. Besides that, we have security issues: our database is open to the world, which is really not a best practice. And we don't have much control over the Elastic Beanstalk application. Although we can configure how we want to scale and how we want to deploy new things to the instances, we don't have enough performance control over our infrastructure. So we need to change that.
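To illustrate the security issue, here is a minimal sketch of how the database could be locked down later. It only builds the parameters for boto3's `authorize_security_group_ingress` call rather than calling AWS, and the security group IDs (`sg-database`, `sg-application`) are hypothetical placeholders:

```python
# Sketch, with hypothetical security group IDs: instead of leaving the
# database open to the world (0.0.0.0/0), the RDS security group should
# only accept traffic from the application tier's security group.
db_ingress = {
    "GroupId": "sg-database",  # hypothetical DB security group
    "IpPermissions": [{
        "IpProtocol": "tcp",
        "FromPort": 3306,      # MySQL port; adjust for your engine
        "ToPort": 3306,
        # Reference the app security group instead of an open CIDR range:
        "UserIdGroupPairs": [{"GroupId": "sg-application"}],
    }],
}
# A real run would then be:
#   boto3.client("ec2").authorize_security_group_ingress(**db_ingress)
```

The key design point is that the rule references another security group, not an IP range, so only the application instances can reach the database port.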
We want the power to customize and tune our application. So this is the design we are proposing: we are still going to use a single domain for all the requests, but we are going to host our application in two regions, Oregon and São Paulo.
We are going to have more or less the same application in both regions. We will still have a set of EC2 instances running in an Auto Scaling group, and an Elastic Load Balancer in each region. But the main database will be hosted in the Oregon region. We are going to use a Multi-AZ configuration, so in case something fails in the primary Availability Zone in Oregon, we will have a standby to fail over to. And we are also going to configure a cross-region read replica in the São Paulo region. So all the read requests coming from customers served by the application in the São Paulo region are going to be forwarded to the read replica and not to the main database.
But all the write requests are still going to be sent to the main database. So we can offload some of the read traffic from our database, although we will still have some latency when writing new things to it. That will be acceptable for us.
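The read/write split described above can be sketched as a small routing helper. This is a minimal illustration, not code from the course; the endpoint hostnames are hypothetical placeholders for the real values shown in the RDS console:

```python
# Hypothetical endpoints; the real hostnames come from the RDS console.
PRIMARY_ENDPOINT = "pizzatime-db.us-west-2.rds.amazonaws.com"       # Oregon (writes)
REPLICA_ENDPOINT = "pizzatime-replica.sa-east-1.rds.amazonaws.com"  # São Paulo (reads)

def pick_endpoint(operation: str, region: str) -> str:
    """Route reads issued from the São Paulo region to the local read
    replica; all writes (and reads from other regions) go to the
    primary database in Oregon."""
    if operation == "read" and region == "sa-east-1":
        return REPLICA_ENDPOINT
    return PRIMARY_ENDPOINT
```

This captures the trade-off in the lecture: São Paulo reads stay local and fast, while writes always cross to Oregon and pay the extra latency.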
So what we can do is host all the application's static front-end files in an S3 bucket and create a web distribution on CloudFront to serve our app. Our page will be served through CloudFront, and only the API calls will be forwarded to the regions. And we will configure Route 53 with latency-based record sets to forward the requests to our APIs.
So in case a customer is closer to the Oregon region, Route 53 will send that user's API requests to the Oregon region, and requests from users near the São Paulo region will go to the São Paulo region. By doing this we can reduce the latency of our API calls and also distribute the traffic between the two regions. So these are the next steps in our course.
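The latency-based record sets described above can be sketched as the change batch that boto3's Route 53 `change_resource_record_sets` call expects. The domain, record values and region pairing mirror the lecture's design, but the ELB hostnames are hypothetical placeholders:

```python
# Hypothetical ELB hostnames; real values come from each region's load balancer.
def latency_record(region: str, elb_dns: str) -> dict:
    """One latency-based record set for the API domain in the given region,
    in the shape boto3's route53 change_resource_record_sets expects."""
    return {
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "api.pizzatime.com.",
            "Type": "CNAME",
            "SetIdentifier": region,  # must be unique per latency record
            "Region": region,         # region Route 53 measures latency against
            "TTL": 60,
            "ResourceRecords": [{"Value": elb_dns}],
        },
    }

change_batch = {
    "Comment": "Latency-based routing for the Pizza Time API",
    "Changes": [
        latency_record("us-west-2", "oregon-elb.example.amazonaws.com"),
        latency_record("sa-east-1", "saopaulo-elb.example.amazonaws.com"),
    ],
}
```

Both record sets share the same name; Route 53 answers each query with whichever record's region has the lowest measured latency to the user.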
In the next lecture we are going to start the Deployment and Provisioning section. In this section we will start deploying the application, using the architecture I just showed you as a reference. While doing the deployment, we are going to talk about networking, we are going to create a new VPC for our application, and we are going to talk about data management and security.
So I will kind of divide the Deployment and Provisioning section using the other main domains. So we are going to start deploying things. And I will try to create shorter videos with a single topic per video so you can go back to a particular video if you have some doubts on that part.
I will continue deploying and provisioning things while also talking about data management (we are going to create some backups for our application), security and networking. So don't worry if, after the Deployment and Provisioning section, you don't feel that you have mastered this first topic, because we are going to divide all this content among the next lectures.
About the Author
Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. Besides that, he plays a key role in the company, developing automation with tools such as Ansible, Chef, Packer, Jenkins and Docker.