Deployment and Provisioning
In this group of lectures we run a hands-on deployment of the next iteration of the Pizza Time solution. The Pizza Time business has been a success: it needs to support more customers and wants to expand into a global market.
We define our new solution, then walk through a hands-on deployment that extends our scalability, availability, and fault tolerance.
Hi, and welcome to this lecture.
In this lecture we are going to do the S3 deployment, where we deploy our Angular app in an S3 bucket and configure it as an S3 website. Then we will create a CloudFront distribution and we will also configure Route 53.
So let's go to the AWS console and do the S3 deployment. Here at the AWS console, let's go to the S3 console. I have already created an S3 bucket to use in this case, called pizza.cloud.rocks. You need to know that whenever you want to create an S3 website and use a domain name in Route 53, the bucket name must match the record name in your hosted zone. Since I want to use the root domain, I must name this S3 bucket pizza.cloud.rocks, the same as my domain. Otherwise, I won't be able to create an alias pointing to this address. So just keep that in mind. This bucket is also empty, but before we start adding things to it, we need to configure it as a static website.
So we go to Properties, select Static Website Hosting, and enable website hosting. In here we need to define two things: an index document and an error document. In our case, it will be index.html for both, because we will handle errors in our Angular app. We don't have an error page, but every time someone looks for a page that doesn't exist, AWS will send them to the index page instead. That will work for us.
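For readers following along with the AWS CLI instead of the console, the same configuration can be applied with the `aws s3 website` command. This is a sketch, assuming the bucket name from this lecture and that your CLI credentials have access to it:

```shell
# Enable static website hosting on the bucket, using index.html for
# both the index and the error document, as in the console steps above.
aws s3 website s3://pizza.cloud.rocks/ \
    --index-document index.html \
    --error-document index.html
```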
And we can also add redirection rules. This is more advanced stuff and we don't need that for the exam, so I won't talk about it.
We click on Save and now we have an S3 website working. But we don't have anything in our bucket, and we also haven't configured permissions on this particular bucket, so we receive a 403 error here because we don't have access to the bucket's contents.
If we take a look at the permissions, it says that only I, the owner of this account, have permissions on this bucket. We could add an ACL here to grant more permissions, and we could also add a bucket policy, but since we will have an entire security section in this course, I will leave the permissions discussion to that section. So I'll keep things as they are right now, and let's upload some files to our bucket.
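For reference only, since the security section covers permissions properly: a minimal public-read bucket policy could be attached with `aws s3api put-bucket-policy`. This is a sketch assuming the bucket name used in this lecture:

```shell
# Attach a bucket policy allowing anonymous read of all objects.
# (Sketch only -- bucket permissions are covered in depth in the
# security section of this course.)
aws s3api put-bucket-policy --bucket pizza.cloud.rocks --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicRead",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::pizza.cloud.rocks/*"
  }]
}'
```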
You just need to pay attention to one file, called main.js. You can find it under the frontend folder: go under static, select js, and there is the file, main.js. The only thing that you need to mind in here is the API URL. We are going to change our API URL.
We are going to move away from the Elastic Beanstalk application and use an application manually configured with Auto Scaling groups and an Elastic Load Balancer, as we set up in the last lecture. But here I will use a Route 53 entry, because Route 53 has some really cool features. One of them is latency-based routing, which will send you to the nearest endpoint. We will not configure Route 53 entirely; we'll leave that configuration to the last part of this course, but I would like to create this entry already so we won't have to change it in the future.
So in short, if you are deploying this app and following the examples of this course, you need to change this API URL, and that API URL must point either to an Elastic Load Balancer or to an instance. Either way, it has to point to our Pizza Time API, our Pizza Time application.
That being said, I will open up my terminal and go to the frontend folder, and we can see the files in here. For me the most useful tool of the AWS CLI is s3, because you can type aws s3 sync. This command will sync the files between a local folder and an S3 bucket, an S3 bucket and a local folder, or two S3 buckets.
So in this case, I want to sync all the files in this folder, so I just put a dot here. And I need to specify the bucket that we want to sync with this folder, which is pizza.cloud.rocks.
And since we haven't created a bucket policy for our S3 bucket, we also need to set an ACL on the objects that we are going to upload, otherwise they won't be publicly accessible. We can do that with this same command: we type --acl and specify public-read. Hit Enter, and the AWS CLI will identify which files in the folder are not yet in the S3 bucket and upload them.
If we repeat the command, the AWS CLI won't upload anything. But if we make a small change in this file, for example adding a comment, hit Save, and sync again, the AWS CLI will only sync the main.js file, because it can identify the changes that we've made.
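Putting the pieces together, the full command from this lecture looks like the sketch below (run from the frontend folder, assuming the bucket name used here and configured AWS credentials):

```shell
# Upload everything in the current folder to the bucket, marking each
# object as publicly readable since no bucket policy is in place yet.
aws s3 sync . s3://pizza.cloud.rocks --acl public-read

# Re-running the same command transfers only files that changed locally;
# unchanged files are skipped.
aws s3 sync . s3://pizza.cloud.rocks --acl public-read
```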
So back to S3, if we refresh the page we will see that we have some files in this bucket, and if we go to the static website page we will be able to see our application.
But since we haven't configured the API URL, we won't be able to make API calls, so we start receiving errors when we try to log in, read the orders, and so on. So let's continue configuring our Angular app.
Now we need to create a CloudFront distribution for this S3 bucket, and in the final minutes of this lecture we are going to configure Route 53 to solve the problem with our app. So I'll keep this window open, and let's go to the CloudFront console.
Here on CloudFront, we need to create a new distribution, and there are two types of distribution: a web distribution and an RTMP distribution, which is for streaming video. But you need Adobe Flash Media Server in order to use that option, so just keep that in mind.
In our case, we just want a simple web distribution, so I'll click on Get Started, and we need to define an origin domain name.
In here we could put external origins. For example, if we had an application running on example.com, we could put that here and AWS would take it as our origin. But since we are using an S3 bucket, we just specify the bucket that we want to use. We can also specify a path.
So for example, if the application lived inside the static folder rather than the root of the S3 bucket, we could specify /static here and the application would still work as expected.
We can restrict the bucket access. That will create an identity for our CloudFront distribution and change our bucket policy so that it only allows read access to connections coming from this CloudFront distribution. That's very useful when you want to add another layer of security to your S3 buckets, and when you want to ensure that people access your files through the CloudFront distribution.
Origin Custom Headers relate more to developers, so I won't really talk about them. In fact, most of these other behavior settings are aimed at developers, so I'll cover them quickly.
In here you can say whether you want your HTTP connections redirected to HTTPS, but since we are not using HTTPS we will use the first option.
In here you can specify the HTTP methods that you want CloudFront to cache, you can forward headers, and you can change the object caching behavior.
By default, AWS has default Time To Live (TTL) values for objects, but you could specify different values here. You could change the default TTL as well as the minimum and the maximum, and that would change the behavior of your distribution. For example, if you define a default TTL smaller than the maximum, CloudFront will go back to your origin more often to fetch objects and cache them. We will stick with the origin cache headers option.
Using origin cache headers means that you specify the TTL in the cache headers of your objects; otherwise AWS will assume the default TTL.
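Since we are relying on origin cache headers, the TTL comes from the Cache-Control metadata on the objects themselves, and that metadata can be set at upload time. A sketch with the sync command from earlier (one hour is an arbitrary value chosen for illustration):

```shell
# Upload with a Cache-Control header so CloudFront caches each object
# for up to one hour before going back to the S3 origin.
aws s3 sync . s3://pizza.cloud.rocks \
    --acl public-read \
    --cache-control "max-age=3600"
```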
All the other options are aimed more at developers, so I will quickly go over them.
The Price Class. Here you specify the edge locations where you want CloudFront to replicate your data. In our case, I use all edge locations, which gives you the best performance but also increases the price a little bit. That will ensure that our app is available all over the world and that people can access the Angular app very fast. Although we won't have instances running all over the world, when people access our page they will be able to load it very quickly, so I use this option.
And we need to specify in here an alternate domain name. We will be using pizza.cloud.rocks for our distribution, so we need to specify that in here.
For the SSL certificate we will stick with the default CloudFront certificate. But if we wanted to use a custom certificate, we would have to either import one using the IAM service or use AWS Certificate Manager, which can create a certificate for our domain free of charge, and we could easily select it here. That's not our case.
We can specify a default root object, and I do want that: the default root object will be our index.html page. We can also configure logging for our distribution. We could enable it here and specify a bucket to store the logs, and every time someone accessed our distribution, CloudFront would store a new entry in our logs bucket. I don't want that right now.
You can also log cookies. And I want to enable our distribution, so I leave it as enabled.
Click on Create Distribution. That will take a fair amount of time, because AWS now has to create the distribution and deploy it to edge locations all over the world, so I will stop the recording and come back once it's done.
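For reference, a basic distribution like the one configured above can also be created from the CLI. This is a simplified sketch; the console exposes many more options than this shorthand form does:

```shell
# Create a web distribution with the S3 bucket as origin and index.html
# as the default root object. Deployment takes a while; the Status field
# stays "InProgress" until the distribution is ready at the edges.
aws cloudfront create-distribution \
    --origin-domain-name pizza.cloud.rocks.s3.amazonaws.com \
    --default-root-object index.html
```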
So our CloudFront distribution was finally deployed. Let's now configure Route 53. So we go to the Route 53 console, Hosted Zones.
We select our hosted zone, and the first thing that I want to do here is create a new record set for our API. So we click Create Record Set and call it api. I want to configure it as an alias, and that alias will point to the Elastic Load Balancer that we created in the Oregon region. So we select pizza-time-elb, which lives in the Oregon region. We will change this configuration later in this course, but for now that's enough for us, so let's just click on Create.
Then we also want to change our main URL: we want pizza.cloud.rocks to point to our CloudFront distribution instead. So we select the distribution, which will still be an alias, and click on Save Record Set. That will take some time to replicate, but in the meantime we can delete the Elastic Beanstalk application. By the way, this is not the best approach for migrating a DNS entry from one target to another. Here we are simply changing our main URL to point to a CloudFront distribution; there are better ways to do that, but I will discuss them in the last section of this course.
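The same alias record can be created with `aws route53 change-resource-record-sets`. This is a sketch: the hosted zone ID and the distribution domain are placeholders you would look up in your own account, while Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront aliases:

```shell
# Point pizza.cloud.rocks at the CloudFront distribution via an alias
# A record. Replace YOUR_HOSTED_ZONE_ID and the dXXXX... domain with
# the values from your own account.
aws route53 change-resource-record-sets \
    --hosted-zone-id YOUR_HOSTED_ZONE_ID \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "pizza.cloud.rocks",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "dXXXXXXXXXXXX.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```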
So let's go now to the Elastic Beanstalk console, and what I will do is simply delete the whole application, so we won't worry about our Elastic Beanstalk application anymore. We now have an application deployed manually using EC2, S3, and RDS. So we can simply click in here and delete the application. That will first delete all the environments inside the application, in this case only the High Available Pizza Time environment, and then delete the application itself.
That will take a lot of time, but we don't need to wait for that.
Eric Magalhães has a strong background as a Systems Engineer for both Windows and Linux systems and currently works as a DevOps Consultant for Embratel. Lazy by nature, he is passionate about automation and anything that can make his job painless, so his interest in topics like coding, configuration management, containers, CI/CD, and cloud computing went from a hobby to an obsession. He holds multiple AWS certifications and, as a DevOps Consultant, helps clients understand and implement the DevOps culture in their environments. He also plays a key role in the company developing automation using tools such as Ansible, Chef, Packer, Jenkins, and Docker.