
Scalability

Overview

Difficulty: Intermediate
Duration: 1h 14m
Students: 112

Description

This course is focused on the portion of the Azure 70-534 certification exam that covers designing an advanced application. You will learn how to create compute-intensive and long-running applications, select the appropriate storage option, and integrate Azure services in a solution.

Transcript

Welcome back. In this lesson, we'll talk about what it means to have a scalable system. And, we'll talk about why you need to scale.

So, what does scalability mean? Scalability is the ability of a system to grow as needed to maintain its performance under additional workload. Well, what does that really mean? Imagine it's a beautiful day and you're spending it in the park with hundreds of like-minded people wanting to enjoy nature. An ice cream truck pulls up and starts serving ice cream. However, everyone rushes to get some, and hundreds of people are now queued up and waiting. This is not scalable at all. One truck to serve all of these people. People towards the back of the line may just leave, and by the time the line is halfway through, most of the good ice cream is already gone. So, you'll have a lot of disappointed people. Now imagine that for every 10 people, a new truck pulls up. With just 10 people at each truck, it's unlikely that they're going to run out of ice cream, and no one is going to wait too long to get theirs. So, this is a form of scaling. We're adding more trucks to support the demand. Of course, this is a silly example. It's just not practical. However, I find that silly examples stick in my mind and help me to remember things. So, hopefully it's the same way for you.

Now, a real-world example would be something like Netflix. They serve so much data all over the world that when they release something new, they may need to add new servers behind the load balancer to handle the increase in traffic. And since they're leveraging the AWS cloud, they probably have it scale automatically via something like Auto Scaling groups. There are different ways to scale. You can scale up or you can scale out, also called vertical and horizontal scaling.

So, what does it mean to scale up? We'll go back to our earlier design, where we had just one web server and a database server, and use that as an example. Let's say we're running a fairly small server: a single CPU with two gigabytes of RAM. If we're running a small site, then maybe that's fine. However, if our traffic starts to increase, then we could use a larger server to handle the increased load. This is a common thing to do, especially for systems that will grow slowly or are running legacy software that may not scale out easily.

If you're running some internal software, something like a time-tracking system with infrequent traffic, then scaling up may be the best choice. Scaling up is a viable option. However, there is a cap. You'll only be able to scale up to the largest server you can find. 40 CPUs and 160 gigs of RAM is a pretty large server, and a pretty costly one, too. So, if you're going to scale up, you'll need to consider that at some point you'll be running on the best hardware you can find and you won't be able to grow any more. What happens if you do hit that cap? Or what if you just want to ensure that you never do? That's where scaling out comes into play.

In our ice cream truck example, we were talking about scaling out. We add additional resources to handle the load. If you were running a high-traffic website, then maybe you'd need to scale by adding additional web or application servers to handle the load. This would allow the traffic to be distributed across more servers so it won't tax any one system. The ability to scale out doesn't just happen, though. It's something that needs to be supported by the underlying technology stack. You need to have a system that doesn't have any state stored on the web or application servers. If you have a system that allows for file uploads, then you need to make sure that they're saved to a central location so that all servers can access those files.
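To make that idea concrete, here is a minimal sketch of saving uploads to a shared location instead of the local disk, assuming the azure-storage-blob Python SDK. The connection string and the "user-uploads" container name are hypothetical placeholders, not something from the course.

    # Sketch: write an uploaded file to Azure Blob Storage rather than local disk,
    # so every web server in the pool can read it back later.
    # Assumes the azure-storage-blob package; names below are placeholders.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")

    def save_upload(filename: str, data: bytes) -> None:
        # One central container shared by all instances
        blob = service.get_blob_client(container="user-uploads", blob=filename)
        blob.upload_blob(data, overwrite=True)

    def read_upload(filename: str) -> bytes:
        blob = service.get_blob_client(container="user-uploads", blob=filename)
        return blob.download_blob().readall()

With storage handled this way, any instance behind the load balancer can serve a request for a file that a different instance originally accepted.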

The same goes for session state. If you're using local session state, then you need to switch to something more centralized. When thinking about scaling out, you need to know whether the application and tech stack will support it. And one way to answer the question "can we scale out?" is to ask what would happen if we terminated the server running our app. If the answer is something like "we'll lose the user-uploaded assets," then your developers need to address that. If the answer is more along the lines of "we'll need to deploy a new server with the latest version of the app," then maybe you're ready. You need to make sure that if all servers need access to something, it's centrally located. Creating highly scalable systems isn't impossible without the cloud. But for most of us, the cost to create scalable systems without the cloud is just too high.
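As a sketch of what "more centralized" session state can look like, here is one possible approach using a shared Redis cache via the redis-py package. The host name and the 30-minute expiry are assumptions for illustration only.

    # Sketch: keep session state in a shared Redis instance instead of the web
    # server's own memory, so any instance can serve any user's next request.
    # Assumes the redis-py package; host and TTL are hypothetical.
    import json
    import redis

    sessions = redis.Redis(host="shared-cache.example.internal", port=6379)

    def save_session(session_id: str, data: dict) -> None:
        # Expire sessions after 30 minutes of inactivity
        sessions.setex(f"session:{session_id}", 1800, json.dumps(data))

    def load_session(session_id: str) -> dict | None:
        raw = sessions.get(f"session:{session_id}")
        return json.loads(raw) if raw else None

Because nothing user-specific lives on the individual web server, terminating one instance loses nothing, which is exactly the test described above.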

Cloud platforms offer mechanisms to scale pretty easily without having to really think about it. If you need to go from 10 to 100 servers, cloud providers won't even blink. AWS, Azure, and Google Cloud all have auto scaling functionality. This allows you to scale out based on some metric, such as CPU load, and back down when the load dies down. So, if we look back at one of our previous designs, we can see that with some sort of auto scaling for the servers, we now have a system that will have pretty high uptime, and one that will handle the traffic as it comes to us. At least for the web servers. The database may or may not be able to keep up, depending on several factors that we won't go into in this course, but they include things like the size of the server and whether or not the database is replicated to additional instances, among others. So, scalability is a feature that allows us to create highly available systems, because when we need additional compute resources, we can just add them. And just as important, when we no longer need those resources, we can just remove them.
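The decision an autoscaler makes is simple to illustrate. The sketch below is plain Python, not a real cloud provider API; platforms like Azure autoscale or AWS Auto Scaling groups implement this logic for you, and the thresholds and instance limits here are hypothetical.

    # Sketch: the core scale-out / scale-in rule, expressed as a plain function.
    # Thresholds, minimums, and maximums are illustrative assumptions.
    def desired_instance_count(current: int, avg_cpu_percent: float,
                               min_instances: int = 2, max_instances: int = 100) -> int:
        if avg_cpu_percent > 75:   # under heavy load, add an instance
            return min(current + 1, max_instances)
        if avg_cpu_percent < 25:   # load has died down, remove an instance
            return max(current - 1, min_instances)
        return current             # within the comfortable band, do nothing

The important part is the symmetry: the same rule that adds capacity when the metric climbs also releases it when the metric falls, which is what keeps scaling out affordable.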

Alright, in our next lesson we'll be talking about different storage options. So, if you're ready to keep going then let's get started with the next lesson.

About the Author

Students: 36913
Courses: 29
Learning paths: 15

Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.