The Road to Serverless
Difficulty: Intermediate
Duration: 1h 59m
Students: 3753
Ratings: 4.8/5
Description

For years, web development has continued to evolve alongside programming languages, tooling, and frameworks. It started out with static websites before moving on to dynamic sites rendered on the server. Over time, as JavaScript frameworks gained functionality and popularity, there was a shift toward putting more of the logic into the front end and using the back end as a supporting API.

Throughout all the changes in web development over the years, the server has been a constant. Regardless of the languages, tools, and frameworks used, there's always a server running the code, and that hasn't changed. What has changed is that cloud providers now make it easy for software engineers to focus on writing their code without having to manage the underlying server.

In this course, you'll build a serverless web application using Python 3.6. You'll use Lambda, API Gateway, S3, DynamoDB, and Cognito to create a multi-user to-do list application based on Vue.js.

Note: The Apple M1 chip isn't compatible with this course currently. We recommend using a different device.

Learning Objectives

  • Outline the architecture of a serverless web application
  • Set up the AWS services required for the app
  • Create and deploy an API using Python 3.6
  • Explain the value of creating unit tests
  • Use a Cognito User Pool within your app

Intended Audience

  • Developers
  • DevOps Engineers
  • Site Reliability Engineers

Prerequisites

  • Familiarity with AWS
  • Development experience
  • Familiarity with the CLI


Transcript

Welcome back. In this lesson, let's talk about the road to serverless. According to AWS, there are five common serverless patterns, and we're going to focus on the web apps pattern. I love this slide because, to me, this really summarizes a lot of the arguments, debates, and philosophical opinions of serverless as a term.

The gist of it here is this person is angry because they have just discovered that serverless doesn't mean there aren't servers. We all know that there are servers running under the hood. It's a requirement for your code to run. You need it to run somewhere. It's a tragic marketing name and nothing more.

All it means is that we don't have to think about servers. It's an abstraction, so that we can focus on the code and not patching and managing servers. So here's kind of the road to serverless as I see it. We have our full frameworks on the left-hand side, things like Django and Rails that we've used for a very long time.

As we started getting into microservices, we started using a lot of lightweight frameworks. These were frameworks that allowed us to rapidly prototype without all of the additional overhead that the larger, opinionated frameworks carried. And then the final step on the road to serverless is functions as a service: the serverless offerings from the different cloud providers.

So, monolithic web application frameworks. We've used them before, we know how they work, they're familiar to us, and they have just about everything you need: authentication, authorization, routing, templates, ORM, etc. Let's look at the pros and cons. The pros: it's a familiar toolset. Once you've mastered these frameworks, you are very productive with them.

They're usually well documented, and that's a requirement: if you're going to use some sort of framework, you need to be able to find the documentation. Somebody else is going to come along after you to support this, and having documentation so they understand how the thing should really work is important.

Feature-rich. The monolithic framework has everything, so when it comes time to build your application, you only need the one tool. That's a generalization, a bit of an exaggeration, but you get the point. Rapid application development: because all of that functionality exists in one place, we can develop really quickly.

And opinionated frameworks work really well for large teams. So if you have five or five hundred people, as long as they are following the rules of that framework, they will all be productive, and the code will all look roughly the same. So anyone should be able to come along and start working on that project, because it's a standard.

So, some of the cons. Framework lock-in: if you're using something like Rails or Django and you have a very large application, it's not likely that when something else comes along, you're going to just port it over to that new thing. That's going to take time and money. Third-party integration may be lacking.

Sometimes you get locked into the way a framework does things, and integrating with third-party services becomes a bit clunky. That's a con, but it's not necessarily true of all frameworks. Scaling horizontally takes some effort. It's not a problem we don't know how to solve; horizontal scaling is a solved problem.

But it does take effort. It's not exceptionally difficult, but it's not easy either; it's in that middle area. The unit of deployment is often the entire app. So whenever you deploy to a new server, unless we're talking about something like a PHP application where you could send just one file, this usually means the entire app gets bundled up and shipped off.

So one of the first questions asked about new technologies, and we've probably all asked this question in the past, is: how do I make my existing app work with [insert buzzword here]? We've seen this with lift and shift, which is what we now call taking your existing infrastructure and moving it to the cloud without really becoming cloud-native.

And we do that because we want to gain some of the benefits of the cloud without having to re-engineer everything from day one, because, again, that's expensive. So the logical question becomes, "How do I make my existing app serverless?" And the answer might be, "I'll make a shim or a wrapper." There are shims out there; people have created these.

Here's an example, and it's by no means the only one. There are shims for Go and other languages that aren't native runtimes on various platforms. Here's one for AWS Lambda to run Go. The way it works is you upload a Go binary alongside a small entry point in a natively supported language, such as Node or Python, that kicks it off.
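
As a rough sketch, assuming a hypothetical Go binary named "main" bundled in the deployment package, the Python side of such a shim might be little more than this:

```python
import json
import subprocess

def handler(event, context):
    # Hand the Lambda event to the bundled Go binary over stdin and
    # read the JSON result back from stdout. "./main" is an assumed name.
    proc = subprocess.run(
        ["./main"],
        input=json.dumps(event).encode(),
        stdout=subprocess.PIPE,
        check=True,
    )
    return json.loads(proc.stdout.decode())
```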

Here's another one from kelseyhightower, for using Go with Google Cloud Functions. So... the point is, re-engineering something may take a lot of work, but sometimes it's easier to just shim something into these new technologies, and for a lot of use cases, that's valuable. Here's a wrapper for Python web services, or rather, for anything that's a WSGI-based Python application.

What this does is wrap your WSGI application to make it serverless, so you can run Flask, Django, and things like that (there's a simplified sketch of the idea just below). So what are the pros and cons of taking a monolithic framework and making it serverless with a shim or wrapper? Again, we get to use our existing frameworks that we've become really proficient with.
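
Before we get to those pros and cons, here's a heavily simplified sketch of what such a WSGI wrapper does under the hood. Real wrappers also handle query strings, request headers, and binary bodies; the app import is an assumption:

```python
import io

from my_flask_app import app  # assumption: your WSGI app lives here

def handler(event, context):
    # Translate the API Gateway event into a WSGI environ dict.
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "QUERY_STRING": "",
        "SERVER_NAME": "lambda",
        "SERVER_PORT": "443",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "https",
        "wsgi.input": io.BytesIO((event.get("body") or "").encode()),
        "wsgi.errors": io.StringIO(),
        "wsgi.multithread": False,
        "wsgi.multiprocess": False,
        "wsgi.run_once": False,
    }

    response = {}

    def start_response(status, headers, exc_info=None):
        response["statusCode"] = int(status.split()[0])
        response["headers"] = dict(headers)

    # Call the WSGI app and join the body chunks it yields.
    response["body"] = b"".join(app(environ, start_response)).decode()
    return response
```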

So if you have one of these already, this may be a low barrier to entry to getting it into the cloud serverlessly. Deployments may become easier. Depending on your deployment mechanism, this might be a little bit simpler because you're going to be able to just package everything up into a zip file and deploy it off to a Lambda function or cloud function, or some other serverless offering.

Now, there are two properties listed here that are inherited from serverless. There are more, but these are two common ones: no server management and lower cost. And lower cost comes in the form of not having your servers running all the time. The cons: this is not the intent of serverless. It can work, and does work on many occasions, but it's not native serverless, so it's something to consider.

And because it's not native serverless, it may rely on hacks to work. There might also be slower cold starts. You're going to experience cold starts regardless, and that's kind of a bummer, but you might experience slower ones if you have a giant monolith that has to initialize any time the container running your serverless code starts up.

The unit of deployment is the entire app. The unit of scale is the entire app. This is not much different from a monolithic framework in general, but it's worth noting: if you're going to run this in parallel because you have a lot of requests coming into a serverless environment, and a cold start for one serverless container is bad, then when you have 30 or 40 all starting up, users are going to feel that.

They're going to see what looks like degraded response time. So it's something to consider. That said, there's nothing wrong with using shims and wrappers. It's a viable way to go from where you are now into a serverless environment. It might not be your desired endpoint, but it might be a really good option.

But what if you're already using a micro-framework? What do you do with that? A micro-framework for microservices might look like this. Here's a to-do list service. All of your endpoints are encapsulated in that same service. And so, you see here that there are four HTTP verbs for your CRUD functions; an illustrative sketch follows below.
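
As an illustration (not the course's actual application), that kind of encapsulated service might look like this in Flask, with an in-memory dictionary standing in for a real database:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = {}  # in-memory store, for illustration only

@app.route("/todos", methods=["GET"])
def list_todos():
    return jsonify(list(todos.values()))

@app.route("/todos", methods=["POST"])
def create_todo():
    item = request.get_json()
    todos[item["id"]] = item
    return jsonify(item), 201

@app.route("/todos/<todo_id>", methods=["PUT"])
def update_todo(todo_id):
    todos[todo_id] = request.get_json()
    return jsonify(todos[todo_id])

@app.route("/todos/<todo_id>", methods=["DELETE"])
def delete_todo(todo_id):
    todos.pop(todo_id, None)
    return "", 204
```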

So there are some pros and cons to taking these existing micro-frameworks and making them serverless. Again, you get to use existing frameworks that we've been using for years and are proficient with. URL routing is really easy: if you've used something like Express, then you know you can very easily map a route that binds to a function.

There are a lot of plugins. It's an easy way to port existing REST APIs. Let's say you're using something like Sinatra, Flask, or Express. Having something like that in a serverless environment is pretty lightweight. If it's not natively supported, it's really easy to just grab some sort of wrapper and run it as a serverless option.

Simplification of deployments. A lot of times we're using containers for microservices. If you're using Docker, or even just cgroups and namespaces, something like that, you have to manage all of that, right? You have to think about that if you're using containers, and there's an orchestration tier on top of that.

That's a whole other layer of abstraction you have to manage; it's more for the ops team to deal with. So taking that out of the container and shifting it to serverless may make a lot of sense, and it's pretty valuable. Again, we inherit no server management and lower cost. The cons: again, we're not yet native serverless in the way most platforms consider native serverless.

The unit of deployment is the entire service. The unit of scale is the entire service. And possibly slower cold starts, though probably not slower than some giant monolith. So I've mentioned native serverless a few times, but I haven't really talked about it. So what exactly is native serverless? Native serverless is each function doing one thing.

So you can see here, this is that same to-do list service, only it's broken down so this GET request lives in its own code file; it's one function that does one thing. The same goes for the POST, the PUT, the DELETE, etc. So, each function does one thing; a sketch of that standalone GET function follows below.
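
Here's what that standalone GET function might look like, assuming a hypothetical DynamoDB table named "todos":

```python
import json
import boto3

# One small file, one small function. The "todos" table name is an
# assumption for illustration.
table = boto3.resource("dynamodb").Table("todos")

def handler(event, context):
    # One function, one job: fetch every to-do item and return it as JSON.
    items = table.scan()["Items"]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # default=str handles the Decimal numbers DynamoDB returns.
        "body": json.dumps(items, default=str),
    }
```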

This may sound familiar to some of you, because it's basically the Unix philosophy: we should write programs that do one thing and do it well. We should write programs that work together. We should write programs to handle text streams, because that's a universal interface. So if we borrow from this, we could have ourselves a pretty useful serverless philosophy.

And we can say that it is: "Write functions that do one thing and do it well." So there's no change there. "Write functions that work together." Again, no change. "Write functions to handle JSON, because that is a universal interface." And there's a bit of a change; a sketch of functions cooperating over JSON follows below.
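
For instance, one function can hand a JSON payload straight to another. This is a minimal sketch; the "notify-user" function name and the event fields are assumptions:

```python
import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Functions that work together, speaking JSON: invoke another
    # function with a JSON payload and read back its JSON response.
    payload = {"todo_id": event["todo_id"], "action": "completed"}
    response = lambda_client.invoke(
        FunctionName="notify-user",  # hypothetical downstream function
        Payload=json.dumps(payload).encode(),
    )
    return json.loads(response["Payload"].read())
```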

So we want functions that are as minimalist as possible: the fewer lines of code, the fewer lines you have to debug later, or that somebody else has to manage down the road. Do one thing, and do it well. Execute quickly; your functions should be as fast as possible. And they should be idempotent. That last one basically means there should be no unintended side effects if you run it multiple times, as in the sketch below.
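
Here's a minimal sketch of an idempotent operation, again assuming a hypothetical "todos" DynamoDB table:

```python
import boto3

table = boto3.resource("dynamodb").Table("todos")  # assumed table name

def mark_complete(todo_id):
    # Setting "completed" to true is naturally idempotent: running this
    # once or a billion times leaves the item in exactly the same state.
    table.update_item(
        Key={"id": todo_id},
        UpdateExpression="SET completed = :done",
        ExpressionAttributeValues={":done": True},
    )
```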

So you can run this once or a billion times, and the effect will be the same; once the change you want has been made, running it again shouldn't do anything further. So let's look at the pros and cons of native serverless. Pros: we have minimalist code. We have faster cold starts. Everything becomes an API.

And what I mean by that is we get to start thinking of our application in terms of an API. And there's a lot of value in that, because we shift a lot of the burden to the front end. The front end needs to call a service to authenticate, so maybe it's Auth0. It calls that, gets its token, and hands it off to a Lambda function to validate that the user is who they say they are and can do what they claim they can do.

And then we have API Gateway. This is specific to AWS, but the general principles apply. API Gateway can cache that user's credentials so they get rights to execute certain Lambda functions, and all of a sudden we have fewer lines of code in every function. We don't have to authenticate every single user in every function and verify they are who they say they are; a sketch of that authorizer idea follows below.
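
Here's roughly what a minimal Lambda token authorizer looks like. The PyJWT library and the HS256 shared secret are assumptions for illustration; a real Auth0 integration would verify tokens against Auth0's published signing keys:

```python
import jwt  # PyJWT, an assumed dependency for this sketch

SECRET = "replace-me"  # assumption: HS256 shared secret, for illustration

def handler(event, context):
    # Validate the bearer token, then return an IAM policy that API
    # Gateway can cache and apply to subsequent requests.
    token = event.get("authorizationToken", "").replace("Bearer ", "")
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        effect = "Allow"
    except jwt.InvalidTokenError:
        claims = {}
        effect = "Deny"
    return {
        "principalId": claims.get("sub", "anonymous"),
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```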

Nor do we have to think about session tokens in our app. So we get a lot of power by shifting much of the functionality into the front end, thinking of the back end as an API, and leveraging existing APIs, things like Stripe and Auth0, to do the stuff that our API can't do or shouldn't have to think about.

No server management, lower cost. Again, inherited from serverless. Cons: as soon as we take out the server, we require a higher platform mastery. We know how to work with servers; we've been using them since we started using computers. This is a known thing for us. Once we take them out, we have a bunch of different services that interact: your Lambda code talking to a database, your Lambda code being proxied through API Gateway.

You have to understand these services and how they play together, how they're secured, and it requires a higher platform mastery. That's a nontrivial thing. Smaller talent pool. We as engineers, as developers, we know how to write code to do whatever it is we need to do. But using these services and mastering these services is kind of a separate layer on top of that development.

So the talent pool out there is smaller, so if you have an entire application that's serverless, that's great, but you might have a harder time finding people to really work on that at the level you need. And then, a potential con is platform lock-in. If you are going full AWS native, full Azure native, full Google Cloud native, whatever it might be, you're basically locked into that platform.

That may or may not be a con to each individual. You know, for small start-ups, that might not be a big deal. For large companies, it might be. So when you think about building native serverless applications, you'll be working with a lot of different services and APIs, and using the front end as a facilitator.

If you need to authenticate, you can include an authentication service. If you need file storage, you can include a file storage service. So when the UI becomes a facilitator for these different services and can call them directly, you get to remove a lot of the code in the back end. And that allows you to focus on your business domain without investing development effort into solved problems such as authentication.

Okay, let's wrap up here. We've seen a glimpse of the pros and cons of different approaches, so keep those in mind as you go through the rest of this course. Alright, in the next lesson, we're going to look at what we're going to build in this course at a high level. So, if you're ready to keep going, then I'll see you in the next lesson.


About the Author

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.