Build AWS Serverless Web Applications With Python

Unit Test Review

The course is part of these learning paths:

  • Serverless Python Web Development For AWS
  • Serverless Computing on AWS for Developers

Overview

Difficulty: Intermediate
Duration: 1h 59m
Students: 1,100
Rating: 5/5

Description

For years, web development has continued to evolve alongside programming languages, tooling, and frameworks. It started out with static web sites before moving on to dynamic sites that were rendered on the server. Over time, as JavaScript frameworks gained functionality and popularity, there was a shift towards putting more of the logic into the front-end, and using the back-end as a supporting API.

Throughout all the changes in web development over the years, the server has been a constant. Regardless of the languages, tools, and frameworks used, there’s always a server running the code. And that’s something that hasn’t changed. What has changed is that cloud providers now make it easy for software engineers to focus on writing their code, without having to focus on the underlying server.

In this course you'll build a serverless web application using Python 3.6. You'll use Lambda, API Gateway, S3, DynamoDB, and Cognito to create a multi-user to-do list application based on Vue.js.

 

Learning Objectives

  • Outline the architecture of a serverless web application
  • Set up the AWS services required for the app
  • Create and deploy an API using Python 3.6
  • Explain the value of creating unit tests
  • Use a Cognito User Pool within your app

Intended Audience

  • Developers
  • DevOps Engineers
  • Site Reliability Engineers

Prerequisites

  • Familiarity with AWS
  • Development experience
  • Familiarity with the CLI

Transcript

Welcome back. In this lesson we're going to review the Python unit tests, so let's dive right in. You should be creating tests for your code, and the reason is that it's really easy to mess up a line of code. You could have a typo. You could make an API call with the wrong parameters. There's a lot that can go wrong.

Testing your code will verify that your code works the way you expect it to. Tests will allow somebody else to pick up your code base when you've moved on, and that allows them to understand what you intended. If they make a change and run your test suite and it shows all green, then they know everything is working as expected.

It's partly for you, because six months from now you're probably not going to remember how all of this works, but it's also for the people that come after you, to help them better understand what you intended. So let's look at this. When it comes to creating serverless applications, you don't have the benefit of having everything local.

So if you're using a DynamoDB database, you don't have that locally. Now, there are local versions that you can run for development purposes, and you'll probably need them when it comes to more feature-rich testing. But that's one of the tougher points of developing against all these remote services: you're going to have to mock out a lot.

That's where moto comes in. Remember I mentioned before, moto is a mock version of boto. What it will do is allow you to say, in this case, I wanna mock DynamoDB. Anytime somebody makes a call to DynamoDB with boto, I want you to intercept it and I don't want you to actually do anything. Here we're testing our create function.
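What moto does for every boto call, you can picture with a minimal stdlib analogue: hand the code a mock object instead of a real client, and the "AWS call" is intercepted and recorded without ever touching the network. The save_item function and the put_item response below are illustrative stand-ins, not the course's actual code:

```python
from unittest import mock

# A stand-in for a boto3 client whose put_item would normally hit the network.
fake_client = mock.MagicMock()
fake_client.put_item.return_value = {"ResponseMetadata": {"HTTPStatusCode": 200}}

def save_item(client, table_name, item):
    # With a mock client, this call is intercepted: nothing reaches AWS.
    return client.put_item(TableName=table_name, Item=item)

response = save_item(fake_client, "todo-test", {"item": {"S": "write tests"}})

# The mock recorded the call, so we can assert on exactly how it was made.
assert response["ResponseMetadata"]["HTTPStatusCode"] == 200
fake_client.put_item.assert_called_once_with(
    TableName="todo-test", Item={"item": {"S": "write tests"}}
)
```

Moto takes this idea further: it patches boto itself, so the code under test doesn't even need to accept an injected client.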

Let's keep this open. Let's also keep this one open. As I mentioned before, breaking these out will allow you to test them independently. We're testing our create function. First we're getting our client; that's our boto client. We're getting our table, and it's coming from this init function. If we look into that, you can see it's pulling from this dbconfig.

It's in our tests. What it's doing is setting up our DynamoDB client, so boto3.resource('dynamodb'). It doesn't really matter what region, as long as it's a legitimate region. We're setting up our table name. We're pulling it from an environment variable, or we're just calling it todo test. Again, it doesn't really matter, because it'll be mocked out.

Then we kick off the same code that we would use if we were actually creating this ourselves: dynamodb.create_table with the table name. This is our key schema, and it's just for the key. This might be a little confusing if you're used to defining your full schema for a SQL database table, where you'd have all of your columns defined.

This is just your key, so you don't need to worry about any other properties. Those can be dynamic; you can set whatever you want in code. This is our user ID, and our sort key of todo ID. We're going to give them a definition. We're saying that the user ID is a string; that's the S type here. The todo ID is also a string, and our read and write capacity units are both set to 10.
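Laid out as create_table keyword arguments, the table definition described here would look something like the following. The table and attribute names are assumptions based on the walkthrough, not copied from the course code:

```python
# Hypothetical create_table parameters matching the walkthrough: a composite
# key of userId (partition) and todoId (sort), both strings, 10/10 capacity.
table_params = {
    "TableName": "todo-test",
    "KeySchema": [
        {"AttributeName": "userId", "KeyType": "HASH"},   # partition key
        {"AttributeName": "todoId", "KeyType": "RANGE"},  # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "userId", "AttributeType": "S"},  # S = string
        {"AttributeName": "todoId", "AttributeType": "S"},
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
}

# Only the key attributes are declared; every other item property stays dynamic.
assert [d["AttributeName"] for d in table_params["KeySchema"]] == ["userId", "todoId"]
```

You would pass these to dynamodb.create_table(**table_params); anything beyond the two key attributes never appears in the table definition.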

Then down here we're saying table.meta.client, so this is just a way to get the client. We're calling get_waiter. This is functionality in boto that allows you to wait for something asynchronous to happen. So we're saying let's wait for the table to actually exist, and the table's name is table name.

This allows you, if you're actually running this without moto wrapping it, to ensure that some asynchronous call wraps up before you move on. In this case, we're just doing it to verify that the call actually completed successfully, even though it's mocked out. Then we're asserting that the table status is active.

So we wanna make sure that this mock version of Dynamo is behaving just the way a real table would and it's an active status. This is going to return a tuple of DynamoDB and the table. This allows us to have our client and our table. Going back to the code here, you can see we get our client. We get our table.

We create an item. We have a few properties that we're going to expect. The front end is going to pass on an object that should have an item property and a completed property. The item is the todo thing that we need to do. It's our todo item. And completed is obviously whether we have successfully completed the task or not.

The reason fake is here as a property is to prove that the whitelist functionality actually works; by adding a fake property, you can verify that. So we kick off our create function. We pass in a client. We pass in a user ID. This can just be some hard-coded string, because we really don't care what it is.

It just needs to match after the fact. We pass in our item. We pass in our table name, and then our whitelist. These are the properties that a user can set. We have our item, and we have our completed. Now, if we added fake to the whitelist, users would be allowed to set fake as well, but that's not the case. We wanna verify the results.

Make sure that they're not null. Make sure that the user ID was added for us because looking at the create function, this should add the user ID, should add the todo item ID. It should add a created date. Make sure that it's added. Make sure the length is what we expect. This is the sanity check to make sure that it's globally unique, that there's enough complexity here that we're not going to see duplicate keys.

We wanna verify that the item matches what's in this item array up here, so what we set should be what we get back. And we're verifying that the results are indeed completed because it's set to true up here. Then this final assertion is just making sure that the fake property is not in the results. We don't want it there.
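The whitelist behavior being asserted here can be sketched without any AWS dependencies. This is a hypothetical reimplementation of the create function's core logic under the assumptions described in the walkthrough (property and field names are illustrative):

```python
import uuid
from datetime import datetime, timezone

def create_item(user_id, data, whitelist):
    """Build a todo record, copying only whitelisted properties from user input."""
    item = {key: data[key] for key in whitelist if key in data}  # drops 'fake'
    item["userId"] = user_id                      # added for the caller
    item["todoId"] = str(uuid.uuid4())            # 36-char globally unique id
    item["createdDate"] = datetime.now(timezone.utc).isoformat()
    return item

posted = {"item": "buy milk", "completed": True, "fake": "should be dropped"}
result = create_item("some-user", posted, whitelist=("item", "completed"))

assert result["userId"] == "some-user"
assert len(result["todoId"]) == 36        # sanity check on key complexity
assert result["item"] == posted["item"]   # what we set is what we get back
assert result["completed"] is True
assert "fake" not in result               # non-whitelisted property filtered out
```

The assertions mirror the ones in the transcript: identifiers were added, the whitelisted values round-trip, and the fake property never makes it in.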

It doesn't belong, so we wanna make sure that if it's not in the whitelist, it doesn't get added. You can see here we also set a mock for DynamoDB on this function, so any calls, again, through boto to DynamoDB are faked. We get our client, we get our table. Then we wanna make sure that we raise an error, and that error is for when an object passed in from the user interface doesn't actually have a todo item.
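That error case, rejecting a payload with no todo item, reduces to a guard clause plus a test that the exception fires exactly when expected. A dependency-free sketch, with the exception type assumed:

```python
def create_item(user_id, data):
    # Guard clause from the walkthrough: reject payloads with no todo item.
    if "item" not in data:
        raise ValueError("item is required")
    return {"userId": user_id, **data}

# Verify the error is raised when (and only when) the item is missing.
try:
    create_item("some-user", {"completed": False})
    raised = False
except ValueError:
    raised = True

assert raised
assert create_item("some-user", {"item": "x"})["item"] == "x"
```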

So they can't create something with no item. That comes from this: if item is not in data, raise an exception. So this just verifies that that error gets thrown when we think it gets thrown. Our final test is a little more verbose than the other ones, but this is testing the handler itself. So we need to pass in an event.

We have our body, which we set to a string that represents this JSON here. It's the same item as above: the item, completed, and fake. But now we just have to make sure that it gets passed through the handler the way the handler expects it. If you look back here, remember the handler is going to deserialize this into a dictionary.

So it's going to take that JSON string and turn it into a dictionary from the event body. We set our body. We set our request context authorizer claims cognito username to one. If you look back here, this comes from our parse-username-from-claims logic. Remember, it's looking in the event at the request context authorizer claims for the Cognito username.

This is the event that API Gateway is gonna hand off to our Lambda function; it's gonna pass it in here. So we just need to make sure that it looks the way it will when it comes from API Gateway. Now, when it comes from API Gateway, there will be a lot more data in this event, but this is the data our function needs to actually be successful.

We call our handler. We pass in the event. Then we're just passing in an empty dictionary for our context. This function isn't using the context anywhere, so we don't have to worry about mocking that out. Grab the results. We assert that the status code should be in that results set, and that it should be set to 200.

We assert that the body should be in the results. This all comes from our helper. Remember, it's going to set the status code, and it's going to set the body and our headers; those are optional. Then we take that body and deserialize it, so it's going from a JSON string into a Python dictionary. We're going to make sure that the user ID is set to one, 'cause that's what got passed in through the request context.

We're gonna make sure that the todo ID got set and that it's the right length. We'll make sure that the item exists and that it's the exact same item in our posted item from our event. We'll make sure that it's complete, and we'll make sure that fake is not in the body. So what we're doing is verifying that not only did the whitelist work as we saw up here, but we're verifying that we made sure to set it correctly in the handler.
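The shape of that handler test can be sketched end to end with the standard library. The event layout below mirrors what an API Gateway Cognito authorizer produces (the claim key and field names are assumptions), and the handler body is a hypothetical stand-in for the course's function:

```python
import json

def handler(event, context):
    """Hypothetical Lambda handler: deserialize, whitelist, respond with 200."""
    data = json.loads(event["body"])  # JSON string -> Python dictionary
    user = event["requestContext"]["authorizer"]["claims"]["cognito:username"]
    item = {k: data[k] for k in ("item", "completed") if k in data}
    item["userId"] = user
    return {"statusCode": 200, "body": json.dumps(item)}

# A minimal slice of an API Gateway event: just the fields the handler reads.
event = {
    "body": json.dumps({"item": "buy milk", "completed": True, "fake": "x"}),
    "requestContext": {"authorizer": {"claims": {"cognito:username": "1"}}},
}

results = handler(event, {})  # context is unused, so an empty dict is fine
assert "statusCode" in results and results["statusCode"] == 200
assert "body" in results

body = json.loads(results["body"])      # deserialize the response body
assert body["userId"] == "1"            # came from the request context claims
assert body["item"] == "buy milk"
assert "fake" not in body               # whitelist applied inside the handler
```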

A bit of extra work to write out all these tests, that's for sure, but they're going to save you a lot of time and headache. I'm not going to go through all of them. I'll just quickly kind of skim through. You can see they're very similar. I recommend that you take the time to look at them, understand them and maybe even expand upon them so that you really understand how they're working.

But basically they're all the same thing. Here you can see get all. We wanna make sure that we're getting all of the records, so you have three records here. They're being created. This is for user ID of one, user ID of one, user ID of two. So now if we call get all with one, we expect two items, 'cause this is the two items created for user one.
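The get-all scenario just described reduces to per-user filtering. Here's an in-memory sketch of the behavior being asserted; the real test creates these records in the mocked DynamoDB table and queries it:

```python
# Three records: two created for user 1, one for user 2.
records = [
    {"userId": "1", "item": "first"},
    {"userId": "1", "item": "second"},
    {"userId": "2", "item": "third"},
]

def get_all(records, user_id):
    """Return only the records that belong to the given user."""
    return [r for r in records if r["userId"] == user_id]

# Calling get all with user 1, we expect exactly two items back.
assert len(get_all(records, "1")) == 2
assert len(get_all(records, "2")) == 1
```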

So this is the testing at a high level. All right, let's wrap up here and summarize the key points. Tests are important and sometimes with all of those remote services that we'll need to integrate, it's not easy to create unit tests, because you need to create mock versions of all of these different services.

Regardless of the programming languages that you use, there are going to be a lot of different options for unit testing and mock frameworks, so we have that going for us. In the next lesson, let's go through the Vagrantfile so that you're familiar with the development environment and the dev tools before we actually spool up the VM.

 

About the Author


Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.