Setting our Usage Plan, API Keys and Throttling

Contents

Course: Creating a REST endpoint with API Gateway
1. Introduction (Preview, 32s)
Difficulty: Beginner
Duration: 23m
Students: 9329
Ratings: 4.1/5
Description

The first step in creating a serverless function is generally to create a REST interface that allows clients to interact with your backend services. In this course, we step through creating an API using the AWS API Gateway service.

Transcript

- Another thing we need to do before deploying our APIs is to set a usage plan. Usage plans are a way of throttling APIs and ensuring they have all the right credentials set, and one of the key components is setting an API key for each deployment stage. Now, this API key isn't used to control access to your API; you will recall that is done by an IAM role. The API key protects the transport of any requests to your API. So let's create a Get Items Key, and we have a couple of options here: we can auto-generate it, or use a custom key generated outside of AWS. Once we have the key enabled, we select the usage plan to which it will be applied, in this case the one we've just defined, and Save.

Now, under the method request for /items/{id}, we can decide whether or not authentication will be set and whether a key is required. So we set that value to true, and we can do the same for any subsequent APIs. Alright, so once we've done that we can deploy our API again: we set the stage and choose Deploy. Then we can test our API using the Invoke URL, and provided we give it the right path, which is /items, it will return the list of items correctly.

These stage editor parameters allow us to determine any throttling requirements we may have. Throttling can be set per API, and if it is enabled we can limit how many requests are made. Again, it's not a requirement, but it does enable us to limit access.

Another option is cross-origin resource sharing, or CORS. This allows browsers to make HTTP requests to services from different domains, or origins, so we can say exactly which cross-origin requests will be enabled on our API endpoint. We can leave ours as GET and OPTIONS, and once we confirm the changes we'll see a list of the supported methods, which we can replace with our defined methods, and API Gateway goes through and does that for us. So our API endpoint has been configured for cross-origin requests.

Under each of our stages we can then set our requirements. We may only throttle access to our development stage, and we may allow caching on our prod stage. When we're setting the cache capacity, this allows the API to cache responses and reduce our costs somewhat. We can set a time-to-live value, which we'll set to 3600 seconds. We can encrypt the cache data, which is an additional security feature, and we can set unauthorized requests to fail with a 403 status code. So if an unauthorized request is made to the URL, we're not going to give away any information about why; it's just going to return a 403 to the unauthorized requester. And for our log level, we can set it to log only error requests, or we can have CloudWatch log every request to our API. Under throttling we can limit the requests per second and set a burst value: how many requests per second will we throttle this API to?

So we've set our dev and prod stages to have different settings: we've allowed caching on our production environment, and we've throttled our dev environment. In the next steps, we'll create and implement a Lambda function as a backend for the two HTTP endpoints that we've just defined.
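The key and usage-plan steps described above can also be scripted. Below is a minimal sketch using boto3; the REST API id, resource id, key name, and throttle numbers are placeholders rather than values from the demo:

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "abc123"       # placeholder: your REST API id
ITEM_RESOURCE_ID = "def456"  # placeholder: resource id for /items/{id}

# Create the API key (auto-generated value, as in the console walkthrough).
key = apigw.create_api_key(name="GetItemsKey", enabled=True)

# Create a usage plan with throttling limits and attach the dev stage to it.
plan = apigw.create_usage_plan(
    name="items-usage-plan",
    throttle={"rateLimit": 10.0, "burstLimit": 20},  # made-up limits
    apiStages=[{"apiId": REST_API_ID, "stage": "dev"}],
)

# Associate the key with the usage plan.
apigw.create_usage_plan_key(usagePlanId=plan["id"], keyId=key["id"], keyType="API_KEY")

# Require the key on the GET method of /items/{id}.
apigw.update_method(
    restApiId=REST_API_ID,
    resourceId=ITEM_RESOURCE_ID,
    httpMethod="GET",
    patchOperations=[{"op": "replace", "path": "/apiKeyRequired", "value": "true"}],
)
```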

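Deploying the stage and then exercising the Invoke URL with the key can be sketched the same way; the URL, region, and key value below are placeholders:

```python
import json
import urllib.request

import boto3

apigw = boto3.client("apigateway")
REST_API_ID = "abc123"  # placeholder

# Deploy (or redeploy) the API to the dev stage.
apigw.create_deployment(restApiId=REST_API_ID, stageName="dev")

# Call the deployed endpoint, passing the key in the x-api-key header.
INVOKE_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/dev"  # placeholder
API_KEY_VALUE = "xxxxxxxxxxxxxxxxxxxx"                                 # placeholder

req = urllib.request.Request(
    f"{INVOKE_URL}/items",
    headers={"x-api-key": API_KEY_VALUE},  # omit this header to see a 403 instead
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, json.loads(resp.read()))
```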
 
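The CORS change the console makes, an OPTIONS method backed by a mock integration that returns the Access-Control-Allow-* headers, looks roughly like this when scripted; the resource id and the allowed origins, methods, and headers are assumptions:

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "abc123"        # placeholder
ITEMS_RESOURCE_ID = "ghi789"  # placeholder: resource id for /items

# OPTIONS method with no auth, answered by a mock integration.
apigw.put_method(restApiId=REST_API_ID, resourceId=ITEMS_RESOURCE_ID,
                 httpMethod="OPTIONS", authorizationType="NONE")
apigw.put_integration(restApiId=REST_API_ID, resourceId=ITEMS_RESOURCE_ID,
                      httpMethod="OPTIONS", type="MOCK",
                      requestTemplates={"application/json": '{"statusCode": 200}'})

# Declare which CORS headers the 200 response may carry...
apigw.put_method_response(
    restApiId=REST_API_ID, resourceId=ITEMS_RESOURCE_ID,
    httpMethod="OPTIONS", statusCode="200",
    responseParameters={
        "method.response.header.Access-Control-Allow-Origin": True,
        "method.response.header.Access-Control-Allow-Methods": True,
        "method.response.header.Access-Control-Allow-Headers": True,
    },
)
# ...and the literal values to return for them.
apigw.put_integration_response(
    restApiId=REST_API_ID, resourceId=ITEMS_RESOURCE_ID,
    httpMethod="OPTIONS", statusCode="200",
    responseParameters={
        "method.response.header.Access-Control-Allow-Origin": "'*'",
        "method.response.header.Access-Control-Allow-Methods": "'GET,OPTIONS'",
        "method.response.header.Access-Control-Allow-Headers": "'Content-Type,x-api-key'",
    },
)
```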

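The per-stage settings, caching with a 3600-second TTL and encrypted cache data on prod, error-level logging and throttling on dev, map onto stage patch operations. A sketch with the same placeholder API id and made-up limits:

```python
import boto3

apigw = boto3.client("apigateway")
REST_API_ID = "abc123"  # placeholder

# prod: enable the cache cluster, cache every method with a 3600s TTL,
# encrypt cached data, and fail unauthorized cache-control requests with 403.
apigw.update_stage(
    restApiId=REST_API_ID, stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {"op": "replace", "path": "/*/*/caching/enabled", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/ttlInSeconds", "value": "3600"},
        {"op": "replace", "path": "/*/*/caching/dataEncrypted", "value": "true"},
        {"op": "replace", "path": "/*/*/caching/unauthorizedCacheControlHeaderStrategy",
         "value": "FAIL_WITH_403"},
    ],
)

# dev: error-level CloudWatch logging plus request throttling.
# (Logging assumes the account-level CloudWatch Logs role is already configured.)
apigw.update_stage(
    restApiId=REST_API_ID, stageName="dev",
    patchOperations=[
        {"op": "replace", "path": "/*/*/logging/loglevel", "value": "ERROR"},
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "10"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "20"},
    ],
)
```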

About the Author
Students: 184316
Courses: 72
Learning Paths: 187

Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.