Setting our Usage Plan, API Keys and Throttling


The first step in creating a serverless function is generally to create a REST interface that allows clients to interact with your backend services. In this course, we step through creating an API using the AWS API Gateway service.


- Another thing we need to do before deploying our APIs is set up a usage plan. Usage plans are a way of throttling APIs and ensuring that callers present the right credentials, and one of the key components is setting an API key for each deployment stage. Now, this API key isn't used to control access to your API; you will recall that is done by an IAM role. Rather, the API key identifies the client making the request so that the usage plan's throttling and quota limits can be applied to it. So let's create a Get Items Key, and we have a couple of options here: we can auto-generate it, or supply a custom key generated outside of AWS. Once we have the key enabled, we select the usage plan to which it will be applied, choose the usage plan that we've defined, and Save.

Now, under the method request for the item resource with its variable ID, we can decide whether authorization will be required for that method and whether a key is required. So we set API Key Required to true, and we can do the same for any subsequent methods. All right, once we've done that we can deploy our API again: we select the stage and choose Deploy. Then we can test our API using the Invoke URL, and provided we give it the right path, which is items, it will return the list of items correctly.

These stage settings allow us to determine any throttling requirements we may have. Throttling can be set per API, and if it is enabled we can limit how many requests are made. Again, it's not a requirement, but it does enable us to limit access to the API.

Our other option here is cross-origin resource sharing, or CORS. CORS allows browsers to make HTTP requests to services in different domains or origins, so we can specify exactly which cross-origin requests will be enabled on our API endpoint. We can leave ours as GET and OPTIONS, and once we confirm the changes we'll see a list of the supported methods. We can replace those with our defined methods, and API Gateway goes through and applies that for us.
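The same key, usage plan, and throttle settings we just clicked through in the console can be sketched as a CloudFormation fragment. The logical names (`GetItemsKey`, `ItemsUsagePlan`, `ItemsApi`) and the rate and burst numbers below are illustrative assumptions, not values from the course:

```yaml
# Illustrative CloudFormation sketch of the console steps above.
# ItemsApi is assumed to be an AWS::ApiGateway::RestApi defined elsewhere.
Resources:
  GetItemsKey:                        # the API key created for the stage
    Type: AWS::ApiGateway::ApiKey
    Properties:
      Name: GetItemsKey
      Enabled: true                   # the key must be enabled before use

  ItemsUsagePlan:                     # the usage plan that throttles the API
    Type: AWS::ApiGateway::UsagePlan
    Properties:
      UsagePlanName: items-plan
      ApiStages:
        - ApiId: !Ref ItemsApi
          Stage: dev                  # the deployment stage the plan covers
      Throttle:
        RateLimit: 100                # steady-state requests/second (example value)
        BurstLimit: 200               # short-term burst allowance (example value)

  ItemsPlanKey:                       # associates the key with the usage plan
    Type: AWS::ApiGateway::UsagePlanKey
    Properties:
      KeyId: !Ref GetItemsKey
      KeyType: API_KEY
      UsagePlanId: !Ref ItemsUsagePlan
```

Setting API Key Required to true on a method corresponds to the `ApiKeyRequired` property on the matching `AWS::ApiGateway::Method` resource.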
So our resource and API endpoint have been configured for cross-origin requests, and under each of our stages we can set our requirements: we may throttle access only on our development stage, and we may allow caching on our prod stage. When we set the cache capacity, we're allowing API Gateway to cache responses and reduce our costs somewhat. We also consider the time-to-live value, which we'll set to 3600 seconds. We can encrypt the cache data, which is an additional security feature, and we can require authorization on cache requests, failing unauthorized ones with a 403 status code. So if an unauthorized request is made to the URL, we're not going to give away any information about why; we just return a 403 to the unauthorized requester.

For our log level, we can log just error requests, or we can set CloudWatch to log every request to our API. Under throttling, we can limit the requests per second and set a burst value for requests: how many requests per second will we throttle this API to?

So we've set our dev and prod stages to have different published settings: we've allowed caching on our production environment, and we've throttled our dev environment. In the next steps, we'll create and implement a Lambda function as a backend for the two HTTP endpoints that we've just defined.
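Those per-stage settings can likewise be sketched in CloudFormation. The TTL of 3600 seconds, encrypted cache, and error-level logging echo the walkthrough; the resource names, cache size, and throttle numbers are assumptions for illustration:

```yaml
# Illustrative stage configuration; ItemsApi and ItemsDeployment are
# assumed to be a RestApi and Deployment defined elsewhere in the template.
ProdStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    StageName: prod
    RestApiId: !Ref ItemsApi
    DeploymentId: !Ref ItemsDeployment
    CacheClusterEnabled: true       # caching allowed on the prod stage
    CacheClusterSize: "0.5"         # smallest cache capacity, in GB
    MethodSettings:
      - ResourcePath: "/*"          # apply to every resource in the stage
        HttpMethod: "*"             # ...and every method
        CachingEnabled: true
        CacheTtlInSeconds: 3600     # time-to-live from the walkthrough
        CacheDataEncrypted: true    # encrypt cached data at rest
        LoggingLevel: ERROR         # log only error requests to CloudWatch
        ThrottlingRateLimit: 50     # example requests/second limit
        ThrottlingBurstLimit: 100   # example burst allowance
```

A dev stage would typically flip these choices: throttling on, caching off.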

About the Author


Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.