Advanced API Gateway

Contents

Course Introduction
  • Introduction (2m 26s)
Utilizing Managed Services and Serverless Architectures to Minimize Cost
Decoupled Architecture
AWS Step Functions
  • AWS Step Functions (9m 55s)
Amazon API Gateway
  • Advanced API Gateway (11m 29s)
Amazon Elastic MapReduce
  • Introduction to EMR (1m 46s)
Amazon EventBridge
  • EventBridge (7m 58s)
Design Considerations

The course is part of this learning path

Advanced API Gateway
Difficulty
Intermediate
Duration
4h 57m
Students
131
Ratings
4.2/5
Description

This section of the AWS Certified Solutions Architect - Professional learning path introduces common AWS solution architectures relevant to the AWS Certified Solutions Architect - Professional exam and the services that support them. These services form a core component of running resilient and performant architectures. 

Want more? Try a Lab Playground or do a Lab Challenge!

Learning Objectives

  • Learn how to utilize managed services and serverless architectures to minimize cost
  • Understand how to use AWS services to process streaming data
  • Discover AWS services that support mobile app development
  • Understand when to utilize serverless services within your AWS solutions
  • Learn which AWS services to use when building a decoupled architecture
Transcript

One of the most important things to understand about API Gateway is how it manages a request. This request life cycle allows you to check in with a request and see how it is doing along its way through API Gateway. So to begin this lecture, we are going to dive into what each phase represents and why you might want to change a request along the way.

When a client sends a request to API Gateway, it travels through a series of checkpoints before ever reaching a backend service. These checkpoints give you an opportunity to inspect the request and make sure that everything is as you expect. This is particularly useful for filtering out requests that don't make sense, or ones that might give your backend some trouble.

Once this request is processed by the backend of your API, it will be sent through a few more stops along the way before it returns to the client. 

Each of these little stops is important and provides a valuable opportunity to reject, modify, update, or enhance a request along its journey.

Now I want you to look at this diagram. 

It shows the journey a request will have to make through the API when called by a client.

You can see there are four primary stages (or checkpoints as I like to call them) that allow you to do something to the request.

We have the method request, the integration request, the integration response, and the method response.

The method request is the client-facing side of the API that is used to access backend resources. Methods are accessed using HTTP verbs like GET, PUT, and DELETE. These methods are integrated with a backend service that performs a task the client wishes to use.

During this stage, we are able to check for API authorization, validate the request itself, or verify API keys. This portion of the life cycle gives us the opportunity to bounce out traffic before it even has a chance to touch the backend. This can save a lot of money by reducing load on those backend services.
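To make that concrete, here is a minimal boto3 sketch of attaching these checks to a method. This is just one possible wiring, and the API id, resource id, and validator name are hypothetical placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical identifiers -- substitute your own API and resource ids.
REST_API_ID = "abc123"
RESOURCE_ID = "res456"

# A request validator lets API Gateway reject malformed requests
# before they ever reach the backend.
validator = apigw.create_request_validator(
    restApiId=REST_API_ID,
    name="validate-body-and-params",
    validateRequestBody=True,
    validateRequestParameters=True,
)

# Require an API key and attach the validator to the GET method.
apigw.put_method(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    authorizationType="NONE",  # or AWS_IAM, COGNITO_USER_POOLS, CUSTOM
    apiKeyRequired=True,
    requestValidatorId=validator["id"],
)
```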

 

The integration request is the stage right before our backend services receive the method request. Here we have the opportunity to modify the request in a number of ways that might be helpful to the backend.

This is an important step, as sometimes the data the client sends might not be in a form that the backend will accept. As an example: imagine we ran a book store API and we had a request to get all book information from our book database. That request would naturally be presented by our customers as a GET request. However, we need to change our GET method request into a POST request to actually use the DynamoDB API.

As an aside: if we had an HTTP endpoint as our backend, instead of a service like DynamoDB, it would be much more likely that the method request and the backend service would use the same verb.

The integration request is also a good place to actually inform API Gateway where to send the DynamoDB Scan. Here we can define a mapping template that says look for a table named books and send the scan thataway.
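As a rough sketch of what that wiring could look like in boto3 (the ids, region, and execution role ARN below are hypothetical):

```python
import boto3

apigw = boto3.client("apigateway")

# Hypothetical ids and role -- substitute your own values.
REST_API_ID = "abc123"
RESOURCE_ID = "res456"
EXECUTION_ROLE = "arn:aws:iam::111122223333:role/apigw-dynamodb-scan"

# Map the client's GET onto DynamoDB's Scan action (a POST under the
# hood). The mapping template tells DynamoDB which table to scan.
apigw.put_integration(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",                 # the method request verb
    type="AWS",                       # direct AWS service integration
    integrationHttpMethod="POST",     # DynamoDB's API is POST-based
    uri="arn:aws:apigateway:us-east-1:dynamodb:action/Scan",
    credentials=EXECUTION_ROLE,
    requestTemplates={
        "application/json": '{"TableName": "books"}'
    },
)
```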

After the integration request has passed the information to the backend, which in our case is DynamoDB's API, the response from the backend will be returned to us. For our example, it might look something like this.

The integration response gives us an opportunity to modify the backend response before the client reads it. Let's take a look at the output from DynamoDB really quick.
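Since the slide isn't reproduced here, a hypothetical Scan response for our book store might look roughly like this:

```python
# A typical (hypothetical) DynamoDB Scan response: every attribute is
# wrapped in a type descriptor such as "S" (string) or "N" (number).
scan_response = {
    "Items": [
        {"title": {"S": "The First Book"}, "author": {"S": "A. Writer"}},
        {"title": {"S": "The Second Book"}, "author": {"S": "B. Author"}},
    ],
    "Count": 2,
    "ScannedCount": 2,
}
```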

Even though it is decently readable already, we can touch it up a little before it is returned to the client.

For our example here, I would like to have it returned in a cleaner, more JSON-object-like format. There is extra information in this response that our client doesn't need to see.

Just like during the integration request phase, we are able to set up a mapping in the mapping template for the integration response.

It is with this mapping that we can translate our DynamoDB output into something a little prettier. 
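Here is one possible mapping template for that, assuming our items carry the hypothetical title and author string attributes from the sample response above:

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "abc123"  # hypothetical
RESOURCE_ID = "res456"  # hypothetical

# A VTL mapping template that flattens DynamoDB's typed attributes
# ({"S": "..."}) into a plain JSON array of book objects.
RESPONSE_TEMPLATE = """\
#set($items = $input.path('$.Items'))
{
  "books": [
    #foreach($item in $items)
    {
      "title": "$item.title.S",
      "author": "$item.author.S"
    }#if($foreach.hasNext),#end
    #end
  ]
}"""

apigw.put_integration_response(
    restApiId=REST_API_ID,
    resourceId=RESOURCE_ID,
    httpMethod="GET",
    statusCode="200",
    responseTemplates={"application/json": RESPONSE_TEMPLATE},
)
```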

And there you have it.

Now our formatting looks much better and only has the relevant data being returned. 

The method response phase gives us a final moment to standardize all of our outputs and verify that they are actually reasonable. For example, if our API garbled the payload somewhere in the previous stages, we still have an opportunity to fail out and let the client know the API had an issue.

One of the hardest parts of working with any technology is making sure that it cannot be abused. It takes a ton of resources and human capital to create and operate our systems. Our goal as solutions architects is to be aware of how to mitigate these kinds of attacks within our architectures, so that our time, energy, and resources are not wasted.

 

A DDoS attack is something that everyone is pretty familiar with these days, and it can affect your APIs and API Gateway just the same as any other piece of public-facing technology. The good news is that by using API Gateway, we have already limited our attack surface and only have to worry about securing the front door (our API Gateway) and not everything behind it. Obfuscating our architectures like this is a great design practice that increases the security of sensitive components.

When you create your API Gateways, there are a few options for the API endpoint type. The options are the Edge-Optimized endpoint, the Regional endpoint, and the Private endpoint. Private endpoints deal with internal APIs, so we will set those aside here.

Looking at the Edge-Optimized endpoint: it provides access to your API directly through Amazon CloudFront distributions. This is good because it brings your data as close to the customer as possible. However, the downside to the Edge-Optimized endpoint is that the distribution is created and managed by API Gateway, which means you do not have direct control over it. Having direct control of the distribution would allow us to add another layer of security.

Our best way to deal with DDoS attacks against API Gateway is by using AWS WAF. WAF can be used directly with API Gateway when you use Regional endpoints. You can then associate your Regional endpoint API with your own Amazon CloudFront distribution to push it back out to the edge. By setting up our API in this fashion, we have complete control over the distribution while also having WAF integrated with our API.

When you use CloudFront and WAF together with API Gateway, you will need to configure a few settings to get the best results:

First off, you will have to configure the caching behavior for your distribution to forward all headers to the API Gateway Regional endpoint. This causes CloudFront to treat all content as dynamic and to skip caching it.

You will also need to protect your API Gateway against direct access. This can be done by configuring CloudFront to add a custom header to its requests back to the origin (API Gateway). You will need to create an API key, add it to this header, and configure API Gateway to check for it.
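One common way to set up that shared secret is an API key enforced through a usage plan. A minimal boto3 sketch, with hypothetical names and ids:

```python
import boto3

apigw = boto3.client("apigateway")

# Create an API key that only CloudFront will know. CloudFront is then
# configured (via an origin custom header) to send it as "x-api-key"
# on every request back to the API Gateway origin.
key = apigw.create_api_key(name="cloudfront-origin-key", enabled=True)

# API keys are enforced through a usage plan tied to a deployed stage.
plan = apigw.create_usage_plan(
    name="cloudfront-only",
    apiStages=[{"apiId": "abc123", "stage": "prod"}],  # hypothetical
)
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId=key["id"],
    keyType="API_KEY",
)
```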

CORS, or Cross-Origin Resource Sharing, is another great layer of security that we should add onto our API. API Gateway offers a simple console interface that gives you the power to limit access to your API to specific domains. This is another way to limit our attack surface from the outside.

CORS can also help you make sure you only receive certain method calls, like GET, POST, and DELETE. By doing this we are once again limiting the total possible attack vectors that can be applied to your API.

In order to get CORS functional, you do have to remember to implement an OPTIONS method that can respond to the OPTIONS preflight request with at least the Access-Control-Allow-Origin, Access-Control-Allow-Methods, and Access-Control-Allow-Headers response headers.
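A sketch of that OPTIONS setup with a mock integration in boto3, using a hypothetical allowed origin and hypothetical ids:

```python
import boto3

apigw = boto3.client("apigateway")

REST_API_ID = "abc123"  # hypothetical
RESOURCE_ID = "res456"  # hypothetical
CORS_HEADERS = {
    "method.response.header.Access-Control-Allow-Origin": "'https://example.com'",
    "method.response.header.Access-Control-Allow-Methods": "'GET,POST,DELETE,OPTIONS'",
    "method.response.header.Access-Control-Allow-Headers": "'Content-Type,x-api-key'",
}

# OPTIONS is answered by a MOCK integration -- no backend call needed.
apigw.put_method(restApiId=REST_API_ID, resourceId=RESOURCE_ID,
                 httpMethod="OPTIONS", authorizationType="NONE")
apigw.put_integration(restApiId=REST_API_ID, resourceId=RESOURCE_ID,
                      httpMethod="OPTIONS", type="MOCK",
                      requestTemplates={"application/json": '{"statusCode": 200}'})

# Declare the headers on the method response, then give them static
# values on the integration response.
apigw.put_method_response(
    restApiId=REST_API_ID, resourceId=RESOURCE_ID, httpMethod="OPTIONS",
    statusCode="200",
    responseParameters={h: False for h in CORS_HEADERS},
)
apigw.put_integration_response(
    restApiId=REST_API_ID, resourceId=RESOURCE_ID, httpMethod="OPTIONS",
    statusCode="200",
    responseParameters=CORS_HEADERS,
)
```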

API Gateway has another fine feature that can help deal with overuse of your APIs. If a DDoS attack or an otherwise unwanted amount of traffic were able to make it to your backend, we have the ability to throttle those requests.

Each API at its base level shares ten thousand requests per second, per region, per account. Of those 10,000 requests per second, API Gateway is able to handle up to 5,000 requests in a single burst. This means that within one millisecond, API Gateway is able to absorb up to 5,000 requests.

If anything more than that comes through within the time frame, API Gateway will reject the request with a 429 error, the Too Many Requests response status code. This is itself a soft limit and can be raised by contacting AWS directly and asking for an increase.
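On the client side, a well-behaved caller should treat that 429 as a signal to back off and retry. A minimal sketch using just the Python standard library:

```python
import time
import urllib.request
import urllib.error

def get_with_backoff(url: str, max_retries: int = 5) -> bytes:
    """Retry a request with exponential backoff when throttled (HTTP 429)."""
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise                  # only retry throttling errors
            time.sleep(2 ** attempt)   # 1s, 2s, 4s, 8s, ...
    raise RuntimeError("still throttled after retries")
```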

However, if you are hitting this limit, you might want to check whether this traffic is legitimate. If it is, we can further discuss caching options to help improve the health of your API, or look for a limit increase if a cache has already been integrated.

Any other traffic that comes after that initial burst spike will be dealt with normally as long as it is within the throttle limit.

This throttling can be broken down further if we look at it per stage. We have the option to set a new default max throttle limit per stage that overrides the regional throttle. This override allows you to set a lower limit, say 2,000 requests, instead of letting a stage use up to the entire regional max. This stage-level throttle can never be raised above the regional throttle limit.

Finally, we also have throttling limits per method, per stage, that can override the default limit set earlier. This allows us to set unequal throttle limits on a per-method basis.

For example, we could set a throttle max for our GET method of 5,000 requests, while all of the other methods (five in this example) would each have a 1,000-request limit. This would give us our total of 10,000 requests per second (staying within the regional throttle), but distributed based on the weighting we desire.
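Stage-level and method-level overrides like these are applied as patch operations on the stage. A rough boto3 sketch (the id, stage name, resource path, and values are hypothetical):

```python
import boto3

apigw = boto3.client("apigateway")

# "/*/*" addresses every method in the stage; a concrete resource path
# plus verb addresses one method. "~1" encodes "/" inside a path.
apigw.update_stage(
    restApiId="abc123",  # hypothetical
    stageName="prod",
    patchOperations=[
        # stage-wide default, lower than the regional limit
        {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": "2000"},
        {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": "1000"},
        # method-level override: GET on /books gets a larger share
        {"op": "replace", "path": "/~1books/GET/throttling/rateLimit", "value": "5000"},
    ],
)
```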

Normally when we think of TLS authentication, we are looking for the server to authenticate itself. However, it is sometimes necessary and prudent to have the client do the same. API Gateway allows you to use certificate-based mutual Transport Layer Security (TLS) authentication to help achieve this goal.

Mutual TLS uses X.509 certificates to help with identity authentication and is commonly used for business-to-business applications. You will also find use cases for mutual TLS within the IoT realm. For example, it would be important for an external camera that connects with our API for ML identification purposes to actually authenticate itself as one of our devices.

To set up mutual TLS, you will need to create and upload a CA public key certificate bundle to API Gateway. You can create these keys using the strangely specific AWS Certificate Manager Private Certificate Authority service, or you could also use OpenSSL.
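Once that truststore bundle is sitting in S3, enabling mutual TLS is a property of a custom domain name. A sketch in boto3, with hypothetical domain, certificate ARN, and bucket values:

```python
import boto3

apigw = boto3.client("apigateway")

# Mutual TLS is enabled on a custom domain name; the truststore is the
# CA certificate bundle (PEM) uploaded to S3 beforehand.
apigw.create_domain_name(
    domainName="api.example.com",  # hypothetical
    regionalCertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/abc",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
    mutualTlsAuthentication={
        "truststoreUri": "s3://my-bucket/truststore.pem"  # hypothetical bucket
    },
)
```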

For more information on how to actually set everything up, please take a look at this blog post from AWS over here.

API Gateway also supports private integrations with your backend services. This integration allows you to create APIs that can be driven from Amazon EC2, Amazon Elastic Container Service, and Amazon Elastic Kubernetes Service. This integration is important for customers who want to expose private resources within their VPC to the public.

In order to set up a private integration, you will first need to create a VPC link. The VPC link is built on top of another AWS service called AWS PrivateLink, which in turn allows access to AWS services and other private customer AWS infrastructure. This access is maintained within the AWS network and never travels over the public internet. Totally private access such as this is paramount for many medical and governmental workloads, for example.

Your VPC link can be connected to an Application Load Balancer, a Network Load Balancer, or even the AWS Cloud Map service.
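For a REST API, the VPC link itself targets a Network Load Balancer. A rough boto3 sketch of creating the link and pointing a private integration through it (all ARNs and ids hypothetical):

```python
import boto3

apigw = boto3.client("apigateway")

# Create the VPC link against a (hypothetical) Network Load Balancer.
# Note: the link is created asynchronously and takes a few minutes to
# reach the AVAILABLE state.
link = apigw.create_vpc_link(
    name="books-backend-link",
    targetArns=["arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "loadbalancer/net/books-nlb/abc123"],
)

# Point a private integration through the link.
apigw.put_integration(
    restApiId="abc123",   # hypothetical
    resourceId="res456",  # hypothetical
    httpMethod="GET",
    type="HTTP_PROXY",
    integrationHttpMethod="GET",
    uri="http://books.internal.example.com/books",  # hypothetical backend
    connectionType="VPC_LINK",
    connectionId=link["id"],
)
```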

As an aside, one of the main reasons you would want to use and connect with a Network Load Balancer over an Application Load Balancer is that it offers the highest performance among the load balancer types. Additionally, it uses static IP addresses and can be assigned an Elastic IP address; this is not possible with an Application Load Balancer.

Anywhoo, if for some reason there is no traffic sent over this VPC link for 60 days, it will become inactive. If this does happen, API Gateway will delete all of the VPC link's network interfaces. This will in turn cause all API requests that use the VPC link to fail, so keep that in mind.

About the Author
Students
50451
Courses
27
Learning Paths
24

Danny has over 20 years of IT experience as a software developer, cloud engineer, and technical trainer. After attending a conference on cloud computing in 2009, he knew he wanted to build his career around what was still a very new, emerging technology at the time — and share this transformational knowledge with others. He has spoken to IT professional audiences at local, regional, and national user groups and conferences. He has delivered in-person classroom and virtual training, interactive webinars, and authored video training courses covering many different technologies, including Amazon Web Services. He currently has six active AWS certifications, including certifications at the Professional and Specialty level.