
Introduction to API Gateway

Overview
Difficulty
Intermediate
Duration
21m
Students
200
Ratings
4.7/5
Description

This course covers Amazon's API Gateway service, which will help you create some incredibly robust APIs for your workloads. We'll look at the different types of APIs you can create and the fundamentals of how API Gateway works.

Learning Objectives

  • Understand what types of APIs API Gateway can create
  • Learn about the general differences between those API types
  • Learn how an API Gateway functions at a high level

Intended Audience

  • Solution architects
  • Developers
  • Anyone interested in creating their own business-ready APIs

Prerequisites

To get the most out of this course, you should have a decent understanding of cloud computing and cloud architectures, specifically on Amazon Web Services. It would be helpful to know about HTTP and web technologies in general, but it's not essential.

Transcript

Working with and creating APIs is a large part of most development processes. APIs are a helpful intermediary that allows you to abstract away code, services, architectures, and all the little details your clients don't need to know about. There are many ways to build an API within AWS, and some are more involved than others. For example, you could run a Node.js server on an EC2 instance that acts as your API for any backend services you wish to expose. However, that takes a little bit of know-how and requires you to deal with patching, security, and all those good things that an AWS managed service would normally do for you.

AWS has created a fully managed service called Amazon API Gateway, which helps with building, publishing, monitoring, securing, and maintaining APIs within AWS. It works quite well at any scale, and is able to support serverless workloads, generic web applications, and even containerized workloads on the backend. You can build your APIs for public use, for private use, or even for third-party developers. The best part is that it is entirely serverless: it does not require you to manage any infrastructure, and you pay only for what you use. The service can accept and process hundreds of thousands of concurrent requests. If things start to get out of hand, API Gateway is able to monitor all traffic and can throttle requests as desired.

The different ways you can expose your API. Now that you've determined a need for an API, and you want to build one using a managed service like API Gateway, the next question to ask is: how will your customers interface with your API? What I mean by this question is, where will this API be made available? API Gateway has three different endpoint types, each with different goals, that allow traffic to come into the API in different ways. The first option is an Edge-Optimized API endpoint. This option is best when you have many geographically distinct and distributed clients. For this endpoint type, all requests are routed to the closest CloudFront point of presence, bringing your API closer to your customers. Using this type of endpoint can help reduce TLS connection overhead, which speeds up the overall request/response cycle. I recommend this architecture for web applications, mobile applications, and possibly IoT devices that need to speak with an API.

The second option is the Regional API endpoint. This option works well when you plan to use your own content delivery network, i.e., a non-AWS CDN rather than CloudFront, or when you don't want to use a CDN at all. In general, this endpoint works well when you expect all of your API calls to come from within a single AWS Region. This can happen when your API is used by other applications or your own services that only exist within a specific region. Finally, the last option is a Private API endpoint. These endpoints can only be accessed from within your VPC through the use of an interface VPC endpoint, which is simply an ENI that you create within your VPC. These types of endpoints should be used for internal APIs, such as microservices or internal applications that only your organization needs to be aware of. The Private API endpoint gives you a lot of control over the networking space to help secure your API.
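The endpoint type is chosen when the API is created. As a hedged sketch of what that looks like with boto3 (the API name is a placeholder, and the AWS call is guarded behind a DRY_RUN flag so nothing runs without real credentials configured):

```python
# Sketch: creating a REST API with an explicit endpoint type via boto3.
# Valid endpoint types are "EDGE", "REGIONAL", and "PRIVATE".
rest_api_params = {
    "name": "orders-api",  # hypothetical API name
    "endpointConfiguration": {"types": ["REGIONAL"]},
}

DRY_RUN = True  # flip to False only with AWS credentials configured

if not DRY_RUN:
    import boto3

    client = boto3.client("apigateway")
    response = client.create_rest_api(**rest_api_params)
    print(response["id"])  # identifier of the newly created API
```

Swapping `"REGIONAL"` for `"EDGE"` or `"PRIVATE"` selects the other two endpoint types described above.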

Supported protocols. API Gateway has two supported protocols that you can create APIs for: HTTP REST endpoints and WebSocket endpoints. The HTTP type comes in two versions. The original version is called a REST API, while the newer version is just an HTTP API. Both of these versions serve HTTP requests using HTTP methods such as GET, POST, PUT, and DELETE. Additionally, both of them do so in a RESTful manner, despite the confusing naming convention.
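To make the newer HTTP API flavor concrete: it has a "quick create" path that stands up a working API in a single call. This is a sketch under assumptions (the API name and Lambda target ARN are placeholders, and the AWS call is guarded behind a DRY_RUN flag):

```python
# Sketch: "quick create" of an HTTP API that proxies every request to a
# Lambda function, using the apigatewayv2 client.
http_api_params = {
    "Name": "widgets-http-api",  # hypothetical API name
    "ProtocolType": "HTTP",
    # Placeholder function ARN -- substitute a real Lambda function.
    "Target": "arn:aws:lambda:us-east-1:123456789012:function:widgets",
}

DRY_RUN = True  # flip to False only with AWS credentials configured

if not DRY_RUN:
    import boto3

    client = boto3.client("apigatewayv2")
    response = client.create_api(**http_api_params)
    print(response["ApiEndpoint"])  # the invocation URL for the new API
```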

There are a number of differences between these two versions that I will describe momentarily. But it's important to know that the HTTP API, the newer version, is slowly gaining parity with its older counterpart. It will eventually surpass the REST API and is the current development path for API Gateway. The REST API was originally created to help deal with all aspects of the API lifecycle. It offers API management features such as usage plans and API keys, and helps with publishing and monetizing your APIs.

The HTTP API, on the other hand, was created and optimized for building an API that proxies to AWS Lambda functions or generic HTTP backends. It does not have the quality-of-life features that the REST API does, but that ends up making it around 70% cheaper. AWS says the following about which one you should use. HTTP APIs are ideal for: building proxy APIs for AWS Lambda or any HTTP endpoint, building modern APIs that are equipped with OIDC and OAuth 2 authorization, workloads that are likely to grow very large, and APIs for latency-sensitive workloads.

REST APIs are ideal for: customers looking to pay a single price point for an all-inclusive set of features needed to build, manage, and publish their APIs. Now, the WebSocket API is completely different from the other two. This type of API maintains a persistent connection between your backend and the client, allowing duplex communication. This means that a WebSocket API is good for creating real-time communication between applications, which might include chat apps or streaming-type services, and you can build them without having to deal with managing servers. However, if you have a spotty connection or poor latency, you might not get the exact behavior you expect.

In general, WebSocket APIs are pretty use-case specific. Integrating API Gateway with your services. What can you actually put behind your API Gateway? Knowing what an API can connect with on the backend is probably its most important feature. There is a whole host of things you can put an API Gateway in front of, but first let's talk about how the integration can take place: proxy versus direct integrations.

There are two different ways you can connect with your backend: through a proxy integration or through a direct integration. A proxy integration is basically a pass-through setup, where API Gateway passes the request straight to the backend without trying to modify anything along the way. A direct integration allows you to make changes to the request as it passes through API Gateway and to modify the response on its way back to the client.

The advantage of using a proxy integration is that it is very easy to set up. Almost everything is handled in the backend service, which is great for rapid prototyping. The advantage of using a direct integration is that it completely decouples API Gateway from the backend's request and response payloads, headers, and status codes. This allows you to make changes and not be locked into the backend service's responses. However, there is more work involved in getting this method of integration set up.
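To illustrate the proxy contract: with a Lambda proxy integration, the function receives the entire request and must assemble the full HTTP response itself. A minimal sketch of such a handler (the query parameter and message shape are illustrative):

```python
import json

# Sketch of a Lambda handler behind a *proxy* integration: API Gateway
# passes the whole request through untouched, and the function builds
# the complete HTTP response (status code, headers, body).
def handler(event, context):
    # Query string parameters may be absent entirely, hence the "or {}".
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

With a direct integration, mapping templates configured in API Gateway could reshape the request and response instead, keeping the function decoupled from this exact response format.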

Integration types. Now that we understand the difference between a proxy and a direct integration, it's time to see what the service can actually plug into on the backend. Lambda functions: you can connect your APIs to a Lambda function, either as a proxy or as a direct integration. HTTP integrations: you can also have API Gateway point to any public-facing endpoint, either inside or outside of AWS. This means you can have it sit in front of a public load balancer, an EC2 instance, or even something back on-premises, as long as it has a public endpoint. The data the client passes through will include request headers, query strings, URL path variables, and a payload.

The backend HTTP endpoint will parse that information and determine what it should respond with. This is available either as a proxy integration or a direct integration. A mock integration allows API Gateway to generate a response without the need for any backend integration whatsoever. It basically allows you to respond with whatever you want. This can be good for testing, or it can allow placeholder data to be returned before the backend service is ready to be integrated. This is available only as a direct integration.

An AWS service integration allows you to respond with an AWS service API response. When this is called, API Gateway will invoke a specific AWS service API for you. For example, you could use it to send a message to an Amazon SQS queue or to start a Step Functions state machine. This is good when you want your users to have access to an AWS service without necessarily having full access. This one is also only available as a direct integration.

VPC link integrations allow you to access private resources within your VPC. These resources could be things like Application Load Balancers or container-based applications. And just like the last couple, this is also direct integration only. With all these integrations, you can pretty much handle any type of backend service that your customers would want. Now, not all of these integration types are available for every API Gateway protocol, so let's take a moment to go over that.

API Gateways using a REST API have access to all five types of integrations: Lambda functions, HTTP, mock, AWS service, and VPC link. API Gateways using HTTP APIs can use Lambda proxy integrations, AWS service integrations, and HTTP proxy integrations. And API Gateways using WebSocket APIs have access to Lambda functions, HTTP endpoints, or AWS services. API Gateway authorizers. Amazon API Gateway has a number of authorization options available for your HTTP and REST APIs. Some of these, however, are API-specific, and I will note those when they come up.

The first option is to use an IAM authorizer. This route uses AWS Identity and Access Management (IAM), and it allows you to create unique credentials that you can distribute to your API clients. It does require a Signature Version 4 signed request from the client, as well as requiring the client to have execute-api permissions. This option is available with the HTTP API, the REST API, and the WebSocket API.

The second option is to use a Lambda authorizer: you can have API Gateway execute a Lambda function that runs a custom authorization model. This means if you have a third-party auth system of some kind, or some legacy system, this function can return whatever you need from there. This option is also available with the HTTP, REST, and WebSocket APIs. Our third option is to use a Cognito authorizer. This method is a direct integration with Cognito user pools, which allows you to do complete user management, meaning access to login pages, multi-factor auth, and user info. This option is only available for the REST API. For more info about Cognito and Cognito user pools, please check out this lecture over here.

And the final option is to use a JWT authorizer. This authorizer allows you to plug in anything that is OAuth 2 compliant, including any service that supports OpenID Connect (OIDC). Which is funny, because that means you can actually use Cognito through this authorizer type. I bring that up because I mentioned Cognito authorizers were only available for REST APIs; using this method, you can use a JWT authorizer to link to Cognito. This option is only available for HTTP APIs.
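As a sketch of the Lambda authorizer option described above: for a REST API, the function receives the caller's token and returns an IAM policy allowing or denying `execute-api:Invoke`. The token check and principal name below are stand-ins for a real auth system:

```python
# Sketch of a REST API *Lambda authorizer*. API Gateway invokes this
# function before the real integration; the returned IAM policy decides
# whether the request proceeds.
def authorizer_handler(event, context):
    token = event.get("authorizationToken", "")
    # Placeholder check -- a real implementation would validate the token
    # against a third-party or legacy auth system.
    effect = "Allow" if token == "let-me-in" else "Deny"
    return {
        "principalId": "example-user",  # hypothetical principal
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event.get("methodArn", "*"),
            }],
        },
    }
```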

API Gateway security. There are many ways that people can try to interfere with the workings of your API, and it is important to keep on top of those threats. One of the most common issues that any online entity might experience is a DDoS attack. API Gateway has deep integrations with AWS WAF, the Web Application Firewall, which can greatly reduce the damage such an attack might cause. Additionally, WAF provides a whole host of other features that are extremely valuable on the security front line, including protection from common web exploits like SQL injection and cross-site scripting attacks.

With WAF, you have the ability to block ranges of IP addresses from ever accessing your API in the first place. Heck, you can even block specific countries or regions. To learn more about WAF and some of its security-related friends, please take a look at this lecture over here. API management and usage. When building a REST API, you have the option of setting usage plans. These are great if you are planning on offering your API as a product for your customers. These plans can be configured around a set of API keys, which in turn give your customers access to the associated API.

API keys are alphanumeric strings that you distribute to your customers, which they will need in order to access your API. API keys allow you to throttle requests that go above a certain threshold, or to set maximum quotas for your customers. With this information you can create billing statements for your customers, as well as make sure they stay within the bounds of your contracts. API keys are not only good for your external customers, but might even be handy for internal use as well.
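A hedged sketch of what configuring such a usage plan looks like with boto3 (the plan name, rate, and quota figures are illustrative, and the AWS call is guarded behind a DRY_RUN flag):

```python
# Sketch: a usage plan with a throttle and a monthly quota. All names
# and limits are placeholders.
usage_plan_params = {
    "name": "bronze-tier",  # hypothetical plan name
    "throttle": {"rateLimit": 100.0, "burstLimit": 200},  # requests/sec
    "quota": {"limit": 1_000_000, "period": "MONTH"},     # total requests
}

DRY_RUN = True  # flip to False only with AWS credentials configured

if not DRY_RUN:
    import boto3

    client = boto3.client("apigateway")
    plan = client.create_usage_plan(**usage_plan_params)
    print(plan["id"])
```

A customer's API key would then be attached to the plan (for example with `create_usage_plan_key`) so the throttle and quota apply to their requests.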

If you want to keep a tighter rein on your internal-facing API, you can hand out API keys to your developers to make sure they don't have the opportunity to blow up your backend systems. Caching your responses. As your API becomes more popular, you might be interested in caching some of your responses to help reduce the latency of your API for your users. Caching is a very cost-effective way to greatly enhance a product that has repeatable responses.

By caching your responses, you can reduce the total number of calls made to your backend endpoints. Not only does this improve response time, but it also reduces load on those backend services, which can be very beneficial. API Gateway has a built-in caching system that you can activate if you feel your API needs that extra touch of speed. When enabled, API Gateway will cache responses from your backend and keep them in memory for a specific time-to-live (TTL) period. The default TTL of your API cache is 300 seconds, but you can change this based on the time sensitivity of your data. You have the option of setting the TTL anywhere between zero and 3,600 seconds. And don't worry if your data gets stale quickly; even a one-second cache can greatly improve your application if you have high enough demand.
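Enabling the built-in cache happens at the stage level when you deploy. A sketch under assumptions (the API id, stage name, and cache size are placeholders, and the AWS call is guarded behind a DRY_RUN flag):

```python
# Sketch: deploying a REST API stage with the built-in cache enabled.
# cacheClusterSize is expressed in GB; "0.5" is the smallest option.
deployment_params = {
    "restApiId": "abc123",  # placeholder REST API id
    "stageName": "prod",    # placeholder stage name
    "cacheClusterEnabled": True,
    "cacheClusterSize": "0.5",
}

DRY_RUN = True  # flip to False only with AWS credentials configured

if not DRY_RUN:
    import boto3

    client = boto3.client("apigateway")
    client.create_deployment(**deployment_params)
```

The stage's method settings can then adjust the cache TTL away from the 300-second default.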

When creating your cache, you will first need to choose a cache capacity. The larger the cache, the better the performance; however, this also increases the cost. Unfortunately, like many aspects of this service, caching is limited to the REST API version of API Gateway. Metrics and monitoring. One of the most important ways you can determine the health of your API is to collect and monitor the metrics coming from API Gateway.

Overall, there are several key metrics that API Gateway tracks. First of all, we have the CacheHitCount: the number of requests served from the API cache in a given period. The next metric is the CacheMissCount: the number of requests served from the backend in a given period when API caching is enabled. We have the normal Count, which is the total number of API requests in a given period. There's IntegrationLatency, which is the time between when API Gateway relays a request to the backend and when it receives a response from the backend. And finally, we have the generic Latency, which is the time between when API Gateway receives a request from the client and when it returns a response to the client.
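These metrics live in CloudWatch under the AWS/ApiGateway namespace, so they can be pulled programmatically as well as graphed. A hedged sketch of fetching average Latency for the past hour (the ApiName dimension value is a placeholder, and the AWS call is guarded behind a DRY_RUN flag):

```python
from datetime import datetime, timedelta, timezone

# Sketch: query parameters for average API Gateway Latency, one
# datapoint per minute over the last hour.
now = datetime.now(timezone.utc)
metric_params = {
    "Namespace": "AWS/ApiGateway",
    "MetricName": "Latency",
    "Dimensions": [{"Name": "ApiName", "Value": "orders-api"}],  # placeholder
    "StartTime": now - timedelta(hours=1),
    "EndTime": now,
    "Period": 60,  # seconds per datapoint
    "Statistics": ["Average"],
}

DRY_RUN = True  # flip to False only with AWS credentials configured

if not DRY_RUN:
    import boto3

    cw = boto3.client("cloudwatch")
    for point in cw.get_metric_statistics(**metric_params)["Datapoints"]:
        print(point["Timestamp"], point["Average"])
```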

This Latency metric includes the IntegrationLatency plus other API Gateway overhead. Keeping an eye on things like IntegrationLatency, the total API Count, and the CacheHitCount can let you know when something has gone wrong with your API. API Gateway sends these metrics to CloudWatch every minute and can provide real-time graphs, dashboards, and alarms that will let you know at a glance when something is amiss. More guidance for choosing between HTTP and REST APIs.

Now, even though we've talked about the differences between the HTTP and REST APIs, I do want to go back and show you some more specific deciders that might be make-or-break for your architectures. Be prepared for a lot of tables coming up, but we'll go through them quickly. As we've discussed before, when looking at authorization of your API, you won't be able to use native OpenID Connect / OAuth 2.0 when using a REST API. However, I do think that Cognito can fill that gap a bit, if in a different way. When looking at integration potential, the REST API is not able to provide private integration with an Application Load Balancer or Cloud Map. You can get around those, but it's a little bit funky.

Here's AWS's recommendation: "For private Application Load Balancers, use API Gateway's VPC link to first connect to a private Network Load Balancer. Then, use the Network Load Balancer to forward API Gateway requests to the private Application Load Balancer." So while it doesn't work out of the box, you can force it to work after a fashion. If that isn't your style, this might be a deal breaker.

Now, we very briefly spoke about API management before, but this is where some people will really have to make a choice. If your API needs API keys and usage plans, then you're stuck using the REST API for now. Additionally, there is a caveat for using a custom domain name with the HTTP API: TLS 1.2 is the only supported TLS version. And here we can see that a number of features are API-specific. For example, if your API needs the ability to modify the body of incoming requests, you need to use a REST API.

Additionally, if you'd like API caching, that is REST API-locked as well. If you can live without modifying the request and the response from your backend, HTTP APIs can be a reasonable choice at quite a discount. However, if you need those services, you are once again stuck with the REST API. On the security side of things, the REST API is fully featured and has WAF access. HTTP APIs are a bit sad, however, and are lacking some of the stronger security features. I also want to show you the monitoring prospects.

Here again, the REST API has all the bells and whistles, while the HTTP API is left a little behind. However, I don't imagine these will be a huge deal breaker for most people. And last but not least, in fact it might be the most important: let's talk about the price difference between these two APIs. As I've discussed, you will need to give up quite a bit to use the HTTP API, so let's see what the savings will be. Here you can see the costs for the REST API: the first 300 million or so API calls cost $3.50 per million, and the price drops from there, although you'll need to be doing billions of calls to really get that price down. And if we compare this to the HTTP API, you can see that the price is already well below even the cheapest tier of the REST API. So while you do give up quite a lot, if you're able to handle that, you can get quite a bit of savings. Although it is worth noting that more volume with the HTTP API only earns a 10% discount, but hey, we'll take that.
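To put rough numbers on that comparison: using the $3.50-per-million figure above for the REST API's first tier, and an assumed $1.00-per-million first tier for the HTTP API (check current AWS pricing before relying on either figure), the arithmetic looks like this:

```python
# Rough cost comparison for 100 million requests a month. The REST API
# rate comes from the transcript; the HTTP API rate is an illustrative
# assumption -- verify against current AWS pricing.
REST_RATE_PER_MILLION = 3.50  # USD, first pricing tier
HTTP_RATE_PER_MILLION = 1.00  # USD, assumed first pricing tier

requests_millions = 100
rest_cost = requests_millions * REST_RATE_PER_MILLION
http_cost = requests_millions * HTTP_RATE_PER_MILLION

print(f"REST API: ${rest_cost:.2f}, HTTP API: ${http_cost:.2f}")
# At these rates the HTTP API comes out roughly 70% cheaper, in line
# with the figure quoted earlier in the course.
```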

About the Author
Will Meadows
Senior Content Developer
Students
7049
Courses
33

William Meadows is a passionately curious human currently living in the Bay Area in California. His career has included working with lasers, teaching teenagers how to code, and creating classes about cloud technology that are taught all over the world. His dedication to completing goals and helping others is what brings meaning to his life. In his free time, he enjoys reading Reddit, playing video games, and writing books.