Introduction to Amazon API Gateway

Amazon S3

Overview
Difficulty: Intermediate
Duration: 25m
Students: 1237
Rating: 4.1/5

Description

Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, and monitor their APIs at any scale without having to worry about versioning, authorization, throttling, and other administrative tasks. In this course, authored by Tehreem Siddiqui and narrated by Adam Hawkins, you will learn how to create and deploy a REST API through API Gateway to expose HTTP endpoints, AWS Lambda functions, and other AWS services.

Transcript

Hello and welcome back to the API Gateway course. This lesson demonstrates integrating the AWS API Gateway with other AWS services. Thus we'll begin to use the API Gateway as a central place to access our own application APIs and other AWS services.

Our goal in this lesson is to create an API Gateway that proxies S3 requests to create buckets and to list and delete objects. Let's begin in the AWS console. The first step is to authorize API Gateway to talk to S3. We'll create a new IAM role for that. Open up the IAM console and create a new role. Be sure to select the API Gateway role type. This sets up the appropriate trust policies inside AWS. Now attach the policy that pushes logs to CloudWatch. We don't need that right now, but we'll use it later on. Next, attach the S3 full access policy.

This is a generous policy that grants access to everything in S3. This works for now, but you'll want a more precise policy in a production environment. Note the IAM role's ARN. We'll need that later on.
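
To make this concrete, here is a minimal sketch of the same IAM setup using boto3. The role name is illustrative and the sketch assumes your AWS credentials are already configured; in the lesson we do this through the IAM console instead.

    # Create a role that API Gateway can assume, then attach the two managed policies.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "apigateway.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    role = iam.create_role(
        RoleName="apigateway-s3-proxy",  # hypothetical name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    # CloudWatch Logs access (used later) and full S3 access (tighten in production).
    iam.attach_role_policy(
        RoleName="apigateway-s3-proxy",
        PolicyArn="arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs",
    )
    iam.attach_role_policy(
        RoleName="apigateway-s3-proxy",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
    )

    role_arn = role["Role"]["Arn"]  # this is the ARN we'll need later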

Now head over to the API Gateway console. We'll start by creating a new API with two resources: {folder} and {item}. Folders map to buckets and items map to objects in that bucket. Note that the folder and item are wrapped in curly braces. This means they are parameters and not static strings. Let's create a new GET request for the root resource that lists all the buckets.
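
If you prefer to script this step, a sketch of the same API and resource layout with boto3 might look like the following (the API name and region are illustrative).

    import boto3

    apigw = boto3.client("apigateway", region_name="us-west-2")

    api = apigw.create_rest_api(name="s3-proxy")
    api_id = api["id"]

    # The root resource "/" exists by default; look up its id.
    root_id = next(
        r["id"] for r in apigw.get_resources(restApiId=api_id)["items"]
        if r["path"] == "/"
    )

    # /{folder} maps to a bucket, /{folder}/{item} to an object in that bucket.
    folder = apigw.create_resource(restApiId=api_id, parentId=root_id, pathPart="{folder}")
    item = apigw.create_resource(restApiId=api_id, parentId=folder["id"], pathPart="{item}")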

Choose the AWS service integration type, then fill in the region and select S3 for the AWS service. Also set the HTTP method to GET, and set the path override to a single slash (/). Now paste the ARN from earlier into the execution role field. Then save the request. It's created now, but it's not quite ready for prime time yet.
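
Roughly, the same GET method and S3 integration can be sketched with boto3 as below, reusing apigw, api_id, root_id, and role_arn from the earlier sketches; the service path "/" is what lists the buckets, and the region in the URI is an assumption.

    # Method on the root resource; we'll switch authorization to AWS_IAM later.
    apigw.put_method(
        restApiId=api_id,
        resourceId=root_id,
        httpMethod="GET",
        authorizationType="NONE",
    )

    # "AWS service" integration: GET against the S3 service with path override "/".
    apigw.put_integration(
        restApiId=api_id,
        resourceId=root_id,
        httpMethod="GET",
        type="AWS",
        integrationHttpMethod="GET",
        uri="arn:aws:apigateway:us-west-2:s3:path//",
        credentials=role_arn,  # the execution role created earlier
    )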

Let's investigate with a test. Fire a test request to the root resource. But here's the problem: S3's response is XML, but we want our API to use JSON. We can fix that by mapping the headers to the appropriate S3 headers, but we'll get to that shortly.

Let's map the correct status codes first. Our API needs to communicate the correct response code. AWS sets up the 200 OK code by default, but this means every response would come back as 200 OK. So let's map the 400 and 500 response codes as well. Start by setting up the 400 and 500 response codes, and then we'll be able to focus on setting the headers. Let's add the Content-Type and Content-Length headers to the 200 OK response.

These headers are set in the S3 HTTP response, so we want to forward them along in our API. We'll also add a Timestamp pattern for later use. Expand the 200 OK dropdown and fill in the header fields. Now on to the 400 and 500 responses for our API. We'll update the integration response to map anything from S3 in the 4xx or 5xx range to our 400 and 500 responses. This means our API will return a 400 if S3 returns anything in the 4xx range, with similar behavior for the 500. We can do this with a regex matching these HTTP codes.
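
Here is a sketch of that status code mapping with boto3, continuing the earlier sketches: declare 400 and 500 method responses, then attach integration responses whose selection patterns match S3's 4xx and 5xx backend codes.

    for status, pattern in (("400", r"4\d{2}"), ("500", r"5\d{2}")):
        # Declare the response on our API...
        apigw.put_method_response(
            restApiId=api_id, resourceId=root_id, httpMethod="GET", statusCode=status,
        )
        # ...and map matching backend status codes from S3 onto it.
        apigw.put_integration_response(
            restApiId=api_id, resourceId=root_id, httpMethod="GET",
            statusCode=status, selectionPattern=pattern,
        )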

Time to map the headers we created earlier. Expand the 200 OK response and then expand the header mappings. This is where we can map the backend headers to our API headers. You'll see there's a placeholder value filled in. We'll map the Timestamp header to the integration response header Date. Content-Type and Content-Length map to the same headers. Refer to the documentation for a complete list of all supported values. We're just about done with the GET request.
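
Continuing the sketch, the 200 response declares the three headers on the method response and maps them from the integration (S3) response; the Timestamp name is our own choice, mapped from S3's Date header.

    apigw.put_method_response(
        restApiId=api_id, resourceId=root_id, httpMethod="GET", statusCode="200",
        responseParameters={
            "method.response.header.Content-Type": False,
            "method.response.header.Content-Length": False,
            "method.response.header.Timestamp": False,
        },
    )
    apigw.put_integration_response(
        restApiId=api_id, resourceId=root_id, httpMethod="GET", statusCode="200",
        responseParameters={
            "method.response.header.Content-Type": "integration.response.header.Content-Type",
            "method.response.header.Content-Length": "integration.response.header.Content-Length",
            "method.response.header.Timestamp": "integration.response.header.Date",
        },
    )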

Let's protect our API by adding IAM authorization. It doesn't make sense to expose our S3 buckets and objects to just anyone, right? After that, we're ready to take it for a test drive. Now things are looking good. You can see that all of the S3 buckets are returned in the response body, though you may need to peek into that XML a bit. Also, the Content-Type header is correctly set to XML because we mapped it earlier.
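
In the console this is a single dropdown on the method request; scripted, it is one patch operation (sketch below, continuing the earlier example). Callers then have to sign their requests with SigV4 using credentials that are allowed to invoke the API.

    apigw.update_method(
        restApiId=api_id, resourceId=root_id, httpMethod="GET",
        patchOperations=[
            {"op": "replace", "path": "/authorizationType", "value": "AWS_IAM"},
        ],
    )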

So that's it for the GET request. Now on to a new PUT request. This PUT request will create a new bucket named by the folder parameter. Start by adding a new request, select the AWS integration type, and select S3. Set the path override to the bucket parameter, also note the curly braces here, and again paste in the ARN from earlier.

Now we need to update the integration request to generate a correct URL back to S3. Open up the integration request and expand the URL path parameters. Map the bucket path parameter to the folder parameter in the request path. Next, add the Content-Type header to the method request like we've done before. Our next step is to configure headers in the integration request. We'll map the Content-Type header like we've done before. We also need to set up some Amazon-specific headers in this case. The x-amz-acl header is a special header for AWS services. It specifies the access control level on S3 objects, or buckets in this case. The value is set to authenticated-read to automatically assign new buckets this policy. The Expect header is set to 100-continue, which ensures the request payload is only submitted once the request parameters are validated.
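
Putting those pieces together, a boto3 sketch of the PUT method on /{folder} might look like this, continuing the earlier sketches; static header values are wrapped in single quotes, which is how API Gateway distinguishes them from mapped values.

    apigw.put_method(
        restApiId=api_id, resourceId=folder["id"], httpMethod="PUT",
        authorizationType="AWS_IAM",
        requestParameters={
            "method.request.path.folder": True,
            "method.request.header.Content-Type": False,
        },
    )
    apigw.put_integration(
        restApiId=api_id, resourceId=folder["id"], httpMethod="PUT",
        type="AWS", integrationHttpMethod="PUT",
        uri="arn:aws:apigateway:us-west-2:s3:path/{bucket}",  # {bucket} path override
        credentials=role_arn,
        requestParameters={
            # URL path parameter: S3's {bucket} comes from our {folder} path parameter.
            "integration.request.path.bucket": "method.request.path.folder",
            "integration.request.header.Content-Type": "method.request.header.Content-Type",
            # Amazon-specific headers, passed as static values.
            "integration.request.header.x-amz-acl": "'authenticated-read'",
            "integration.request.header.Expect": "'100-continue'",
        },
    )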

All right, the PUT request is ready. Let's create a bucket with the test. Fill in a valid S3 bucket name for the folder parameter and set the Content-Type to application/xml. We'll need to write some XML in the request body to make this request; AWS requires a location constraint parameter. I'll paste in some XML to create this bucket in us-west-2. Fire off the test request and things should be 200 OK. Looks like they are. Now we can verify this by looking in the S3 console for our newly created bucket. All right, we're almost to the end. We can repeat all the same steps to configure the other GET and DELETE requests. We're gonna fast forward through this, so hold on tight. We'll be able to test the entire API afterwards.
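
The console's Test button also has an API equivalent, test_invoke_method, so the same bucket-creation test can be sketched like this (the bucket name is illustrative and must be globally unique).

    create_bucket_xml = (
        "<CreateBucketConfiguration>"
        "<LocationConstraint>us-west-2</LocationConstraint>"
        "</CreateBucketConfiguration>"
    )

    result = apigw.test_invoke_method(
        restApiId=api_id,
        resourceId=folder["id"],
        httpMethod="PUT",
        pathWithQueryString="/my-example-bucket-1234",  # fills the {folder} parameter
        headers={"Content-Type": "application/xml"},
        body=create_bucket_xml,
    )
    print(result["status"])  # expect 200 if the bucket was created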

Now, I've done a little work behind the scenes. I've added some objects to the buckets so we can test the GET request. Fire off a test request to the same bucket to list all of the objects. Look at that, we've got some objects. You can see here that "folder1" is highlighted, meaning we found it in the bucket. Time for the final test: deleting a bucket. First let's verify in the S3 console that the bucket is still there. Looks like it is.

Now, just a heads up: you can only delete buckets if they're empty. I've emptied this one already behind the scenes, so we can fire off a test request. We'll need to paste in some XML just like we did with the PUT request. So paste the XML into the request body and fire off the request. We should get a 200 back, and we do. Now we can verify by refreshing the S3 console and checking that the bucket is in fact not there. And as you can see, the bucket has been deleted. Let's take a breath now. We've made it to the end.

This wraps up the demo of using API Gateway in combination with other AWS services. We focused on the happy path cases up till now. You know that's when everything works as expected. Unfortunately reality is not like that. Things go wrong and break in unexpected ways. We'll discuss monitoring and troubleshooting in the next lesson. See you then.

About the Author

Tehreem is a Sr. Software Engineer with a passion for cloud technologies, big data analytics, software testing, and automation. She has over 10 years of work experience comprising her tenure at ServiceNow, Microsoft, and Harmonic Inc. Most recently she has been developing learning content in line with the emergence of public clouds and XaaS platforms, with a focus on AWS, Microsoft Azure, and GCP. Tehreem resides in the Bay Area, CA with her family, and when not working she enjoys nature and the outdoors, movies, and fine dining.