
AWS API Gateway Introduction

Overview
Difficulty: Advanced
Duration: 2h 11m
Students: 81

Description

Introduction
In this advanced course we take a legacy monolithic .NET application and re-architect it to use a combination of cloud services to increase scalability, performance, and manageability.
 
Learning Objectives 
This course will enable you to:
  • Understand the principles and patterns associated with microservices
  • Understand the principles and patterns associated with Restful APIs
  • Understand important requirements to consider when migrating a monolithic application into a microservices architecture
  • Understand the benefits of using microservices and associated software patterns and tools to build microservice-based applications at speed and scale
  • Understand tradeoffs between different architectural approaches
  • Become familiar and comfortable with modern open-source technologies such as .NET Core, Docker, Docker Compose, Linux, Terraform, Swagger, and React
  • Become familiar with Docker and Container orchestration runtimes to host and run containers, such as Docker Compose, Amazon ECS using Fargate, and Amazon EKS

Prerequisites

  • A basic understanding of software development
  • A basic understanding of the software development life cycle
  • A basic understanding of DevOps and CI/CD practices
  • Familiarity with .NET and C#
  • Familiarity with AWS
Intended audience
  • Software Developers and Architects
  • DevOps practitioners interested in CI/CD implementation
  • Anyone interested in understanding and adopting microservices and RESTful APIs within their own organisation
  • Anyone interested in modernising an existing application
  • Anyone interested in Docker, and Containers in general
  • Anyone interested in container orchestration runtimes such as Kubernetes

Transcript

Welcome back. In this lecture we're gonna take our latest architectural changes that we performed in the previous lecture and upload them back into our cloud hosted infrastructure. If you recall in the previous lecture, we converted our server-side rendering into client-side rendering for our presentation layer.

This was done by swapping out the ASP.NET Razor templating solution for a React.js-implemented front end. Now one of the features that we're leveraging in this redesign is the capability our browser has to make direct Ajax calls to our backend microservices. So when our presentation layer loads within our browser, the Shop2018 React component now makes Ajax calls to the APIs implemented as microservice components packaged within our Docker containers. Having made all the changes locally, we recompiled, rebuilt, and started up the full environment locally.

This allowed us to test, and everything worked as expected. So now we're at the stage where we would like to uplift our latest changes and run them on top of the ECS cluster that we built several lectures ago. So let's remind ourselves what this ECS cluster looks like. You can see that when we launched our ECS Fargate cluster, we launched it within private subnets within our VPC. Now this presents a challenge to us. Because we have client-side Ajax calls wanting to hit our microservice components, the design as it stands is unable to facilitate these requests, given that our microservice components are deployed as tasks running in the private subnets of our VPC. The point being, each of the three microservice components on the right-hand side of this diagram has been provisioned with a private IP address allocated from the subnet range into which it's been deployed. So to address this networking challenge, what are our possibilities? Firstly, we could stand up a custom reverse proxy solution, in which we bind an Elastic IP address to it, exposing it to the public internet, and then configure it with reverse proxy rules down to each of our three microservice components.

  Secondly, we could update our existing application load balancer, introducing three new target groups, one per microservice, and adding in specific custom rules to route the traffic. Or thirdly, we could use AWS's API Gateway managed service, and as the name suggests, this is a fit-for-purpose solution and our preferred option. So going forward, what does our updated solution look like? Well, with the introduction of API Gateway, our client-side Ajax calls will first be routed to the public endpoint exposed by API Gateway, which in turn will leverage a VPC private link endpoint deployed within our VPC. From here, calls will then be forwarded to a network load balancer, which is a layer-4 load balancing service.
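Since this course already uses Terraform, the private NLB and the VPC link that connects API Gateway to it might be declared roughly as follows. This is a minimal sketch: the resource names, the `shop2018-` prefixes, and the `private_subnet_ids` variable are illustrative assumptions, not the course's actual code.

```hcl
# Sketch only — names and variables below are assumptions, not the course's code.
variable "private_subnet_ids" {
  type = list(string) # IDs of the VPC's private subnets hosting the ECS tasks
}

# An internal (private) layer-4 network load balancer in front of the ECS tasks.
resource "aws_lb" "microservices" {
  name               = "shop2018-microservices-nlb"
  internal           = true
  load_balancer_type = "network"
  subnets            = var.private_subnet_ids
}

# The VPC link is what allows API Gateway to reach private resources,
# and it can only target a network load balancer.
resource "aws_api_gateway_vpc_link" "microservices" {
  name        = "shop2018-vpc-link"
  target_arns = [aws_lb.microservices.arn]
}
```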

 And then finally, the NLB will forward the calls to our tasks, where the traffic will be processed and the response will traverse the same network path in reverse, all the way back to the browser. So before we jump into AWS and make our configuration changes, let's summarize the new components. One, we're introducing API Gateway. Two, we're configuring a VPC private link. And three, we're standing up a network load balancer. Now currently the only way API Gateway can proxy calls to private resources is through the use of a VPC private link. And the VPC private link has a requirement to forward traffic via an NLB, hence the introduction of the NLB. Okay, that completes this lecture. Go ahead and close it and we'll see you shortly in the next one, where we perform the configuration changes mentioned here.
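To give a feel for the API Gateway side of this wiring, here is a hedged Terraform sketch of a greedy `{proxy+}` resource whose HTTP proxy integration is routed through a VPC link down to the NLB. The variable names and the `shop2018-api` name are hypothetical; the VPC link and NLB are assumed to already exist and their identifiers are passed in as inputs.

```hcl
# Sketch only — the inputs below are assumptions, created elsewhere.
variable "vpc_link_id" {
  type = string # ID of an existing API Gateway VPC link
}

variable "nlb_dns_name" {
  type = string # DNS name of the internal network load balancer
}

resource "aws_api_gateway_rest_api" "shop" {
  name = "shop2018-api"
}

# {proxy+} greedily captures all sub-paths, e.g. /api/products/1.
resource "aws_api_gateway_resource" "proxy" {
  rest_api_id = aws_api_gateway_rest_api.shop.id
  parent_id   = aws_api_gateway_rest_api.shop.root_resource_id
  path_part   = "{proxy+}"
}

resource "aws_api_gateway_method" "proxy" {
  rest_api_id   = aws_api_gateway_rest_api.shop.id
  resource_id   = aws_api_gateway_resource.proxy.id
  http_method   = "ANY"
  authorization = "NONE"
  request_parameters = {
    "method.request.path.proxy" = true
  }
}

# HTTP proxy integration forwarded over the VPC link to the private NLB.
resource "aws_api_gateway_integration" "proxy" {
  rest_api_id             = aws_api_gateway_rest_api.shop.id
  resource_id             = aws_api_gateway_resource.proxy.id
  http_method             = aws_api_gateway_method.proxy.http_method
  type                    = "HTTP_PROXY"
  integration_http_method = "ANY"
  uri                     = "http://${var.nlb_dns_name}/{proxy}"
  connection_type         = "VPC_LINK"
  connection_id           = var.vpc_link_id
  request_parameters = {
    "integration.request.path.proxy" = "method.request.path.proxy"
  }
}
```

The `connection_type = "VPC_LINK"` setting on the integration is the piece that reflects the constraint mentioned in the transcript: API Gateway can only reach private resources through a VPC link, and the VPC link in turn can only forward to a network load balancer.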

About the Author

Students: 7746
Labs: 21
Courses: 52
Learning paths: 11

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.