CloudAcademy — .Net Microservices - Refactor and Design - Course One

Nginx Reverse Proxies

The course is part of this learning path

Refactoring a Monolithic .Net Application to use Cloud Services
Overview
Difficulty: Advanced
Duration: 2h 13m
Students: 138

Description

Introduction
In this advanced course we take a legacy monolithic .NET application and re-architect it to use a combination of cloud services to increase scalability, performance, and manageability.
 
Learning Objectives
This course will enable you to:
  • Understand the principles and patterns associated with microservices
  • Understand the principles and patterns associated with RESTful APIs
  • Understand important requirements to consider when migrating a monolithic application into a microservices architecture
  • Understand the benefits of using microservices and associated software patterns and tools to build microservice based applications at speed and scale
  • Understand tradeoffs between different architectural approaches
  • Become familiar and comfortable with modern open source technologies such as .NET Core, Docker, Docker Compose, Linux, Terraform, Swagger, and React
  • Become familiar with Docker and Container orchestration runtimes to host and run containers, such as Docker Compose, Amazon ECS using Fargate, and Amazon EKS

Prerequisites

  • A basic understanding of software development
  • A basic understanding of the software development life cycle
  • A basic understanding of DevOps and CI/CD practices
  • Familiarity with .NET and C#
  • Familiarity with AWS
Intended audience
  • Software Developers and Architects
  • DevOps practitioners interested in CI/CD implementation
  • Anyone interested in understanding and adopting microservices and RESTful APIs within their own organisation
  • Anyone interested in modernising an existing application
  • Anyone interested in Docker, and Containers in general
  • Anyone interested in container orchestration runtimes such as Kubernetes

Transcript

- Welcome back. In this lecture we'll look to improve on our existing container design. If you've been following along, you'll have realized that our existing Docker containers are currently using the inbuilt .NET Core framework Kestrel web server for serving up web assets. We're going to improve on our design by introducing nginx as a reverse proxy. By implementing nginx we'll improve the performance capabilities of our containers: nginx is a production-grade, open source, high-performance HTTP server. Using nginx as a reverse proxy in each of our containers will allow us to scale our microservices application for increased load and performance, as well as availability. Let's now take a closer look at how we'll improve our internal container design. As shown here, our current container design is using the .NET Core framework and Kestrel web server for serving assets. The inbuilt Kestrel web server is a great solution when you're in development mode, but it's not really production grade, so we'll improve on this design.

When we update our containers by implementing nginx as a reverse proxy, inbound HTTP requests will first arrive at our nginx reverse proxy, and nginx will then pass these calls downstream to the Kestrel web server. We will configure the nginx reverse proxy to listen on port 80, but as a matter of demonstration we'll also show how we can set up nginx to listen on port 443 with a self-signed certificate, allowing us to do HTTPS. We will update the Kestrel web server in each of our containers to serve on port 8000, and finally, in our container redesign, we will use supervisord as a process manager to manage spinning up the nginx reverse proxy process and the Kestrel web server process. So finally, with the container redesigned, the full picture will look like this: each container will have an nginx reverse proxy serving on port 80 or 443. HTTP calls will arrive at the nginx proxy and then be passed downstream to the internal Kestrel web server, running and serving on port 8000.

Before we jump into Visual Studio, let's do a current directory listing of our root folder. We'll add a new folder and call it buildscripts — this will contain common development material for each of the three services and our presentation layer service. Now jump back into Visual Studio and we'll add an equivalent logical folder called buildscripts, and this will contain a catalog of each of the files within our buildscripts folder on the file system. Next we'll create a new file, which will represent our new Dockerfile containing the instructions for our nginx installation; from this we'll build, and it will become the base image from which our remaining containers will inherit. Clicking New, we'll save this, ensure it goes into buildscripts, and rename it to be just Dockerfile. Then, finally, under buildscripts, we'll edit and add the file Dockerfile. Okay, next we'll begin our refactoring. We will select one of the service Dockerfiles and, rather than each of these building directly from the ASP.NET Core runtime image, we'll extract that and put it into our buildscripts Dockerfile.

Next we paste in the commands that will do two things: firstly, install the nginx web server, and secondly, install supervisord, our process manager. Supervisord will do two things: it will start up our nginx web server, and it will start up the Kestrel internal web server which hosts our application code. Okay, we'll now save this file and then jump over into our terminal to do a Docker build. Before we do the build, let's do a directory listing and cat out the Dockerfile — here we have the instructions for our base build, which will install nginx and supervisord. So we call docker build, tag it as jeremycookdev/microservices-base:latest, give it the parent directory as the build context, and execute.
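The base Dockerfile the lecture describes can be sketched roughly as follows. The base image tag, package names, and paths here are illustrative assumptions, not the course's actual file.

```dockerfile
# Hypothetical buildscripts/Dockerfile for the shared base image.
# The ASP.NET Core runtime tag is an assumption.
FROM microsoft/aspnetcore:2.0

# Install nginx (the reverse proxy) and supervisor (the process manager)
RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx supervisor \
 && rm -rf /var/lib/apt/lists/*
```

Each service Dockerfile can then inherit from this image instead of repeating the runtime and package installation steps.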

So this kicks off the build for our new base image, and if all goes well, we'll have nginx installed as a reverse proxy server and supervisord to manage our processes. This will take a while, so we'll speed it up and see what we get at the end. Okay, excellent, that's completed, and now if we do a docker images we should see a new image built locally — and indeed we do, so that's a great result. We've now got our new microservices-base Docker image. We'll now jump back over into Visual Studio and we can update our individual Dockerfiles to inherit from our new base image. The next thing we'll do: for nginx to work, we need to instruct it how to run by adding an nginx.conf file. We'll need to add that into the project directory for each of our services, so I will create a new file, call it nginx.conf, and paste in some configuration. What we're doing here is instructing nginx to listen on port 80, and it will forward, or reverse proxy, calls down to Kestrel, also running on localhost but this time on port 8000 — which means we also need to update our ASPNETCORE_URLS environment variable to listen on port 8000.
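The nginx.conf pasted into each service would look something like the sketch below. The ports (80 in, 8000 downstream) come from the lecture; the header directives and layout are illustrative assumptions.

```nginx
# Hypothetical per-service nginx.conf: reverse proxy on port 80,
# forwarding to Kestrel on localhost:8000.
server {
    listen 80;

    location / {
        proxy_pass         http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The matching Kestrel side would set the environment variable `ASPNETCORE_URLS=http://*:8000` so the two processes agree on the downstream port.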

Next, we need to update the way the container starts. Whereas previously the Kestrel web server started up as the main process, we'll change this now to use supervisord. The entry point for our container will be /usr/bin/supervisord, the supervisord executable, and that will read from a conf file, supervisord.conf, which we store in this location. Now we need to copy this into our build, so we need to add it to our project. Editing again, I add a new file called supervisord.conf, and then we paste in the supervisord instructions. What we're saying here is that we're going to start up two processes: one is the .NET process, pointing it at our application's DLL, and the other is the nginx reverse proxy process. I'll save that, go back to our Dockerfile, and save this too. So at this stage we have completely refactored our container design and we need to do a rebuild. Let's now jump into our terminal. Before we do a build, there are a couple of things we need to do. If we navigate into the services account service directory and do a directory listing, you can see here we've got our nginx.conf file and our supervisord.conf file.
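A supervisord.conf along the lines described would define the two programs. The DLL path, program names, and file location below are placeholders, not taken from the course files.

```ini
; Hypothetical supervisord.conf managing both container processes.
[supervisord]
nodaemon=true                       ; keep supervisord in the foreground as PID 1

[program:kestrel]
command=dotnet /app/Service.dll     ; the .NET Kestrel process (placeholder DLL path)
autorestart=true

[program:nginx]
command=nginx -g "daemon off;"      ; run nginx in the foreground so supervisord can manage it
autorestart=true
```

The Dockerfile entry point would then invoke supervisord with this file, e.g. `ENTRYPOINT ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]` (the conf path is an assumption).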

Now, one thing I've noticed using Visual Studio Community Edition on Mac is that these files sometimes get saved in a format that's incompatible with the Linux operating system, so what we want to do is run dos2unix to convert both of these files — once for nginx.conf, and again for supervisord.conf. Excellent. Okay, now that we're in the same directory as our Dockerfile, we can do a test build. I'll do that by running docker build, giving it a test tag, and building in the current directory. Okay, that's building... and that's completed — it was pretty quick. Next, let's have a look at the images, and we can see that our test image has been created. Before I roll out all the changes, I like to run up the container to see how it starts. I can do so with docker run: --rm to remove the Docker container after it completes, -it to attach a terminal, and we start it up based on the test:latest tag. Here we can see the output of this container starting up.
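The line-ending fix can be sketched as below. If dos2unix is not installed, stripping carriage returns with tr is equivalent; the file content here is a stand-in to demonstrate the conversion.

```shell
# Simulate a config file saved with Windows (CRLF) line endings
printf 'listen 80;\r\n' > nginx.conf

# Equivalent of `dos2unix nginx.conf`: drop every carriage return
tr -d '\r' < nginx.conf > nginx.conf.unix && mv nginx.conf.unix nginx.conf
```

The same conversion would then be repeated for supervisord.conf before the docker build.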

Here we can see that supervisord has indeed started up, and that from this it's forked off and started both our .NET Kestrel web server and our nginx web server, so it looks really good. Let's now start the test image as a proper container. We'll do a docker run with -p 80:80, mapping host port 80 to container port 80, which is what the nginx web server is listening on; we'll give it the name test and use the test:latest image. Okay, that has started, and the final thing we can do to test that this is working is a curl -i to http://localhost, to our /api/consumers/5 endpoint. And if all works well — which it has — we can see that we are indeed hitting our nginx reverse proxy running within our container. We got an HTTP 200, and the request has been forwarded downstream to the Kestrel web server.

And we've got the expected response coming all the way back to our curl client. So that is a great result, and shows that our nginx reverse proxy configuration is working perfectly. Okay, the last thing we'll demonstrate in this lecture is how we set up SSL on the nginx reverse proxy. The first thing we need is a self-signed certificate, so using the openssl command on my local machine here, I'm creating two files: a self-signed certificate and a private key. We're setting it to be valid for 365 days, and the common name is localhost. So run that command, and if we do a directory listing, we should have two new files — we've got our self-signed certificate and our private key. Okay, back within Visual Studio, we need to update the nginx.conf file to listen on port 443. We'll do so by adding in "listen 443 ssl". We'll leave port 80 on as well, but we'll introduce SSL on port 443, and then we need to add the SSL config, which will point to the self-signed certificate and the private key. Okay, we'll save that and then jump back into the Dockerfile. Here we need to copy the self-signed certificate and private key into the Docker image, and rebuild.
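An openssl invocation matching the description — a self-signed certificate plus private key, valid for 365 days, common name localhost — could look like this. The key size and output filenames are illustrative assumptions.

```shell
# Generate a self-signed certificate and private key for CN=localhost,
# valid for 365 days, without an interactive prompt.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout selfsigned.key -out selfsigned.crt
```

In nginx.conf, the matching additions would be along the lines of `listen 443 ssl;`, `ssl_certificate /etc/nginx/selfsigned.crt;`, and `ssl_certificate_key /etc/nginx/selfsigned.key;` (paths assumed).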

So we'll save that, and back within our terminal we'll build again — this will rebuild our image, this time with nginx with SSL enabled. Okay, that's completed. At this stage we can now run a Docker container on port 80, and we can also run another one, using the same image, on port 443. If we have a look at our running containers, we have two Docker containers up and running: one listens on port 443 and the other listens on port 80. Okay, let's try it out. We'll do curl -i, trying port 80 first: http://localhost/api/consumers/5. As expected, we get a result back coming from the nginx reverse proxy. Now if we repeat the same command, but this time introduce -k to ignore the SSL warnings and use HTTPS, we get the same result.

But this time we've done it over HTTPS, which proves that our SSL configuration has been successfully added. Okay, having completed the refactoring for our first container and proven that it works, we can roll this change out to all of the containers, so by the end of this we'll have every single container running with an nginx reverse proxy. In the background I've gone ahead and refactored the remaining Dockerfiles — again, here we are inheriting from our new base image, likewise here, and then the final one — so from a Dockerfile point of view everything has been refactored. The one last thing we should do is update our Docker Compose file. We'll update this so that on the outside we listen on port 8081 and on the inside port 80; likewise 8082 on the outside, port 80 on the inside; 8083 on the outside, port 80 on the inside; and then finally port 8000 on the outside, port 80 on the inside.

Then we need to update our environment variables, because these services now listen on port 80 on the inside. Okay, with that in place and the Docker Compose file now saved, we jump back into the terminal and check our running Docker processes — it's empty. Now we'll do a docker-compose build to ensure that our Docker containers have been rebuilt. Okay, that's completed, and then finally we can do docker-compose up, and this will bring up our complete solution with an nginx reverse proxy running within each container. Okay, so it looks like it's up — here you can see that nginx has indeed been started in each container — and now if we browse to localhost:8000 we should be able to see our store user interface, and we have. So we've completed our refactoring and everything is working as expected. That now completes this lecture. Go ahead and close it, and we'll see you shortly in the next one.
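The Docker Compose port mappings read out above can be sketched as the fragment below. The service names are placeholders; only the host-to-container port pairs come from the lecture.

```yaml
# Hypothetical docker-compose.yml excerpt: every container's nginx listens
# on port 80 internally; the host ports differ per service.
services:
  accountservice:
    ports:
      - "8081:80"   # host 8081 -> container nginx on 80
  inventoryservice:
    ports:
      - "8082:80"
  shoppingservice:
    ports:
      - "8083:80"
  presentation:
    ports:
      - "8000:80"   # the store UI browsed at localhost:8000
```

Each service's environment would correspondingly point at the other services' new external ports, since internally everything now fronts on port 80.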

About the Author

Students: 6531
Labs: 13
Courses: 44
Learning paths: 9

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.