Store2018 Review - Overview of the .Net Monolithic Code
Store2018 - Refactor and Redesign
The course is part of this learning path
In this advanced course, we take a legacy monolithic .Net application and re-architect it to use a combination of cloud services to increase scalability, performance, and manageability.
This course will enable you to:
- Learn the principles and patterns associated with microservices
- Understand the principles and patterns associated with RESTful APIs
- Learn important requirements to consider when migrating a monolithic application into a microservices architecture
- Understand the benefits of using microservices and associated software patterns and tools to build microservice-based applications at speed and scale
- Understand tradeoffs between different architectural approaches
- Become comfortable with modern open source technologies such as Dotnet Core, Docker, Docker Compose, Linux, Terraform, Swagger, and React
- Become familiar with Docker and Container orchestration runtimes to host and run containers, such as Docker Compose, Amazon ECS using Fargate, and Amazon EKS
Prerequisites
- A basic understanding of software development
- A basic understanding of the software development life cycle
- A basic understanding of DevOps and CI/CD practices
- Familiarity with Dotnet and C#
- Familiarity with AWS
Intended Audience
- Software Developers and Architects
- DevOps Practitioners interested in CI/CD implementation
- Anyone interested in understanding and adopting microservices and RESTful APIs within their own organization
- Anyone interested in modernizing an existing application
- Anyone interested in Docker and containers in general
- Anyone interested in container orchestration runtimes such as Kubernetes
- [Instructor] Welcome back. In this lecture, we'll take our re-architected microservices application and containerize it using Docker. But for starters, let's do a quick recap of each of the components within our application. We have the Account Service web API, Inventory Service web API, and Shopping Service web API running on ports 5001 through to 5003. At the front-end, we have the Store2018 component, which implements the presentation layer, and is designed to make HTTP calls to each of the web APIs. So by containerizing this application, we'll end up with four Docker containers, three for the service components and one for the presentation layer component. By the end of this lecture, we'll have our microservices architecture containerized and running on our local workstation. So let's get stuck in and begin to build our Docker containers. Visual Studio comes with an excellent feature, which is native Docker support.
So let's use it on each of our projects. We'll right-click, select Add, and then Add Docker Support. What this does is generate a Dockerfile in each of our projects. So on Inventory, we'll now repeat the same process. And on the Shopping Service, we'll also add the same. And then finally, on Store2018, repeat Add Docker Support. Now, the other thing this does is generate a new project called Docker Compose. Docker Compose is a multi-container orchestration technology, which is very useful for us as we're building a microservices architecture based on multiple containers. So we'll use it on the local workstation to set up our environment. Now, let's take a quick look at the Docker Compose file.
So here, we can see under Services that we've got an entry for each of our projects. The first three are our service components: Account Service, Inventory Service, and Shopping Service. And then finally, we have our presentation layer component, Store2018. So that's our Docker Compose file. We'll leave it as is for now. The next thing we need to do is make some updates to our Dockerfiles. So let's close all of them and we'll work on the first one, which is the Account Service Dockerfile. Okay, the first thing we need to do is change the base image tag that we're using. The current one that's generated by Visual Studio is no longer supported.
So let's jump into our browser. If you search for this on the internet, you'll find this documentation page from Microsoft, which tells us the current image names that they're using. So we'll take the runtime one and update our Dockerfile with it. We'll leave the working directory as is. We'll add in an environment variable called ASPNETCORE_URLS and set it to http and the port that we want to listen on. What we're doing here is instructing the built-in Kestrel server to listen on port 5001 across all IPs. So we'll change the port that we expose to match. In the second stage of our build, we need to update the build image tag. So again, jumping back into our documentation, the build image tag that we need is this one here. So back into Visual Studio.
And we replace the outdated one. Okay, that's good. So next, we'll update our working directory to be /src/AccountService. Next, instead of copying the solution file, we'll instead copy in the project file. So here, this will be AccountService.csproj into the current directory. Since we've done that, we can remove the following line. And then we're good to go in terms of doing a restore. And we copy in the remaining parts of the project directory. We no longer need to set the working directory here. We'll leave it based on the one we're already in. And I'd like to add in the command to see which path I'm currently in as well as do a directory listing before I do the build, just to make sure that the build is happening in the right directory. Moving on. We begin the publish phase and both the build and the publish are doing a release. The final stage in our multi-stage Docker build is to take the compiled contents and deploy them into the app directory and then set an entry point. Okay, so at this stage, this should be a fully working Dockerfile that we can build, so we'll save it. Okay, the next thing we'll do is we'll take a full copy of this and update our other Dockerfiles.
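Putting the steps above together, the finished AccountService Dockerfile would look something like the following sketch. The image tags shown are the `microsoft/dotnet` 2.1 tags that were current at the time of recording; check Microsoft's documentation for the tags that are current for you.

```dockerfile
# Runtime stage base image (tag is an assumption from the era of recording)
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
WORKDIR /app
# Instruct the Kestrel server to listen on port 5001 across all IPs
ENV ASPNETCORE_URLS http://*:5001
EXPOSE 5001

# Build stage, using the SDK image
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /src/AccountService
# Copy just the project file first, then restore packages
COPY AccountService.csproj .
RUN dotnet restore
# Copy in the remaining parts of the project directory
COPY . .
# Print the current path and do a directory listing before building,
# to confirm the build is happening in the right directory
RUN pwd && ls
RUN dotnet build -c Release -o /app

# Publish stage
FROM build AS publish
RUN dotnet publish -c Release -o /app

# Final stage: take the compiled contents and set an entry point
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "AccountService.dll"]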
So overwriting. We run this one on port 5002. And the source will be InventoryService. Likewise, we copy the InventoryService proj file. And then when we start up, we wanna start up using the InventoryService.dll. Save that. And repeat again for the ShoppingService. So select all. Paste. The ShoppingService will start up and listen on port 5003. I'll set the working directory to be src/ShoppingService and copy in the ShoppingService.csproj file. And then for this one, we wanna start up on the ShoppingService.dll.
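Since all four Dockerfiles share the same structure, only a handful of lines change between them. For the Inventory Service, for example, the service-specific lines would be the following (ports and paths as read out above; everything else is identical to the AccountService Dockerfile):

```dockerfile
# InventoryService-specific lines — port 5002, its own source directory,
# project file, and startup DLL
ENV ASPNETCORE_URLS http://*:5002
EXPOSE 5002
WORKDIR /src/InventoryService
COPY InventoryService.csproj .
ENTRYPOINT ["dotnet", "InventoryService.dll"]
```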
Okay. And finally, we need to repeat this for the presentation component Store2018. So selecting the Dockerfile. Select all, overwrite. And this time, we'll listen on port 5000. And set the working directory to be src/Store2018. And we'll copy in the Store2018 proj file. And when we start up, we wanna start up using the Store2018.dll. Okay, we'll save. So at this stage, all four Dockerfiles should now be buildable. So the next thing we'll do is we'll jump into our Docker Compose file and we'll do some updates. The first thing we'll do is we'll update the image tag names. So at the moment, they default to the name of the project. So you can see here, we've got a project called AccountService and the default name is AccountService for the tags.
But we'll change this to be jeremycookdev/accountservice:latest. Now, the reason we prefix it with jeremycookdev is because I have a Docker Hub repo called jeremycookdev, which we'll push the built images into later on in the demonstration. So we'll copy this and repeat it on the InventoryService. Latest. On the ShoppingService. And on the Store2018. Next, I'm gonna paste in a network configuration section. This is the network that each of the containers will be deployed into, allowing them to communicate with each other. Having done that, for each of them, I need to specify the network that they're gonna run on. So in this case, the AccountService container will be given this IP address within this network, which is this network down here. Okay, so we need to copy that and repeat it. This one will run with IP address 102. This one will run with IP address 103. And this one will run with IP address 100. Okay. We then want to add in our ports configuration to expose the ports that the containers will be listening on.
So this is gonna expose 5001. And again, we repeat that for each of the containers. This one will also be on 5002. This one will be 5003. And this one will be 5000. So one last thing we need to do. Our Store2018 Docker container expects three environment variables, so we need to set these up. We do so by specifying the environment config. Here, we need to set the account_service_api_base environment variable, and we'll specify it to be http://172.19.10.101:5001/api. So this address comes from the address that was specified here. We need to repeat this for the other two services. So this will be the inventory_service, and that's listening at 102 on port 5002. And finally, the shopping_service, and that's listening at 103 on port 5003. So I think we've got a complete Docker Compose file. We'll save it and jump into our terminal. And we'll do a directory listing to make sure that we're in the directory that has our docker-compose.yml file, which we are. Okay, so the next thing we do is run docker compose build. This will build our container images. Okay, so it looks like our build has failed. So let's address this issue. We'll jump back into Visual Studio, into the Docker Compose file.
So what we need to fix is the build context. Instead of it being the current directory, we need to change it to be Services/AccountService for this project. And then having changed the context, we need to specify just the Dockerfile name rather than the path to it, so it becomes simply Dockerfile. So we'll copy this and do it for each of the projects. So this one becomes InventoryService. And this one becomes ShoppingService. And then finally, for the Store2018, we set this to be ./Store2018. Okay, we'll save that, jump back into our terminal, and do another docker compose build. Okay. So it looks like we've got the right settings and we're now in the build phase. So at the moment, the Inventory Service Docker image is being built. That will generally take about half a minute.
And that's completed, so now, we're doing the Store2018. And this is building and again, this should take half a minute as well. And this will repeat for all four images that we're building. Okay, looks like we've run into another error on the Shopping Service component. Now, we updated this, so I would suspect our problem is that we didn't save the file. So let's jump back into Visual Studio, and into the Shopping Service Dockerfile. And indeed, we can see here that we forgot to save it, so we'll just make sure it's saved. Jump back into the terminal and we'll do docker compose build again. And hopefully, this time, we'll complete the build for all four Docker images. So we'll speed it up and see what comes out at the other end. Okay, excellent. So our docker compose build has completed, meaning that all four Docker images have been successfully built. So what we can do now is run docker images. And indeed, we've got a jeremycookdev/accountservice, shoppingservice, and inventoryservice, and finally, we have our Store2018 presentation layer Docker image. Okay, we're now ready to fire up our environment. But before we do so, let's just take a look at the local running Docker containers.
And we can see that this is indeed as expected. Okay, the next thing we'll do is run docker compose up, which will orchestrate the launching of our environment. It will spin up each of the four individual containers on a shared network. Okay, so looking at the output, I've noticed one thing that's not quite correct. We notice here that we're actually listening on port 80, so it appears that our instruction to listen on port 5002 for the Inventory Service has been ignored. So let's jump back into the Dockerfile for Inventory Service and fix this. So under Dockerfile, we set up an environment variable called ASPNETCORE_URLS, which instructs Kestrel which port to listen on. The problem here is that I've missed a colon.
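The fix is a single character. Without the colon the URL is malformed, Kestrel ignores the setting, and ASP.NET Core falls back to its default of port 80. The broken line below is an assumption of what the typo looked like:

```dockerfile
# Broken — missing the colon after "http", so Kestrel falls back to port 80:
# ENV ASPNETCORE_URLS http//*:5002

# Fixed:
ENV ASPNETCORE_URLS http://*:5002
```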
Okay, so we'll save that. And I've probably made the same mistake across all of the files. Yup, so we'll save that. Save that. And save that. Okay, that looks better. We'll do a Ctrl-C to shut down. I'll then bring up a previously run command. This one basically looks for all Docker containers with a status of exited and then performs a remove on them (along the lines of `docker rm $(docker ps -aq --filter status=exited)`). So we'll run that, then run docker ps. Nothing is listed, which is good. So now, we need to do a docker compose build to rebuild. So again, we'll speed this up. Okay, excellent.
So all four Docker images have been rebuilt, this time with the correct listening instruction for the Kestrel web server. So let's now do a docker compose up. And this time, it's looking better. We can see now that we're listening on the correct ports for each of the four containers. So let's now jump into our browser and navigate to localhost:5000. And if all goes well, we should get our user interface, which we do. So everything has worked. So this is an awesome outcome.
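For reference, the complete docker-compose.yml assembled across this lecture would look roughly like the sketch below. The service names, environment variable names, subnet, and build paths are assumptions based on the values read out in the lecture:

```yaml
version: '3'
services:
  accountservice:
    image: jeremycookdev/accountservice:latest
    build:
      context: ./Services/AccountService
      dockerfile: Dockerfile
    ports:
      - "5001:5001"
    networks:
      store2018:
        ipv4_address: 172.19.10.101
  inventoryservice:
    image: jeremycookdev/inventoryservice:latest
    build:
      context: ./Services/InventoryService
      dockerfile: Dockerfile
    ports:
      - "5002:5002"
    networks:
      store2018:
        ipv4_address: 172.19.10.102
  shoppingservice:
    image: jeremycookdev/shoppingservice:latest
    build:
      context: ./Services/ShoppingService
      dockerfile: Dockerfile
    ports:
      - "5003:5003"
    networks:
      store2018:
        ipv4_address: 172.19.10.103
  store2018:
    image: jeremycookdev/store2018:latest
    build:
      context: ./Store2018
      dockerfile: Dockerfile
    ports:
      - "5000:5000"
    networks:
      store2018:
        ipv4_address: 172.19.10.100
    environment:
      # Base addresses the presentation layer uses to call each web API
      - account_service_api_base=http://172.19.10.101:5001/api
      - inventory_service_api_base=http://172.19.10.102:5002/api
      - shopping_service_api_base=http://172.19.10.103:5003/api
networks:
  store2018:
    ipam:
      config:
        - subnet: 172.19.10.0/24
```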
We have a fully containerized microservices environment up and running on our local workstation. Let's now do a quick recap of what we've just accomplished. We've relaunched our re-architected microservices application using Docker containers on our local workstation. So this is the user interface. And let's jump back and have a look at how the communication is happening. We have four Docker containers, one for each of our main components. The three service components each run as an individual Docker container, and each has been web API-enabled, presenting a RESTful API interface. The user interface, which also runs as a Docker container, programmatically makes HTTP calls to each of those RESTful API interfaces using an HTTP client. Okay, that concludes this lecture. Go ahead and close it and we'll see you shortly in the next one, where we move on and launch our environment on AWS using the Elastic Container Service and Fargate.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.