Container Orchestration
1h 18m

In this course, we will learn the concepts of microservices and the Spring framework, with a focus on the cloud.

Learning Objectives

  • Learn about the cloud

Intended Audience

  • Beginner Java developers
  • Java developers interested in learning how to build and deploy RESTful web services
  • Java developers who want to develop web applications using the Spring framework
  • Java developers who want to develop web applications with microservices
  • Java developers who wish to develop Spring Boot microservices with Spring Cloud

Prerequisites

  • Basic Java knowledge

Hello, dear friends. In this lesson, we will learn about container orchestration and perform some fundamental operations. First, we need to package our applications. We have two microservice applications: system and inventory. We run the command mvn clean package. Okay, the WAR packages have been created. Now we will pull Open Liberty's runtime image for Java 11; our services will run on that Open Liberty runtime. I'm creating a Docker image for the system service, and now I'm creating the image for the inventory service. We can see the images by running the docker images command. All right, both are created. I will define tags for the newly created images before deploying them to the IBM Cloud. First, I define a tag for inventory, and now a tag for the system service. I'll push them to the cloud so I can use them easily. First, I push the inventory image, using its tag value; then I push the system image. Okay, both are pushed.
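The build-tag-push flow above can be sketched as a shell session. The image names, registry namespace, and tag values below are illustrative placeholders, not the exact ones used in the lesson:

```shell
# Package both microservices as WAR files
mvn clean package

# A hypothetical Dockerfile for each service, based on an Open Liberty
# Java 11 runtime image, might look like:
#   FROM openliberty/open-liberty:full-java11-openj9-ubi
#   COPY target/system.war /config/apps/
#   COPY src/main/liberty/config/server.xml /config/

# Build an image for each service
docker build -t system:1.0-SNAPSHOT system/
docker build -t inventory:1.0-SNAPSHOT inventory/

# Verify both images exist
docker images

# Tag the images for the IBM Cloud Container Registry
# (us.icr.io/my-namespace is a placeholder for your own registry namespace)
docker tag inventory:1.0-SNAPSHOT us.icr.io/my-namespace/inventory:1.0-SNAPSHOT
docker tag system:1.0-SNAPSHOT us.icr.io/my-namespace/system:1.0-SNAPSHOT

# Push both images using their tags
docker push us.icr.io/my-namespace/inventory:1.0-SNAPSHOT
docker push us.icr.io/my-namespace/system:1.0-SNAPSHOT
```

These commands need a local Docker daemon and a registry login (for example, ibmcloud cr login) before the push will succeed.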

Everything is ready to deploy our microservices on Kubernetes. I'm creating a Kubernetes file using the touch command, and I open it with the nano editor. This file includes deployment and service definitions. As you can see, there are system and inventory deployment definitions at the beginning of the file: here the kind is Deployment, and here the kind is Service. The deployment part is the higher-level definition that describes the desired pods, while the service part exposes them. The NodePort definition lets you reach the pods on these ports when they run; if you don't define a port, Kubernetes will assign NodePorts automatically. Save the file. I just wanted to show you the usage of NodePort definitions, so I will remove them using the sed command. I also replace the container names with the ones we pushed earlier. Now we can deploy our services by running the kubectl apply command, pointing to the file we've just created. Let's list the pods: run kubectl get pods.
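As a sketch of what the kubernetes.yaml might contain, here is one service's Deployment and Service pair written into the file with a heredoc. The names, labels, ports, and image reference are assumptions for illustration, not the lesson's exact values; the inventory service would get an analogous pair:

```shell
cat > kubernetes.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-deployment
  labels:
    app: system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: system
  template:
    metadata:
      labels:
        app: system
    spec:
      containers:
        - name: system-container
          # Placeholder image reference; use the tag you pushed earlier
          image: us.icr.io/my-namespace/system:1.0-SNAPSHOT
          ports:
            - containerPort: 9080
---
apiVersion: v1
kind: Service
metadata:
  name: system-service
spec:
  type: NodePort
  selector:
    app: system
  ports:
    - port: 9080
      targetPort: 9080
      nodePort: 31000   # optional; if omitted, Kubernetes assigns a NodePort
EOF

# Deploy the definitions and list the resulting pods
kubectl apply -f kubernetes.yaml
kubectl get pods
```

The Service's selector matches the pod labels set in the Deployment's template, which is how traffic arriving at the NodePort finds the right pods.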

We can get a detailed description of the pods with the kubectl describe pods command. As you see here, it shows detailed information about each pod. We can also connect to our pods. To do this, we run the kubectl proxy command. With the proxy running, let's open a new terminal to connect to the pods. We define an environment variable for the system service pod, putting the proxy port, 8001, at the beginning and then adding the pod name; then we define a variable for inventory. Check them with the echo command. Let's curl our system pod. As you see, here is its runtime and platform information. And now, let's list the information for the inventory pod; this shows the system information retrieved through the inventory service. There's also an approach for deploying pods called a rolling update. It allows us to update one pod at a time in a multi-pod, scaled environment, so the deployment is smooth, with no downtime.
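The proxy-and-curl steps above can be sketched as follows. The label selectors and the /system/properties and /inventory/systems paths are assumptions about how the demo services are set up:

```shell
# Expose the Kubernetes API on localhost:8001 (leave this running)
kubectl proxy

# In a second terminal, capture the pod names in environment variables
SYSTEM_POD=$(kubectl get pods -l app=system -o jsonpath='{.items[0].metadata.name}')
INVENTORY_POD=$(kubectl get pods -l app=inventory -o jsonpath='{.items[0].metadata.name}')
echo "$SYSTEM_POD" "$INVENTORY_POD"

# Reach a pod through the proxy:
#   http://localhost:8001/api/v1/namespaces/<namespace>/pods/<pod>/proxy/<path>
curl "http://localhost:8001/api/v1/namespaces/default/pods/$SYSTEM_POD/proxy/system/properties"
curl "http://localhost:8001/api/v1/namespaces/default/pods/$INVENTORY_POD/proxy/inventory/systems"
```

The proxy route avoids needing the NodePorts at all: requests go to the API server on port 8001, which forwards them to the pod.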

To do this, we change our kubernetes.yaml file: open the Kubernetes file's location, delete the contents, and paste the updated text. As you can see, we have added a strategy section; in this section, we define the rolling update specification. Save the changes, then remove the NodePort definitions and replace the container names with the ones we pushed, using the sed command. We deploy the new file with the kubectl apply command. Check the pods again: all are running. For now, we have one pod for each service. We can create new pods and scale the services up easily. For example, let's create three pods for the system service. To do this, we use the kubectl scale command and define three replicas, which means three pods will be running. Okay, let's check. Yes, as you can see, there are three pods for the system service, and we can scale them down just as easily. Okay, dear friends, that's all I want to say about Kubernetes for now. I hope to see you in the next lesson.
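A sketch of the rolling-update strategy section and the scaling commands; the deployment name and the exact maxUnavailable/maxSurge numbers are illustrative assumptions:

```shell
# Hypothetical excerpt from the updated kubernetes.yaml: the strategy block
# sits under the Deployment's spec and controls how pods are replaced.
#
#   spec:
#     replicas: 1
#     strategy:
#       type: RollingUpdate
#       rollingUpdate:
#         maxUnavailable: 1   # at most one pod down during the update
#         maxSurge: 1         # at most one extra pod created during the update

# Re-apply the file and watch the pods roll over one at a time
kubectl apply -f kubernetes.yaml
kubectl get pods

# Scale the system deployment up to three pods, then back down
kubectl scale deployment/system-deployment --replicas=3
kubectl get pods
kubectl scale deployment/system-deployment --replicas=1
```

With three replicas and maxUnavailable set to 1, a new image rollout replaces pods one by one, so at least two pods keep serving traffic at every moment.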


About the Author

OAK Academy is made up of tech experts who have been in the sector for years and years and are deeply rooted in the tech world. They specialize in critical areas like cybersecurity, coding, IT, game development, app monetization, and mobile development.