Using AWS X-Ray to Monitor a Node.js App Deployed With Docker Containers

AWS X-Ray makes it possible for you to monitor, trace and visualize activity across multiple application touchpoints. 

In this course we will:

  • Introduce the AWS X-Ray service and the functionality that it provides.
  • Explain the functions of the AWS X-Ray service and how to use AWS X-Ray with other AWS services.
  • Demonstrate how to use the AWS X-Ray Console, highlighting key areas such as the Service Map and Tracing windows.
  • Demonstrate how to implement a Docker-based Node.js application using the AWS X-Ray SDK.

This is an intermediate-level course aimed at AWS professionals looking to learn how to use this important new AWS service in real-world deployments. 

The demo/build files for this course are available here.


- [Instructor] We will now demonstrate a custom Node.js application built with a microservices architecture and deployed as Docker containers. This application has been instrumented with the AWS X-Ray SDK for Node.js. In this demo we will build a microservices-based calculator. The demonstration will highlight the following course training points: how, as a developer, you would instrument an application with the AWS X-Ray SDK; and how, as a SysOps administrator, you would use the AWS X-Ray console to navigate and filter over the collected application telemetry.

As you can see in the slide, the calculator application is designed using a microservices architecture and is composed of several Docker containers. Each individual Docker container hosts a discrete function, and the containers work in concert to perform calculations. The code base within each Docker container has been instrumented with the X-Ray SDK, and at runtime it sends telemetry to the X-Ray daemon, which in turn forwards it to the AWS X-Ray service. If you require an introduction to Docker technology, then please consider taking the Docker-related courses here on Cloud Academy.

The function of each Docker container, as shown in the previous slide, should be self-evident, with the exception of the POSTFIX container. Regardless, we will provide a quick summary of each container:

  • CALC orchestrates the full calculation.
  • POSTFIX converts an expression from infix to postfix form.
  • ADD performs addition.
  • SUBTRACT performs subtraction.
  • MULTIPLY performs multiplication.
  • DIVIDE performs division.
  • POWER raises the first number to the power of the second.

The function of the POSTFIX container is now explained in extra detail.
Essentially, the POSTFIX container converts a mathematical expression from infix form to postfix form. The reason is to implement the logic required to evaluate operator precedence rules, those enshrined in the acronym BEDMAS, representing the order: brackets, followed by exponents, division, multiplication, addition and subtraction. The POSTFIX Docker container implements the well-known shunting-yard algorithm for the sole purpose of performing this conversion. For those interested in understanding the internals of this algorithm, please visit the link on this slide. In this example, a more complex mathematical expression is converted from infix to postfix. Again, this highlights the functionality of the POSTFIX Docker container.

Let's now download the demo application, which has been instrumented with X-Ray. We will begin by cloning the source code locally from GitHub. Navigate to the GitHub URL as seen here. Take a look around: the project is documented in greater detail than we will cover in the remainder of this video course. For now, let's copy the repository URL; we will clone from this. We will now clone the GitHub project. Firstly, let's create a new directory to host our project. Inside this directory, we perform a git clone from the URL that we have just copied. We now have the source code locally. Let's list the contents. Next, move into the new directory and list the contents again. The first thing to note is that we need to rename the .env.sample file to .env, so let's do that. Our directory structure should now look like this, noting the .env file.

Before we continue with the installation, let's pause and take a quick look at the code that we have just downloaded, in particular the X-Ray instrumentation embedded in it. We will now open up the project code base using Visual Studio Code. This will allow us to see how the X-Ray instrumentation has been added into the project code base.
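Returning to the POSTFIX container for a moment: the shunting-yard conversion described above can be sketched in Node.js as follows. This is a simplified illustration, assuming space-separated tokens and numeric operands only; it is not the container's actual code.

```javascript
// Minimal shunting-yard sketch: converts a space-separated infix
// expression into postfix (reverse Polish) form, honouring
// BEDMAS-style operator precedence and parentheses.
function infixToPostfix(expression) {
  const precedence = { '^': 3, '*': 2, '/': 2, '+': 1, '-': 1 };
  const rightAssoc = { '^': true }; // exponentiation binds right-to-left
  const output = [];
  const operators = [];

  for (const token of expression.split(/\s+/)) {
    if (/^\d+(\.\d+)?$/.test(token)) {
      output.push(token); // operands go straight to the output
    } else if (token === '(') {
      operators.push(token);
    } else if (token === ')') {
      while (operators.length && operators[operators.length - 1] !== '(') {
        output.push(operators.pop());
      }
      operators.pop(); // discard the matching '('
    } else {
      // Pop operators of higher precedence (or equal, for left-associative
      // operators) before pushing the current one.
      while (
        operators.length &&
        operators[operators.length - 1] !== '(' &&
        (precedence[operators[operators.length - 1]] > precedence[token] ||
          (precedence[operators[operators.length - 1]] === precedence[token] &&
            !rightAssoc[token]))
      ) {
        output.push(operators.pop());
      }
      operators.push(token);
    }
  }
  while (operators.length) output.push(operators.pop());
  return output.join(' ');
}

console.log(infixToPostfix('3 + 4 * 2'));     // 3 4 2 * +
console.log(infixToPostfix('( 1 + 2 ) * 3')); // 1 2 + 3 *
```

Note how multiplication is emitted before addition in the first example: that reordering is exactly how postfix form encodes precedence without needing brackets.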
On the left-hand side is the project structure. We have a number of project folders; at runtime, each of these will be its own Docker container. Let's examine the server.js file. Firstly, we import the X-Ray SDK library. We then set up the sampling rules by calling setSamplingRules, specifying the rules configuration file. We then instruct X-Ray to capture all downstream HTTP and HTTPS requests. Finally, we open the segment; the segment here is named calculator. At the end of this file, we close off the current segment.

Each individual service within our microservices architecture has been designed to return a small proportion of warning and error response codes. This has been done for demonstration purposes only, to highlight how the X-Ray computed service map renders this type of important information back to the user.

Let's continue with the installation. We will need to complete the following four steps:

  1. Create a new IAM credential. The new credential will give us access to the X-Ray service and to the SQS service.
  2. Set the required permissions by attaching the following two IAM policies: AWSXrayWriteOnlyAccess and AmazonSQSFullAccess.
  3. Create a new AWS SQS queue. We will need to record the SQS queue URL, which will be stored in the .env file.
  4. Update the .env file with the new settings we have just created.

Let's select IAM. Next, click on Users and we will add a new user. We will give it the name calc-demo and select programmatic access. We will now attach two IAM policies: the first will be AWSXrayWriteOnlyAccess, and the second will be AmazonSQSFullAccess. Click Next. Finally, click the Create user button. We will need to copy the access key ID and secret access key; these will be added to the .env file. Now let's create our SQS queue. Select Simple Queue Service and click the Get Started Now button.
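Before moving on, the server.js instrumentation steps just described typically look something like the sketch below. This is a hedged illustration of the standard aws-xray-sdk Express pattern, not the repository's actual server.js; the rule file name, route and port are assumptions.

```javascript
// Hedged sketch of Express-style X-Ray instrumentation (requires the
// 'aws-xray-sdk' and 'express' npm packages to be installed).
const AWSXRay = require('aws-xray-sdk');
const express = require('express');

// Load custom sampling rules from a local configuration file (name assumed).
AWSXRay.middleware.setSamplingRules('sampling-rules.json');

// Capture all downstream HTTP/HTTPS calls made by this service.
AWSXRay.captureHTTPsGlobal(require('http'));

const app = express();

// Open the X-Ray segment; 'calculator' is the name that appears
// for this service in the computed service map.
app.use(AWSXRay.express.openSegment('calculator'));

app.post('/calc', (req, res) => {
  // ... orchestrate the calculation across the other containers ...
  res.json({ result: 0 }); // placeholder response for the sketch
});

// Close the segment after all routes have run.
app.use(AWSXRay.express.closeSegment());

app.listen(8080);
```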
First, take note of the region you are creating this queue in, and give it a queue name. We will call it calclog-syd. To activate the Create Queue button, you need to tab off the queue name input field. Click the Create Queue button. Finally, take note of the URL of the SQS queue that we have just created; we will add this to our .env file. We now need to update the .env file. Let's go back to the project code base. Open and edit the .env file. We will update the access key ID, the secret access key and the SQS URL. Save the .env file.

Now it's time to build and deploy our application. We will use the command docker-compose build to build the Docker containers, followed by docker-compose up to stand up the environment. From within the project root directory, run docker-compose build. This will build our Docker container images. This may take up to 10 minutes if you haven't already downloaded the base Docker images. Now have a look at the container images that have just been built. We do so by calling docker images. You can see each of the images that have just been created. Let's now stand up the full solution. We do so by calling docker-compose up. This will stand up the environment, creating each required container from the images that were previously built. At this stage, the full solution is ready to be used.

We will now perform our first test of our microservices architecture. Copy one of the sample commands; we will take the second one. Copy that to the clipboard. Now open a new terminal window, paste the command and press Enter. Straight away, we get a result. The answer should be 64. Go back to the previous terminal and take a quick look at the output of the calculator application. Here you can see that several of the individual containers have received and processed messages to perform the overall calculation. The X-Ray daemon has collected, batched and delivered the telemetry emitted by each of those containers up to the AWS X-Ray service.
Finally, each container has sent messages to the configured SQS queue. Let's run the calculator again, but this time with a more complex expression. In the calculator terminal window, scroll back to where the sample commands are and copy the last one. Now change back to the other terminal and paste the command. Before we execute, we will change the calcid parameter. Executing the command, we get our answer. The answer should be 43.8. Now go back to the calculator terminal and take a look at what has just happened. We see that all of the individual containers have been invoked to do their part of the calculation. We can also see the different calcid identifier, which will be promoted within the CALC Docker container to an X-Ray annotation. We see again that the X-Ray daemon has collected, batched and delivered the telemetry emitted by each of those containers up to the AWS X-Ray service. Finally, each container has again sent messages to the configured SQS queue.

By firing our test expressions at our calculator app, we will have generated X-Ray data that has been batched and published up to the X-Ray service. Let's take a look at the X-Ray console and, in particular, focus on the service map. The visualizations will have been generated by the X-Ray service. Start by clicking on the X-Ray link. This will take us into the X-Ray console. Next, click on the Get started button. Now click the Cancel link. X-Ray will quickly begin to compute the service map visualization. We can now view a map of the services that make up our calculator and, in particular, the calls made between the individual services. Let's now fire some more calculations through our calculator app to generate extra X-Ray data; this will show up in our service map. Heading back to the X-Ray service map, let's now filter the view down to data received in the last minute. As you can see, the service map is updated with the latest set of results, reflected back in the changing colors.
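As an aside, once an expression is in postfix form, evaluating it is a simple stack walk. The sketch below shows the idea in plain Node.js; in the demo application the operator steps are instead dispatched to the separate ADD, SUBTRACT, MULTIPLY, DIVIDE and POWER containers, so treat this as an illustration of the logic only.

```javascript
// Sketch of postfix (RPN) evaluation: push operands, and on each
// operator pop two values, apply it, and push the result back.
function evaluatePostfix(postfix) {
  const ops = {
    '+': (a, b) => a + b,
    '-': (a, b) => a - b,
    '*': (a, b) => a * b,
    '/': (a, b) => a / b,
    '^': (a, b) => Math.pow(a, b),
  };
  const stack = [];
  for (const token of postfix.split(/\s+/)) {
    if (token in ops) {
      const right = stack.pop(); // operands pop in reverse order
      const left = stack.pop();
      stack.push(ops[token](left, right));
    } else {
      stack.push(parseFloat(token));
    }
  }
  return stack.pop();
}

console.log(evaluatePostfix('4 3 ^'));     // 64
console.log(evaluatePostfix('1 2 + 3 *')); // 9
```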
Let's now drill down into the calculator service. We do so by clicking on the calculator segment. The X-Ray console now displays a list of traces collected for the calculator service. Let's now group our traces by Annotation.calcid. Clicking on the second line item, we see that we have exactly one captured trace. That trace represents the single invocation of the calculator application with the annotation calcid equal to the value 5768. Let's go back and click on the other trace group, captured with the annotation calcid equal to the value 1234. As you can see, we have multiple traces. Let's pick this one, which has a response code of 503. By drilling in, we get to see the timeline view for the current trace. We can see that the divide service has a fault. Let's drill down further into more detail. Here we can see finer details of the trace request and response, such as the status code and URL.

We also have the ability to look at the annotations that have been set on this trace. Here, calcid is set to the value 1234. Additionally, we can look at the metadata that has also been supplied. Here the metadata carries the left and right operands, the operator and the calculation result. Finally, any exceptions or stack traces thrown at runtime can be added to the X-Ray trace and are available for viewing in the Exceptions tab. This information can be very important in assisting in troubleshooting hotspots within the application.
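As a final aside, the annotations and metadata viewed above are produced by SDK calls along these lines. This is a hedged sketch: addAnnotation and addMetadata are the SDK's methods, but the helper function name and payload shape here are assumptions, not the CALC container's actual code.

```javascript
// Hedged sketch: promote the calcid request parameter to an X-Ray
// annotation (indexed, so it is filterable as Annotation.calcid) and
// attach the calculation details as metadata (stored but not indexed).
const AWSXRay = require('aws-xray-sdk');

function recordCalculation(calcId, left, operator, right, result) {
  const segment = AWSXRay.getSegment(); // current segment or subsegment
  segment.addAnnotation('calcid', calcId);
  segment.addMetadata('calculation', { left, operator, right, result });
}
```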

About the Author

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).