Amazon Lex - Deep Dive
Creating a Lex Bot
In this lecture we’ll walk you through the steps required to get a fully functioning Amazon Lex demo up and running. We will guide you through the process of creating a chatbot interface that allows a user to start and stop a set of EC2 instances. The user will be able to either chat or vocalise commands such as “start my servers please” or “stop my servers please”. The chatbot in turn will ask the user which specific EC2 instances they would like to start or stop. The user will be able to respond by stating the type of instance, where type is a metadata tag applied to all instances. Finally, the chatbot will ask for confirmation to proceed or not.
Once the clone is finished, navigate into the project root folder and list the directory contents. Here we can see that our Lambda project consists of several files and subdirectories. The key file here is the lambda.py Python file. This file contains the source code that we need to package and then upload into our Lambda function. Let's take a closer look at this file. We do so by starting Visual Studio Code within the current directory. Selecting the lambda.py file, we can now take a closer look at the Python code.
First point of interest: we're importing and leveraging the excellent Boto 3 library to interact with the EC2 API. Note that the AWS Lambda service runtime has the Boto 3 library installed by default. Next, we're expecting an environment variable named instance region to be configured on the Lambda function itself. The instance region represents the AWS region into which our EC2 instances have been launched.
Next, we're expecting to work with two Amazon Lex intents named start instances and stop instances. We'll soon configure these intents as we go along. The entry point for our Python Lambda function is the Lambda handler function. The Lambda handler function passes execution on to the dispatch function. The first thing the dispatch function does is extract the incoming intent name; it then determines whether we're starting or stopping our instances and calls the appropriate function. If we receive an unexpected intent, we raise an exception.
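The handler and dispatch flow described above can be sketched roughly as follows. The event shape follows the Lex V1 input format, and the exact intent names (StartInstances, StopInstances) and function names are assumptions based on the transcript; the start and stop functions are stubs here, standing in for the real logic.

```python
def ec2_instances_start(intent_request):
    # Stub standing in for the real start logic, sketched separately.
    return {"action": "start"}


def ec2_instances_stop(intent_request):
    # Stub standing in for the real stop logic.
    return {"action": "stop"}


def dispatch(intent_request):
    # In a Lex V1 event, the intent name arrives under currentIntent.
    intent_name = intent_request["currentIntent"]["name"]
    if intent_name == "StartInstances":
        return ec2_instances_start(intent_request)
    if intent_name == "StopInstances":
        return ec2_instances_stop(intent_request)
    # Unexpected intent: raise an exception, as the transcript describes.
    raise Exception("Intent " + intent_name + " not supported")


def lambda_handler(event, context):
    # Entry point configured on the Lambda function.
    return dispatch(event)
```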
Stepping into the EC2 instances start function, we extract the server type slot to determine which set of servers we're going to start up. The server type slot will be set to either red, green or blue. We then make a connection to the EC2 API using the Boto 3 library. We create a filter against the type tag and set its value to the server type we extracted earlier. We make a call to the EC2 describe instances API, passing in the filter. The result should be a filtered set of EC2 instances whose type tag is set to the passed-in value. We then gather up the respective instance IDs and call the start instances API.
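A minimal sketch of that start function might look like the following. The slot name (ServerType), tag key (Type), and environment variable name (INSTANCE_REGION) are assumptions based on the transcript; the boto3 import is deferred into the function so the filter helper can be read and exercised without AWS access.

```python
import os


def build_type_filter(server_type):
    # Match instances whose "Type" tag equals the requested slot value.
    return [{"Name": "tag:Type", "Values": [server_type]}]


def ec2_instances_start(intent_request):
    # Extract the server type slot from the Lex V1 event.
    server_type = intent_request["currentIntent"]["slots"]["ServerType"]

    import boto3  # installed by default in the Lambda runtime

    ec2 = boto3.client("ec2", region_name=os.environ["INSTANCE_REGION"])

    # Describe only the instances carrying the matching Type tag.
    response = ec2.describe_instances(Filters=build_type_filter(server_type))

    # Gather the instance IDs from all reservations.
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]

    # Start the filtered set of instances.
    ec2.start_instances(InstanceIds=instance_ids)
    return instance_ids
```

The equivalent stop function would differ only in calling `ec2.stop_instances` at the end.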
Finally, if all goes well without any interruption or error, we conclude by calling the close function. The close function simply passes back the expected dialogue action message. The dialogue action message sets the type to close and the fulfillment state to fulfilled. Finally, taking a quick look at the reverse stop function, EC2 instances stop, we can see that it shares the same logic as the equivalent start function, except that where the start function calls into the EC2 start instances API, the stop function calls into the EC2 stop instances API. Right, let's move on.
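The close function described above can be sketched as follows, assuming the Lex V1 response format; the message wording is illustrative only.

```python
def close(fulfillment_state, message_content):
    # Lex V1 expects the fulfilment response wrapped in a dialogAction
    # envelope, with the type set to Close and a fulfilment state.
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": fulfillment_state,
            "message": {
                "contentType": "PlainText",
                "content": message_content,
            },
        }
    }
```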
Swapping back to the terminal, we now need to zip up and package the lambda.py file. We do so by entering the following command,
zip lambda.zip lambda.py. Jumping back into the Lambda console, we click on the upload button and select the zip file we just created. Next, we need to update the handler configuration. Here we set the handler to be the concatenation of the Lambda file name minus the .py extension, together with the name of the Lambda entry function—in this case, the Lambda handler function—giving us lambda.lambda_handler. Next, we expand the environment variable section. We need to add the instance region environment variable as per the expectation of our Lambda function. We set the corresponding value to be the region into which we earlier launched our EC2 instances. In this case, we launched into the North Virginia region. Therefore, the value is us-east-1. The final setting we need to adjust is the timeout value for the Lambda function. Scrolling down, we increase the default value from three seconds to one minute. That should give the Lambda function more than enough time to complete.
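For reference, the same packaging and configuration steps could also be driven from the AWS CLI instead of the console; this is a sketch, and the function name StartStopInstances is an assumption for illustration.

```shell
# Package the source file.
zip lambda.zip lambda.py

# Upload the new code to the Lambda function.
aws lambda update-function-code \
    --function-name StartStopInstances \
    --zip-file fileb://lambda.zip

# Set the handler, timeout, and environment variable in one call.
aws lambda update-function-configuration \
    --function-name StartStopInstances \
    --handler lambda.lambda_handler \
    --timeout 60 \
    --environment "Variables={INSTANCE_REGION=us-east-1}"
```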
Next, we highlight the IAM role that the Lambda function executes under. Recall that, in our case, the IAM role was named lambda_basic_execution. The reason we do so is that we'll need to update this IAM role and give it extra permissions to start and stop our EC2 instances. But for now, let's just go ahead and save our Lambda function. If all goes well, the zip file containing our Python source code will be uploaded and unpackaged, allowing us to see the source code inline, which we can, so this is good.
Next, let's configure a set of test events. This will allow us to test the Lambda function independent of the Lex setup and configuration. Select the configure test events option next to the test button. We set start red instances for the event name and then copy and paste the example JSON from the lambda.startinstances.red.JSON file from within our Lambda project code base.
Here we highlight that our test message is set to use the start instances intent, with the slot server type set to red. We repeat this process for each of the other server types, blue and green, and then repeat all three server types—red, blue and green—for the stop instances intent.
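The test events described above would each look something like the following, expressed here as a Python dictionary. The field names follow the Lex V1 input format, and the intent and slot names are assumptions based on the transcript; the actual JSON file in the project may carry additional fields.

```python
# Approximate shape of the "start red instances" test event.
start_red_event = {
    "currentIntent": {
        "name": "StartInstances",
        "slots": {"ServerType": "red"},
    }
}

# The five remaining events vary only the intent name and the slot
# value, e.g. "StopInstances" with "blue".
```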
At the end of this setup, we should have six test events: three for starting instances and three for stopping instances. Before we execute any of the test events, we need to go and make the Lambda IAM role update we briefly mentioned earlier. Heading over to the IAM service, select the roles item and filter on Lambda. Within the filtered results below, click the lambda_basic_execution role. Expanding the currently attached policy, you can quickly see that the only permissions it currently has are for operations on CloudWatch Logs. We now need to add EC2 permissions to allow it to start and stop our instances. We do so by attaching the AmazonEC2FullAccess managed policy. This gives us more permissions than we require, but for the simplicity of the demonstration, it will suffice. Clicking the attach policy button adds the policy to our role. We can now jump back into the Lambda console and execute some test events against our configured Lambda function. Before we do so, let's take a quick look at the current state of our EC2 instances.
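Since AmazonEC2FullAccess grants far more than this demo needs, a tighter inline policy covering just the calls described in the transcript might look like the following sketch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
```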
Here you can see that all six instances are in a running state. Let's kick off our test to stop the red-tagged instances. We do so by selecting the stop red instances test event, then clicking the test button. Excellent! It looks like the test succeeded, as per the execution result message. Expanding the details section, we can clearly see that our Lambda function has completed successfully, as conveyed by the returned dialogue action message.
Let's jump back into the EC2 console. Here we can see the previous state, in which the red instances were running. If we now refresh, we see that the two red instances are stopping as expected. This is perfect. Let's go ahead and test stopping the green and blue instances, running the stop green instances and stop blue instances test events respectively. As can be seen, all tests have completed successfully and all six instances are now in a stopped state. This is a very good indicator that the configuration of the Lambda function is working and appropriate. We can now move on and complete the last piece of the puzzle: the Amazon Lex bot itself.