Amazon Lex - Deep Dive
Creating a Lex Bot
In this Amazon Lex course, you will be guided through an in-depth study of the Amazon Lex service. We review where and when to use this service to best effect. We'll go over Chatbots in general and why they have become both useful and popular. You will be introduced to the key features and core components within the Amazon Lex service. We spend time understanding and reviewing Amazon Lex Bots, Intents, Utterances, Slots, and Slot Types.
We focus on the developer workflow and how Amazon Lex integrates seamlessly with other AWS services. We take a look at and review the capabilities of the Amazon Lex API and associated SDKs. We review Versioning and Aliases and how they facilitate development within the Amazon Lex service. Finally, we walk you through building a fully functional chatbot implemented using the Amazon Lex service, which when completed will allow you to start and stop EC2 instances.
- Understand the basic principles of Amazon Lex for building Conversational Interfaces
- Learn how to effectively use Amazon Lex to manage and maintain your own chatbots
- Recognize and explain how to perform all basic Amazon Lex related tasks such as configuring Intents, Slots, and Lambda functions (code hooks)
- Understand IAM security permissions required for Amazon Lex to interact with other AWS services such as EC2
- Be able to competently manage Amazon Lex using the AWS console
- AWS Administrators
- Software Developers and Engineers
To be able to get the most out of this course we recommend having a basic understanding of:
- AWS Lambda
- Software Development
- Python
The example code used within this course can be found here:
Related Training Content
After completing this course we recommend taking the 'Introduction to Amazon Rekognition' course.
To discover more content like this, you will find all of our training in the Cloud Academy Content Training Library.
Let's pause for a moment here and review what we've just accomplished. We have conversed with a chatbot implemented within Amazon Lex, which, in turn, has fulfilled our request via a Python Lambda function, resulting in our EC2 instances starting. This is a great result and shows how chatbots can be utilized to perform all sorts of tasks. Keep in mind that, although we've conversed with our chatbot via typed commands, this is not the only way to engage with it. We'll soon see we can actually speak our commands to the chatbot and achieve the same results.
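The fulfillment step just described can be sketched as a Lex code hook. The following is a minimal sketch, not the course's actual function: the intent name, the `ServerType` slot, and the message wording are assumptions, though the Lex V1 event and response shapes are the documented ones. The real function would also call the EC2 API to start or stop the matching instances.

```python
# Hypothetical sketch of the Lambda code hook behind the StartStop instances
# bot. Slot name "ServerType" and intent name "StopInstances" are assumptions;
# the dialogAction structure is the standard Lex V1 response format.

def close(message):
    """Build a Lex V1 'Close' dialog action that ends the conversation."""
    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {"contentType": "PlainText", "content": message},
        }
    }

def lambda_handler(event, context, ec2=None):
    """Handle a Lex fulfillment event; `ec2` would be a boto3 EC2 client."""
    intent = event["currentIntent"]["name"]
    server_type = event["currentIntent"]["slots"].get("ServerType")  # e.g. "red"
    if ec2 is not None:
        # In the real function, the EC2 client would look up instances tagged
        # with server_type and call start_instances() / stop_instances().
        pass
    action = "Stopping" if intent == "StopInstances" else "Starting"
    return close(f"{action} your {server_type} instances now.")
```

The handler stays a pure function over the event, which keeps the Lex response logic easy to unit test without AWS credentials.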
Next, we'll leverage Amazon Polly to synthesize our commands into audio files. We'll then interact with the Amazon Lex API using the terminal. Let's head back to our terminal. Here we navigate into the test subdirectory. Listing the contents of this directory, we have two files: one which contains the API commands for starting our instances and the other which contains the API commands for stopping our instances.
Let's jump into Visual Studio Code and take a look at the file containing the API commands to stop instances. Starting from the very top, we copy the first code block. This code block makes a call to the Amazon Polly service API to synthesize the text "stop my instances please" into an MP3 file. Let's copy this and execute it from within the terminal. Next, we copy the following command. This just plays back the generated MP3 file so that we can hear its content.
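The course doesn't reproduce the exact command on the page, but the Polly call can be sketched by assembling the equivalent `aws polly synthesize-speech` invocation. The voice (`Joanna`) and output file name below are assumptions; the CLI flags themselves are real.

```python
# Hypothetical reconstruction of the Polly step from the demo. The voice id
# and file name are assumptions, not taken from the course materials.

def polly_synthesize_cmd(text, outfile, voice="Joanna"):
    """Assemble the `aws polly synthesize-speech` command as an argv list."""
    return [
        "aws", "polly", "synthesize-speech",
        "--output-format", "mp3",
        "--voice-id", voice,
        "--text", text,
        outfile,
    ]

cmd = polly_synthesize_cmd("stop my instances please", "stop_instances.mp3")
# With AWS credentials configured, run it via subprocess.run(cmd, check=True).
```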
- [Program Output] Stop my instances please.
Next, we copy the FFmpeg command to convert the MP3 file into PCM format. Currently, the Amazon Lex service API does not support posting MP3 files, hence the conversion to PCM format. We then copy the command to post the PCM file to our StartStop instances bot. The StartStop instances bot processes the incoming PCM file and matches it against one of the utterances registered within the StopInstances intent. The chatbot responds by eliciting the server type slot, as can be seen in the response JSON; that is, we're requested to state which instance type we'd like to stop. We continue by using the Amazon Polly service again to synthesize the text "red". Let's copy the respective command and execute it from within the terminal. We then play back the resulting MP3 file so we can hear its content.
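The conversion and posting steps can be sketched the same way. Lex V1's PostContent operation accepts raw 16 kHz, mono, 16-bit PCM declared as `audio/l16; rate=16000; channels=1`, which is what the FFmpeg flags below produce. The bot name, alias, and user id are assumptions for illustration; the course's actual values may differ.

```python
# Hypothetical reconstruction of the FFmpeg conversion and the Lex
# post-content call from the demo.

def ffmpeg_to_pcm_cmd(mp3_in, pcm_out):
    """Convert MP3 to 16 kHz mono signed 16-bit little-endian raw PCM."""
    return ["ffmpeg", "-i", mp3_in, "-ar", "16000", "-ac", "1",
            "-f", "s16le", pcm_out]

def lex_post_content_cmd(pcm_in, response_out, bot="StartStopInstances",
                         alias="prod", user="test-user"):
    """Post the PCM audio to the bot; bot/alias/user are assumed values."""
    return ["aws", "lex-runtime", "post-content",
            "--bot-name", bot, "--bot-alias", alias, "--user-id", user,
            "--content-type", "audio/l16; rate=16000; channels=1",
            "--input-stream", pcm_in, response_out]
```

The response file written by `post-content` contains the JSON the transcript refers to, including the bot's next prompt.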
- [Program Output] Red.
We use FFmpeg again to convert from MP3 to PCM format, posting the resulting PCM file to our StartStop instances Lex bot. The StartStop instances bot processes the incoming PCM file and then sends back a confirmation request, as can be seen here in the response JSON. We continue by using the Amazon Polly service one more time, this time to synthesize the text "yes". Let's copy the respective command and execute it from within the terminal. We then play back the resulting MP3 file so that we can hear its content.
- [Program Output] Yes.
We use FFmpeg again to convert from MP3 to PCM format. This time, before we post the resulting PCM file to our StartStop instances bot, let's jump back over to the EC2 console to check on the status of our EC2 instances. Here, we can see that the red EC2 instances are in a running state. Let's proceed and submit the yes confirmation PCM file to our StartStop instances bot. The StartStop instances bot processes the incoming PCM file and kicks off the Lambda function in the background to stop the red EC2 instances. This can be inferred from the response that we now see within the terminal. Jumping back into the EC2 console, let's refresh and check that the red instances are shutting down, and indeed they are. Great stuff!
As a final test of our fully working chatbot demonstration, we'll use the microphone capabilities within the test bot pane and speak our instructions. Jump back into the Lex service console. Firstly, click on the microphone icon to activate the microphone and speak the words "start my instances please". When prompted, speak "red" into the microphone, followed by "yes". Again, the StartStop instances bot will restart the red EC2 instances. We can confirm this by checking in on the EC2 console. Refreshing the view, we see that, indeed, the red instances have been restarted. Very cool.
Lastly, let's review the different methods we used to converse with our chatbot. First, we conversed with the StartStop instances bot using text-based commands. Next, we used the Polly and Lex APIs together to interact with the StartStop instances bot from the terminal. Finally, we used a microphone to speak our commands. Regardless of the communication method, the end result was the same: we were able to use the StartStop instances bot to control the starting and stopping of our EC2 instances. That concludes this demonstration and lecture. In the next lecture, we review what we've learnt during this entire course. Go ahead and complete this lecture and we'll see you in the next one.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).