LUIS Overview & Demo
Utterances, Intents & Entities
Training, Testing, and Publishing a Model
This course focuses on how to use the LUIS (Language Understanding Intelligent Service) portal to create new LUIS models, how to enrich them with intents, entities, and utterances, and how to train, test, and publish your apps.
Not only will you get theoretical knowledge of LUIS and its components, but you'll also follow along with demonstrations from the LUIS portal to get a practical understanding of how to use the service.
- Obtain a general understanding of what LUIS is and how to interact with it
- Create LUIS resources
- Learn about utterances, intents, and entities and how they are used in language understanding at a practical level
- Learn how to test, train, and publish your LUIS models
This course is made for developers or architects who would like to know more about how to use the Language Understanding Intelligent Service, LUIS, to improve their chatbot development and experience.
To get the most out of this course, you should have some Azure experience, particularly surrounding subscription and resource groups as well as chatbots and language services. Some developer experience, including familiarity with terms such as REST API and SDKs, would also be beneficial.
So we're back here in the LUIS portal, and now it's time to finally train, test, and publish our model. To do the training, I just need to click here on the train button, but notice that the button is disabled, and I get this message that not all intents have utterances. So let's click to go to the intents page, and indeed I have no utterances for the None intent.
Let's click on it and I'll just add one here, saying hi, and that should be enough to fix the problem. Then I'll refresh the screen and notice that the train button is now available, so I'll click on it. The training has started, and that's about it; I don't need to do anything else.
Once it's done, let's click here on the test button. This opens an interactive dialog box where I can start typing my utterances. Notice that I also have this link over here to perform batch testing if I want, but the interactive option will be easier for you to see.
Let's use the same sentence I showed at the beginning of the course: book me a table for five people at 6:00 PM tomorrow. Then I'll press Enter, and as you can see, LUIS correctly identified the intent as RestaurantReservation.Reserve with over 99% confidence. That's one of the intents imported from the restaurant reservation domain in the earlier demo. Let's click on inspect to see what else is there.
Now I have this dark dialog box with much more information. I can see that it detected both 6:00 PM and tomorrow as RestaurantReservation.Time, and five as RestaurantReservation.NumberOfPeople. From this screen I am also able to add this utterance to a new intent, and I could compare these results with the published service, but we didn't publish yet, right? Let's fix that: I'll close the test dialog box and click on the publish button. Then I need to select the publishing slot, and this time I'll select the production slot, as I won't be using staging at this point. Next, I need to select whether I want sentiment analysis or speech priming. My bot will be text-based only, so I won't need speech priming. However, it costs me nothing to use sentiment analysis, and it does add value to my chatbot app, so I'll click on change settings and turn this one on. Then I'll click on done.
After a few seconds, I get this message at the top that the publishing was successful, so let's click on the link provided to see our endpoint URLs. LUIS also gives a sample HTTP request for us to use, so let's copy this code, open a new tab, and paste it here. At the very end of the request, there is a placeholder for my query, and I'll type the same one as before: book me a table for five people at 6:00 PM tomorrow. Then press Enter. Great, it is working, and now I have a JSON payload with the results.
Let's play with these options a little bit. First, I will change show-all-intents to false and press Enter to execute. Now you see that only the top intent is shown. If I switch verbose to false as well, look at how much less information is available. I can still see the sentiment level, though, and this sentence is pretty neutral at a 0.5 sentiment score. I also have the option log equal to true. If I switch it to false, that disables active learning, and we probably don't want that, as it denies LUIS the opportunity to learn over time.
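Those query parameters map directly onto the prediction endpoint's URL. As a minimal sketch of how a developer might build that URL in code, here is a Python helper; the endpoint host, app ID, and key below are placeholders, and the URL shape follows the LUIS v3 prediction API's production-slot route:

```python
from urllib.parse import urlencode

# Placeholder values -- substitute your own endpoint region, app ID, and key.
ENDPOINT = "https://westus.api.cognitive.microsoft.com"
APP_ID = "00000000-0000-0000-0000-000000000000"
KEY = "<your-prediction-key>"

def build_prediction_url(query, show_all_intents=True, verbose=True, log=True):
    """Build a LUIS v3 prediction GET URL for the production slot."""
    params = urlencode({
        "subscription-key": KEY,
        "show-all-intents": str(show_all_intents).lower(),
        "verbose": str(verbose).lower(),
        "log": str(log).lower(),  # log=false would disable active learning
        "query": query,
    })
    return (f"{ENDPOINT}/luis/prediction/v3.0/apps/{APP_ID}"
            f"/slots/production/predict?{params}")

url = build_prediction_url(
    "Book me a table for five people at 6:00 PM tomorrow",
    show_all_intents=False)
print(url)
```

Flipping the keyword arguments reproduces the experiments above: `show_all_intents=False` trims the response to the top intent, and `verbose=False` trims it further.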
Let's now switch back to the LUIS portal, click again on build, and click on review endpoint utterances. Because log was set to true for those queries, active learning kicked in, and I can see the utterances over here. The first utterance was correctly predicted, but the second one was just me pasting the HTTP code without changing the query placeholder, so let's click on it, click on this card, and confirm it. Then, for the query that was correctly predicted, I'll click on the confirm all entity predictions button to save it as an utterance.
Finally, if you're a developer who will be creating the calls to the LUIS service, you need a good understanding of how to construct the API requests. For that, let's open a new tab and type aka.ms/luis-api-v3. This is the LUIS API documentation. Let's see what we can find here.
On the left side of the screen, I can switch between the various HTTP verbs available, such as GET and POST. Then, on the main screen, we have a list of all Azure regions where the LUIS prediction service is available. Next, we have information on how to construct the request URL, including parameters and request headers. Some of these are the ones we played with in the browser, remember?
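One detail worth illustrating from those request headers: the prediction key does not have to travel in the URL's subscription-key parameter; it can instead be sent in the Ocp-Apim-Subscription-Key request header. A small sketch using only the standard library (the URL, app ID, and key below are placeholders):

```python
import urllib.request

# Placeholder endpoint and app ID -- substitute your own values.
URL = ("https://westus.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/"
       "00000000-0000-0000-0000-000000000000/slots/production/predict?query=hi")

# Send the key as a header instead of a subscription-key query parameter.
req = urllib.request.Request(
    URL, headers={"Ocp-Apim-Subscription-Key": "<your-key>"})

print(req.get_full_url())
print(req.headers)  # note: urllib normalizes the header name's capitalization
```

Keeping the key in a header rather than the URL has the practical benefit that it is less likely to end up in server access logs or browser history.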
Finally, we have a few possible response codes depending on the situation. Ideally, you're looking to get response code 200, which means the request was successful. You can also see a sample JSON result for this query, and that's about it. Your LUIS app is ready to accept requests, so when you're building your chatbot, all you need to do is call the service, receive the JSON payload back, and invoke the appropriate functions accordingly.
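To sketch that last step: a chatbot can read the top intent out of the returned payload and dispatch to a matching handler. The payload below is illustrative; its field names follow the v3 response fields seen in the demo (topIntent, intents, entities, sentiment), but the values are made up and the handler names are hypothetical:

```python
import json

# Illustrative JSON payload, shaped like a LUIS v3 prediction response
# (field names follow the v3 schema; the values here are invented).
payload = json.loads("""
{
  "query": "Book me a table for five people at 6:00 PM tomorrow",
  "prediction": {
    "topIntent": "RestaurantReservation.Reserve",
    "intents": {"RestaurantReservation.Reserve": {"score": 0.997}},
    "entities": {"RestaurantReservation.NumberOfPeople": ["five"]},
    "sentiment": {"label": "neutral", "score": 0.5}
  }
}
""")

def handle_reservation(prediction):
    # Pull the extracted entity out of the prediction, with a fallback.
    people = prediction["entities"].get(
        "RestaurantReservation.NumberOfPeople", ["?"])
    return f"Booking a table for {people[0]}."

# Map each intent the bot cares about to a handler function.
handlers = {"RestaurantReservation.Reserve": handle_reservation}

top = payload["prediction"]["topIntent"]
reply = handlers.get(top, lambda p: "Sorry, I didn't understand.")(
    payload["prediction"])
print(reply)  # -> Booking a table for five.
```

The dictionary-of-handlers pattern keeps the bot's routing logic in one place, and the default lambda gives unrecognized intents (such as None) a graceful fallback.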
Emilio Melo has been involved in IT projects in over 15 countries, with roles ranging across support, consultancy, teaching, project and department management, and sales—mostly focused on Microsoft software. After 15 years of on-premises experience in infrastructure, data, and collaboration, he became fascinated by Cloud technologies and the incredible transformation potential they bring. His passion outside work is to travel and discover the wonderful things this world has to offer.