Welcome to this hands-on AI-900 lab session, where we explore how to build intelligent conversational AI experiences using Azure AI Services! Whether you're preparing for the Microsoft AI-900 Certification or looking to integrate AI-powered chat and voice interactions into your applications, this step-by-step demo will guide you through the process.

🔍 What You’ll Learn in This Video:
1️⃣ Introduction to Conversational AI & Azure AI Services
2️⃣ Setting up Azure Bot Services & Language Studio
3️⃣ Building and configuring a chatbot to understand and respond naturally
4️⃣ Integrating AI-powered language understanding (LUIS & Azure OpenAI)
5️⃣ Deploying and testing your AI-powered conversation bot
6️⃣ Real-world applications for customer support, automation, and virtual assistants

🛠️ Who Is This For?
Beginners exploring Conversational AI & Chatbots
Professionals preparing for the Microsoft AI-900 Certification
Developers and businesses looking to automate interactions using AI

📌 Key Highlights:
✅ No-code/low-code chatbot development
✅ AI-powered language understanding for natural conversations
✅ Step-by-step guide to Azure AI Chatbot & Virtual Assistant setup
✅ Use cases for industries like customer service, healthcare, and e-commerce

💡 Start your journey into AI-powered conversations with Azure today!

Explore Our Other Azure Courses and Practice material on: https://www.youtube.com/@skilltechclub

Transcript
00:00 Hi guys, good morning, good afternoon, and good evening. Welcome back to SkillTech Club, your daily dose of AI and Azure learning. My name is Maruti, and I'm very happy to welcome you to this video, which is going to be very interesting. I'm assuming that you have been following our videos, watching the step-by-step language service guide and the hands-on demos of the different Azure AI services.
00:40 In this video we are going to focus on conversational language understanding. This is a very important concept in natural language processing. If you are not familiar with the capabilities that NLP (natural language processing) provides, I strongly recommend that you watch that video first. In it I have already explained the three things you need to understand: utterances, entities, and intents. Basically, conversational language understanding helps you work out the intent of the user. When a user says something, types something into a chatbot, or provides some other kind of input, the computer or AI system has to understand what that user's intent is and respond accordingly. Conversational language understanding focuses exactly on this. To simplify the process: when you try to communicate with someone, you pitch your point and explain what you are actually looking for, and the other person understands it. In the same way, if a computer system has to understand you, you need to design your conversational language understanding project so that the computer can work this out.
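To make those three terms concrete, here is a minimal sketch in plain Python. It is purely illustrative: the "switch on" and "switch off" intents and the "device" entity are the names we are about to create in Language Studio, and nothing here calls Azure.

```python
# Purely illustrative: how a few utterances map to an intent and a device
# entity. "switch on" / "switch off" and "device" are the names we are
# about to create in Language Studio; nothing here calls Azure.
labeled_utterances = [
    {"text": "switch on the fan",  "intent": "switch on",  "device": "fan"},
    {"text": "put the light on",   "intent": "switch on",  "device": "light"},
    {"text": "turn the light off", "intent": "switch off", "device": "light"},
]

for u in labeled_utterances:
    print(f"{u['text']!r:25} -> intent={u['intent']}, device={u['device']}")
```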
01:57 Now, how exactly are we going to do this? That's what we are going to see step by step in this video. To save my time and yours, I have already created a language service. As you can see, I'm already in my Azure portal; this is the same language service we created earlier, in the previous lab, which focused on a custom question-answering solution. If you do not know how to create a language service in the Azure portal, I request you to check out that video. As of now the language service is successfully created, and I have Language Studio open, already connected to my account. All I have to do is create a new project.
02:36 When I click on the create new project option, there are multiple types of projects available. We are going to select conversational language understanding, which, as the note here says, lets you build natural language into apps, bots, and IoT devices. It can indeed be used with IoT devices, because modern IoT devices, your modern Alexa or Google assistant, work on the commands you speak. How do they understand the intent of the user? Well, they use a similar kind of technology. Let's see it in action. I'm going to choose conversational language understanding and give the project a proper name; let's say I name this project "smart home". Don't worry, we are not going to build an actual smart home, but we do want a smart-home-devices kind of scenario, so I'll just add "simple home automation" as well. I do not want to enable the multilingual project option right now, and my default language is going to be English only. I'm okay with all the configuration, so I'm going to click next and create.
03:52 Once the project is successfully created, I want you to focus on the left-hand column. It shows two important steps: schema definition and data labeling. These two are going to help you build your conversational language understanding project, meaning the intents, the entities, and the labeling, and that's exactly what we are going to do now. As you can see, we do not have any intents yet, and we don't have any entities either, so we'll build up some intents first. When you want to know the user's intent, you have to work it out from what the user is saying or typing.
04:26 Maybe the user provides some input text, or maybe they say something through voice. Whatever the user provides, that statement itself is known as an utterance, and there can be multiple utterances a user might say. Based on those, you have to identify the intent of the user. If this is all a little confusing, I request you to go through that earlier video, and just stay with me for the next five to ten minutes and you'll understand exactly what utterances and intents are about. In the intent section I do not have any intents yet, but I want to add a new one. Let's say the intent is "switch on", and I click on add intent. I also want one more intent, which is "switch off".
05:22 So maybe my user has the intent of switching something on or switching something off; that's what I'm trying to capture here. When I click into the switch on intent, it tells me that I should add a training set so the model can be trained on it, and that I can do that through data labeling. So the moment I click and go inside, it takes me to the data labeling section. I'm inside data labeling and I'll click on skip all. Now it asks me to select my intent, which is switch on, and to add my utterances. These utterances are the statements a user might say or type, and that's why I'm going to add a couple of them here. 06:07 Let's say the user says "switch on the fan"; that's my first utterance. Or "put the fan on", the second one. Or maybe "put the light on", or "switch on the light". And I'll add one more, "turn the fan on". Remember, because this is the switch on intent, we are just giving different variations here, and we are saying that something should be turned on whenever the user says anything like this. Also remember that, as of now, this training set does not know what a fan or a light is; we will do something about that too, just hang on. For now my intent is switch on, I've added these utterances, and I'm going to click on save changes.
07:01 Now I'm going back to my schema definition, and it shows that switch on has five labeled utterances. Next I'll go into switch off and, in the same way, create some utterances for the switch off intent: "turn the light off", "switch off the fan", "put the fan off", "put the light off", "turn off the light", "switch the fan off". I have now added all of these to the switch off intent. These are my utterances, and you can add as many as you want. I'm going to click on save changes once again.
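If it helps to see the training data we just typed into the portal in one place, here is a minimal sketch of it as a plain Python structure; this is for reference only, since Language Studio stores it for you and no code is needed for this lab.

```python
# The same training data we just entered in Language Studio, written out as
# a plain dictionary purely for reference; nothing here calls Azure.
training_utterances = {
    "switch on": [
        "switch on the fan",
        "put the fan on",
        "put the light on",
        "switch on the light",
        "turn the fan on",
    ],
    "switch off": [
        "turn the light off",
        "switch off the fan",
        "put the fan off",
        "put the light off",
        "turn off the light",
        "switch the fan off",
    ],
}

for intent, utterances in training_utterances.items():
    print(f"{intent}: {len(utterances)} utterances")
```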
07:40 So basically, up to this point we have done one thing: we created our two intents and added some utterances to each of them. Now, even though the page says "labeled utterances", these utterances are not actually labeled yet, because we have not connected them with any particular label. To do that, we need one more step: entities. Entities are the elements that are connected with your intents and utterances. For example, when we said "switch the fan off", the fan is the entity we want to turn off or on, and in the same way the light is also an entity. We now need to associate these entities with our utterances, and that is what we configure next using the add entity option.
08:27 To do this, in the entities section we click on add and say that the new entity we want to create is "device". Entities can be of multiple types: a learned entity; a list entity, something like weekdays; a prebuilt entity, something like an email address or a date-time; or a regular expression that you add as an entity. I could choose learned, because then I would be teaching my training data set what exactly this entity is, but let me go with list instead, so that I can create a list of the elements here, since we can have a list of devices. Then I click on add entity.
09:16 Inside the list entity there is an option: if you're creating a list entity, you can add a new list and configure it. I'm going to add a new list, and I can also add a value here; let's say I add "phone". Now, I know phone does not appear in our utterances, but I'm just showing you that we can add this kind of list here. We can also add synonyms for it, like "mobile" or "cell phone" for phone, and we can add more items and more lists in the same way by saving them. That's how you configure it. In my case, though, I really do not have any utterance that involves a phone, so I really don't need this list. I'm just going to select it and click on delete.
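Conceptually, a list entity is just a set of normalized values together with their synonyms. Here is a minimal sketch in plain Python of the throwaway "phone" sublist we just created and deleted; it is illustrative only and nothing is sent to Azure.

```python
# Illustrative shape of a list entity: each normalized value plus its synonyms.
# This mirrors the throwaway "phone" sublist from the portal (which we deleted
# again); nothing here is sent to Azure.
device_list_entity = {
    "phone": ["mobile", "cell phone"],
    # more sublists could follow, e.g. "fan": ["ceiling fan"]
}

def normalize(word):
    """Map a synonym back to its normalized list value, if any."""
    for value, synonyms in device_list_entity.items():
        if word == value or word in synonyms:
            return value
    return None

print(normalize("cell phone"))  # -> phone
```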
10:28 So, with that phone entity deleted, what I'm left with is my device entity, of type list, without any list values inside it for now. What am I going to do with it? Well, once the entity is configured and the device entity is available, I'm going to link it to my utterances. For that I click on intents, then on switch on, and in the switch on utterances I select the word "fan". The moment I select the word fan in an utterance, I choose device, and it identifies that, oh, this is actually one of the devices. In the same way, I mark light as a device too.
11:17 So I'm selecting light and fan as devices. In the same way you could have utterances that mention a phone, and you could add all of those as well. Now let's say I want to add one more utterance to switch on: "turn the mobile on". We don't normally say it like that, but yes, "turn the mobile on", and I'll add it. While adding it, you can again mark mobile as nothing but another device. This is how you connect them. I'm going to click on save changes, and the moment I save, it shows me that device now has six learned labels inside it. We are good with this.
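Behind the scenes, labeling a word as an entity simply records which span of the utterance belongs to which entity. Here is a hedged sketch of what one labeled utterance looks like; the field names only approximate a CLU project export, so treat it as conceptual rather than the exact schema.

```python
# Illustrative only: one labeled utterance with an entity span. The field
# names approximate a CLU project export; the exact schema may differ.
labeled = {
    "text": "switch on the fan",
    "intent": "switch on",
    "entities": [
        {"category": "device", "offset": 14, "length": 3},  # the word "fan"
    ],
}

e = labeled["entities"][0]
print(labeled["text"][e["offset"]:e["offset"] + e["length"]])  # -> fan
```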
12:09 I'm going back to my schema definition, and we'll click on switch off now and label it the same way: fan is a device, light is a device, and we don't have anything else, so that's fine. I click on save changes. With that my entity labeling is done; it shows me fan and light. So we have two intents with six utterances in each of them, and we have one entity, which is device. Because we are trying to model a smart home, in the future we might want to add more devices to it. This is how you create your training data set, and that's what we have done so far. Once your training data set is successfully created, you have to focus on training the model, and then you have to make sure the model is performing well; that's what we do in any machine learning work.
13:07 Now let me click on training jobs. Logically, I should provide many more utterances, multiple entities, and a richer schema with labels, but because this is a short video I'm not going into that much depth. I'll click on start a training job. Of course, my model is not going to be perfect right now, but we are doing this just to give you a sneak peek of how it works. 13:35 It asks me to name the new model; I'll call it "smart home model". It's going to use standard training, which is free; if you go for advanced training you can customize the algorithms and other things, but I'm not doing that here. Standard training is free as well as very fast, so I'll go with it. It also asks what split ratio I want to configure: 80% of the data is used for training and 20% for testing. Anyhow, we have very little data, so we are not really worried about this. I'll click on train, and it's going to take some time to train this model.
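Before we move on, a quick note on what that 80/20 split means: four out of every five labeled utterances are used to fit the model and the rest are held back to measure it. A minimal sketch of the idea in plain Python, purely to illustrate the ratio; Language Studio performs the split for you.

```python
import random

# Illustrative 80/20 split of the labeled utterances; Language Studio does
# this automatically when you keep the default split option.
utterances = [
    "switch on the fan", "put the fan on", "put the light on",
    "switch on the light", "turn the fan on", "turn the light off",
    "switch off the fan", "put the fan off", "put the light off",
    "turn off the light",
]

random.shuffle(utterances)
cut = int(len(utterances) * 0.8)
train, test = utterances[:cut], utterances[cut:]
print(len(train), "for training,", len(test), "for testing")  # 8 / 2
```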
14:13 It should be very quick, because we are doing the free standard training, and it will soon show that the model training status is completed. Of course, this model will be nowhere near a perfect model, but just for the sake of checking whether everything works, we are training it anyway. And yes, with this my model training is successfully completed.
14:36 As you can see, it's showing training succeeded, which means my model is successfully trained. Now let me click on the model performance section, where I can see my model and click on it. 14:48 This shows me a precision score, a recall score, and some other model performance metrics. As you can see, it clearly gives me a warning that we do not have enough intent labels in this training set; as I said, the data is not enough, but it has still trained the model. If I click on the model performance tab, it shows that the switch on and switch off intents were trained successfully. If we click on the test set details, it shows, for example, that light was picked up as a device, and in the data distribution it warns again that not enough training data is available and that we should add more. In the confusion matrix I can see the values associated with it. I'm not going very deep into this, because we are not learning machine learning right now, but I'm happy that my model has trained successfully: the F1 score, precision, and recall are all 100%, which means it is able to identify the two intents I'm trying to configure.
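As a quick refresher on what those numbers mean: precision is the share of predicted matches that are correct, recall is the share of actual matches that are found, and F1 is their harmonic mean. A small sketch with made-up counts (not the values from this model):

```python
# Precision, recall and F1 from made-up counts (illustrative only).
true_positives = 9   # utterances correctly predicted as "switch on"
false_positives = 1  # predicted "switch on" but actually something else
false_negatives = 2  # actual "switch on" that the model missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)
f1 = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
```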
15:46 Now, in order to use this, we have to go to the next step, which is deploying the model. Obviously, without deploying the model you cannot use it. So I'm going to click on add deployment and create a new deployment; the name of the deployment will be "sh deploy", for smart home deploy. The model I want to choose for it is my smart home model, it will be deployed in this region, and I'll click on deploy.
16:23 Deployment is also going to take some time. Once the deployment is successfully created, it shows me that sh deploy is available here. If I select the deployed model, I can get a prediction URL; this generates a URL that you can test by simply posting data to it, for example with curl or a small script.
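If you do want to call the deployed model from code instead of the Language Studio UI, the prediction URL takes a simple JSON POST. Here is a rough sketch using Python's requests library; the endpoint, key, api-version, and exact payload field names below are my assumptions, so copy the real prediction URL and sample request that Language Studio shows you for your resource.

```python
import requests

# Rough sketch of calling the CLU runtime REST API. The endpoint, key and
# api-version are placeholders; use the prediction URL and sample request
# that Language Studio generates for your own resource.
endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
key = "<your-language-resource-key>"

body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {"id": "1", "participantId": "user", "text": "turn the fan on"}
    },
    "parameters": {"projectName": "smart home", "deploymentName": "sh deploy"},
}

resp = requests.post(
    f"{endpoint}/language/:analyze-conversations",
    params={"api-version": "2023-04-01"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
prediction = resp.json()["result"]["prediction"]
print(prediction["topIntent"], prediction["entities"])
```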
16:41 In the video, though, I don't want to call the URL myself; I want to test my deployment directly, to see whether it's working or not. For that I'm going into the test deployments tab, the last and final tab we are going to look at, where we can check whether the configuration we have done is working. I'll choose my deployed model, which is sh deploy, and in the text box I have to type something. Let's say I type "turn the fan on" and click on run the test. In the result, it says the top intent is switch on, and it's 100% sure about this. It also identified that there is one device associated with it, which is fan.
17:23 Now let's say I type "turn the phone on". Remember, the word phone was not used in any of the labels; I used mobile, but phone was not there. Still, if I run the test, it shows that the top intent is switch on and that phone is a device. And I hope you'll agree with me that there was no word like TV either. I type "turn the TV on" and run the test, and this time too it says that, yes, the top intent is switch on, the device is TV, and it is essentially 100% sure that the TV is nothing but a device. 17:58 Now how does this model know this? Well, that's the smartness of your language service. Based on this, you can identify what the intent of your user actually is. Remember one thing: identifying the intent of the user is the goal of a conversational language understanding model. Once you have identified the intent, what you do next, like how exactly you turn on the TV or the lights, is not its purpose; that's not something you do inside the language service. The language service just helps you identify the intent. After that, your developers may have to write some additional IoT-specific code that actually turns on the light or the TV.
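To make that division of labour concrete, here is a hedged sketch of the kind of glue code your own developers might write after the language service has done its part: take the top intent and the device entity it returned and route them to your own (here entirely hypothetical) device-control functions.

```python
# Hypothetical glue code: the language service only tells us the intent and
# the device; acting on it (IoT calls, smart plugs, etc.) is our own code.
def turn_on(device):
    print(f"(pretending to) switch {device} ON")   # replace with real IoT call

def turn_off(device):
    print(f"(pretending to) switch {device} OFF")  # replace with real IoT call

handlers = {"switch on": turn_on, "switch off": turn_off}

def handle_prediction(top_intent, device):
    handler = handlers.get(top_intent)
    if handler is None:
        print("Sorry, I didn't understand that.")
    else:
        handler(device)

handle_prediction("switch on", "fan")    # -> (pretending to) switch fan ON
handle_prediction("switch off", "light") # -> (pretending to) switch light OFF
```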
18:43 In the same way, let me type "switch off my fan", a statement that doesn't match any utterance exactly, and run the test. You will see that with this kind of statement the top intent is still switch off, but the confidence is not 100%; it's a little lower than that, because the model is not completely sure this is an exact match. That is exactly why you have to add more examples, more utterances, when you create any intent. The more data you provide, the more confident your model will be when deciding on the intent and the entities. You can see right now that for the entities it is very sure, but for the intent it is not; it's not 100%. And that accuracy is very important.
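In practice you would not act blindly on a low-confidence prediction; a common pattern is to compare the confidence score against a threshold and ask the user to rephrase when it falls below it. A minimal sketch, where the 0.8 threshold is just an example value and not an Azure recommendation:

```python
# Illustrative confidence check; 0.8 is an arbitrary example threshold.
def decide(top_intent, confidence, threshold=0.8):
    if confidence >= threshold:
        return f"Acting on intent '{top_intent}' (confidence {confidence:.2f})"
    return "I'm not sure what you meant. Could you rephrase that?"

print(decide("switch off", 0.93))
print(decide("switch off", 0.61))
```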
19:38 So please don't take it for granted: add more data, because the more data you have, the better your model will be. With this, I think that's it for the day. I hope you liked this video. Thank you so much. This is your friend Maruti, I'll see you tomorrow, and happy learning to all of you.
