Creating a Cognitive Search Service
Enhance Cognitive Search Solution
This course focuses on the skills necessary to implement a knowledge-mining solution using Azure Cognitive Search. It walks through how to create a Cognitive Search solution and how to set up the process for importing data. Once the data sources are set up properly, you'll learn how to create a search index and then how to configure it to provide the best results possible.
- Create a Cognitive Search solution
- Import from data sources
- Create, configure, and test indexes
- Configure AutoComplete and AutoSuggest
- Improve results based on relevance
- Implement synonyms
- Developers who want to include full-text search in their applications
- Data engineers focused on providing better accessibility to organizational data
- AI engineers who combine AI with search functionality in their solutions
To get the most out of this course, you should:
- Have a strong understanding of data sources and how data will be consumed by users of a Cognitive Search solution
- Be able to use REST-based APIs and SDKs to build knowledge-mining solutions on Azure
In this video, we're going to continue talking about how to enrich our Cognitive Search solution. In this case, we're going to focus on how to improve the results we get from our search queries by making some changes related to relevance. There are three primary ways we can make relevance improvements to our index or to our query process in order to provide better results.
The first is similarity ranking, where you modify the back-end algorithm that your search service uses. The second is scoring profiles, which we saw briefly when we created our index, as it was available in the Azure portal. And the last is semantic ranking, which is a preview feature that we'll cover in more detail in this video.
First, with respect to similarity ranking, there are two algorithms that can be used. The first is the classical algorithm, which was used by all search services created before January 15th of 2020. The second is Okapi BM25, which has been in use since that date and is currently the default.
So if you create a search service in Azure today and create an index, it will use the BM25 algorithm. However, if you created your search service before January 15th of 2020, you would be using the classical algorithm. BM25 is the default because it produces search rankings that align better with user expectations, so switching to it can improve your relevance.
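For reference, the BM25 relevance score is commonly written as follows. This formula comes from the published Okapi BM25 literature rather than from the course material itself:

```latex
\text{score}(D, Q) = \sum_{i=1}^{n} \mathrm{IDF}(q_i) \cdot
  \frac{f(q_i, D)\,(k_1 + 1)}
       {f(q_i, D) + k_1 \left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)}
```

Here f(q_i, D) is the frequency of query term q_i in document D, |D| is the document's length, avgdl is the average document length across the index, and k1 and b are tunable constants. The length normalization driven by b is a key reason BM25 rankings tend to align better with user expectations than the classical algorithm.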
Now, how exactly would you go about doing that if you had in fact created a search service before January 15th? Here's an example where you modify the index itself, because this is not something that can be done in the Azure portal. You have to do it using one of the SDKs or by making a direct REST call. In this case, we're making an update to our index.
We have to pass in the name of the index, the fields of the index, and a special attribute called similarity, where we specify that the similarity type is the BM25 algorithm. Once you do that, BM25 will be used whenever a query is passed in, which should improve relevance moving forward. Again, this is only necessary for search services created before January 15th of 2020.
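As a rough sketch, here's what that index update could look like when built as a REST request body. The service name, index name, and field list are placeholders for illustration; the similarity attribute uses the `@odata.type` value documented for the Cognitive Search REST API:

```python
import json

# Hypothetical service and index names for illustration only.
SERVICE = "my-search-service"
INDEX = "hotels"
API_VERSION = "2020-06-30"  # assumed API version

# An index update must include the full index definition;
# a shortened example field set is shown here.
index_definition = {
    "name": INDEX,
    "fields": [
        {"name": "hotelId", "type": "Edm.String", "key": True},
        {"name": "hotelName", "type": "Edm.String", "searchable": True},
    ],
    # The similarity attribute selects the ranking algorithm.
    "similarity": {"@odata.type": "#Microsoft.Azure.Search.BM25Similarity"},
}

url = (f"https://{SERVICE}.search.windows.net/indexes/{INDEX}"
       f"?api-version={API_VERSION}")

print(url)
print(json.dumps(index_definition, indent=2))
# To actually apply the change, you would PUT this body to the URL above
# with Content-Type: application/json and an api-key header holding an
# admin key for your service.
```

Note that only the similarity setting is new here; the rest of the body is the existing index definition, which is why the name and fields have to be passed along with it.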
What other relevance options are there? The next one is scoring profiles, which I touched on very briefly when we created our index. Scoring computes a search score for each item in a rank-ordered result set, based on the search term being found in the fields of the indexed documents. What exactly does that mean? It means that if I search for the word Microsoft, the service will look for Microsoft as a term or a piece of text in any of the fields I've chosen, and then rank the documents based on how closely they match my term.
Each item in a search result is assigned a score and ranked highest to lowest; that's the default behavior. Scoring profiles allow you to customize how that scoring is done. For example, you can give specific fields a higher weighting, so that if your term is found in those fields, those documents rank higher than if the term had been found in another field. This is what a scoring profile looks like as JSON.
There are two different mechanisms within a scoring profile. The first is standard weighting, where you specify that a particular field carries more weight and will increase the score. The second is using a function. Functions require quite a bit more depth to discuss, and you can take a look at them in the documentation. At the top of this example, we have a scoring profile called geo that weights the hotel name field at five, boosting the score when terms are found in that field.
The bottom part of the JSON document contains a distance-based function that asks, within a boosting distance of 10, how close the match is to a reference point, specifically using the location field.
Lastly, semantic ranking. This is a preview feature, but I fully expect it will go GA soon. It's an extension of the query pipeline that improves precision by re-ranking results using a completely different kind of algorithm, one trained to capture semantic meaning rather than purely matching the text.
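Putting those two pieces together, a scoring profile like the one described could be sketched as follows. The field names hotelName and location are assumptions based on the example index discussed in the video, and the structure follows the scoring-profile schema in the Cognitive Search documentation:

```python
import json

# A sketch of the "geo" scoring profile from the example.
scoring_profile = {
    "name": "geo",
    # Standard weighting: matches in hotelName contribute
    # five times more to the score than unweighted fields.
    "text": {
        "weights": {"hotelName": 5}
    },
    # A distance function that boosts documents whose location field
    # falls within the boosting distance (in km) of a reference point
    # supplied at query time via the currentLocation parameter.
    "functions": [
        {
            "type": "distance",
            "boost": 5,
            "fieldName": "location",
            "interpolation": "logarithmic",
            "distance": {
                "referencePointParameter": "currentLocation",
                "boostingDistance": 10
            }
        }
    ]
}

# Scoring profiles live inside the index definition under scoringProfiles.
print(json.dumps({"scoringProfiles": [scoring_profile]}, indent=2))
```

A query would then opt in to this profile by passing its name (for example, scoringProfile=geo) along with the reference-point parameter.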
Now, what exactly does that mean? It means the service will look for documents that not only contain the term you're searching for, but also contain terms that are contextually relevant to it. It takes those contextual meanings into account and then re-ranks the scoring results to produce results of a more semantic nature. The key thing to keep in mind is that it is resource- and time-intensive, because it essentially has to run a second process before returning the results. Here is that process, which begins with a pre-processing step.
You start with the similarity-ranked results, just as if you had performed a standard query. You then reduce the number of results, usually to the top 50, to make the semantic processing run faster. Those results are condensed into a single string, which also speeds up the algorithm, and any long field values are trimmed out of that string. This happens because the semantic algorithm is not trying to find substrings of your search term but values that contextually match, so it does not need an entire long field value; much like the reduction to a single string, trimming improves performance. The condensed results are then passed through machine reading comprehension and language representation models to produce the new, semantically ranked results.
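From the caller's side, opting in to this pipeline is a change to the query request rather than to the index. The sketch below shows what a semantic query body could look like against the preview REST API; since the feature is in preview, the parameter names and the API version shown here are assumptions based on the preview documentation and may change before GA:

```python
import json

# A sketch of a semantic query request body (preview feature).
semantic_query = {
    "search": "capital",
    "queryType": "semantic",            # opt in to semantic re-ranking
    "queryLanguage": "en-us",           # required for semantic queries
    "searchFields": "hotelName,description",  # assumed example fields
    "top": 10,
}

print(json.dumps(semantic_query, indent=2))
# This body would be POSTed to
#   https://<service>.search.windows.net/indexes/<index>/docs/search
# with a preview api-version query parameter and an api-key header.
```

Everything before queryType behaves like a normal query; the semantic re-ranking described above happens after the initial similarity-ranked results come back.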
Now, let me show you an example of what this looks like. Let's say we're searching on the word capital. In standard similarity ranking, the service always looks for the text capital in the fields it is searching against. In a semantic context, it also looks for words that are relevant to the word capital. So it could find a number of items related to capital as a location: provinces, states, buildings, countries, because you might be looking for the capital of something. But capital can also be relevant to law enforcement, where we're talking about crime and punishment, because of capital crimes.
It can also reference the seat of government, and from there tax, money, investments, capital gains, and finance. I hope this shows you the major difference between semantic relevance and similarity relevance. It's the difference between looking for the literal text versus looking for contextual references to the term you're searching for, and only you and the business owners of your applications can determine which type of relevance is most important.
Brian has been working in the Cloud space for more than a decade as both a Cloud Architect and Cloud Engineer. He has experience building Application Development, Infrastructure, and AI-based architectures using many different OSS and Non-OSS based technologies. In addition to his work at Cloud Academy, he is always trying to educate customers about how to get started in the cloud with his many blogs and videos. He is currently working as a Lead Azure Engineer in the Public Sector space.