Ok so you could argue that I need to get out more… but I was excited to notice yesterday that a new feature has sneaked into the Dialogflow console: the Mega Agent. It's the ability to set an agent's type to mega agent so that you can combine multiple agents into one single agent.
So why is this so important? At The Bot Forge, some of our Dialogflow agents can have thousands of intents, particularly if they are providing an information service for a knowledge base. Unfortunately, the knowledge base functionality can be limiting, as discussed in my post Dialogflow Knowledge Connectors, so it's often necessary to create one intent per FAQ to get the required accuracy and control. This can quickly use up an agent's 2,000-intent limit.
We recently had to look at creating our own version of a mega agent for a website chatbot implementation. It would serve as a gatekeeper for initial enquiries so that we could hand the conversation over to a specific chatbot overseeing a specific knowledge domain. Not really ideal, and it involved extra middleware complexity, particularly as we were planning to pass some sort of context between all the agents.
There are some caveats: it's still one GCP project, and there is a maximum of 10 sub-agents per mega agent.
A Quick Look at Mega Agents
It's also important to remember this feature is in beta! You can read more about setting up a new Mega Agent here. At the time of writing, the link on the add agent page is incorrect. I took a really quick look at the new mega agent functionality.
Adding a mega agent
Adding a Mega Agent is pretty straightforward: when you add a new agent, you just select the switch:
Your mega agents are then listed in the agent list:
Adding a sub-agent
Once the agent is selected then a Sub Agent button is enabled:
I had already created a test agent to use as my sub-agent, so after selecting the Sub Agent button I connected it.
When adding sub-agents you can select an environment and choose whether to include or exclude the knowledge base. There is also a handy link to the sub-agent:
My test agent was a simple default agent with one added intent, Does_mega_agent_work, with one training phrase: "does mega agent work".
Testing it out
So far so good. Just to recap, I have created a mega agent and another agent to act as my sub-agent. So now for a test drive of my Mega Agent in the Dialogflow simulator.
Unfortunately, I didn’t get the result I hoped for:
This was obviously an IAM permissions issue, so I figured it was probably something I had not done. I went back to the information page and re-read the section "Set up roles".
Basically, to interact with a mega agent in the Dialogflow simulator, the service account that is linked to your mega agent in the Dialogflow Console needs a role with detect intent access for all sub-agents. To achieve this I went to the IAM permissions page for the sub-agent and added the mega agent’s service account email address as a member of the project with a role of Dialogflow API Client.
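As a rough sketch, the same IAM change can be made from the command line. The project ID and service-account email below are placeholders you would substitute for your own; the "Dialogflow API Client" role corresponds to the role ID roles/dialogflow.client.

```shell
# Placeholders: use your sub-agent's project ID and the mega agent's
# service-account email from the Dialogflow console settings page.
gcloud projects add-iam-policy-binding my-sub-agent-project \
  --member="serviceAccount:dialogflow@my-mega-agent-project.iam.gserviceaccount.com" \
  --role="roles/dialogflow.client"
```
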
Going back to the simulator and trying "does mega agent work" again resulted in the correct response from the sub-agent!
Where to go from here with mega agents
For me, this is a major step for chatbots with large numbers of intents (more than 2,000), or where different teams need to manage a particular knowledge area for one chatbot subject, use case or topic area.
This post has really only taken a quick view of the new Dialogflow mega agent functionality. In a later post, I want to investigate leveraging contexts between agents and use a more complex example.
There are still some areas which need work, though. The biggest one which springs to mind is that the training pages area of the console needs to support the concept of sub-agents so that utterances can be assigned to sub-agent intents. It's still just a beta feature, so hopefully there's more to come!
In this post, I’m going to look at the new Knowledge Connectors feature in Google Dialogflow. As I look at the features in more detail I’m assuming you understand the more common Dialogflow terms and features – agents, intents & entities.
It’s also important to remember this feature is in beta.
We've been working on chatbot projects for 2 years now, and a large number of our chatbot projects have shared a similar requirement: the ability to answer a large number of questions on a particular subject. This may be to answer technical questions about a product offering or to offer information for a particular service.
Often the information related to these types of questions is held on our chatbot customers' own websites as FAQ pages, or in specific PDFs or unstructured text documents. These types of knowledge bases can often hold large amounts of information, so technically they can provide answers to thousands of chatbot questions.
The challenge for a successful chatbot is utilising this often unstructured information to understand a question and provide the correct answer. To meet this challenge we can look at 2 approaches: the traditional one, and using the new Dialogflow Knowledge Connectors.
Stepping back a bit, it's important to briefly go over the traditional approach to creating chatbot conversational ability. There are a number of different chatbot frameworks out there, such as Google Dialogflow, IBM Watson, Microsoft Bot Framework, Rasa etc, and they all largely use the same concepts. A user submits a voice or text query, and this utterance is matched to an intent and any entities extracted. The matched intent either provides a static response or relies on some form of application layer to perform the required action and provide the response to the user.
This approach can be easy. However, things can get complex and difficult to manage if the scope of intents is very large and/or the information is constantly being updated. If we want to support questions with knowledge base information, then each question needs to be created as an intent and the correct response formulated. This can lead to problems such as:
Growth of the intent classification model causing more incorrect classifications.
The effort required to keep adding training data to the model so that the accuracy of intent classification remains high. Fortunately, Dialogflow provides a training UI in the web console to help keep track of misclassified utterances, analyse them and add them to the training data; however, this does take time.
Creating and managing intents to support new information in document stores.
Enter Knowledge Connectors
Knowledge connectors are a beta feature released in 2019 to complement the traditional intent approach. When your agent doesn't match an incoming user query to an intent, you can configure it to look at the knowledge base(s) for a response.
The knowledge data source(s) can be a document (currently supported content types are text/csv, text/html, application/pdf and plain text) or a web URL which has been provided to the Dialogflow agent.
Using Knowledge Connectors
To be able to use knowledge connectors, you will need to click “Enable beta features and APIs” on your agent’s settings page.
It's also worth mentioning that knowledge connector settings are not currently included when exporting, importing, or restoring agents. I'm hoping this is something currently being put in place by the Dialogflow team.
Knowledge connectors can be configured for your agent either through the web console or using the client library, which is available in Java, Node.js and Python. You can also configure them from the command line.
To create a knowledge base from the web console, log in to Dialogflow and go to the Knowledge tab. The process is fairly straightforward: provide a knowledge base name, then add a document to the knowledge base.
After you’ve done that then you just need to add an intent and return the response. It’s also worth keeping in mind you can send all the usual response types and that means including rich responses which I think is pretty cool.
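For reference, the client-library route can be sketched in Python with the v2beta1 client. This is a sketch under assumptions: the project ID, display names and source URL are placeholders, and it requires the google-cloud-dialogflow package plus GCP credentials to run.

```python
# Sketch only: placeholders throughout, and valid GCP credentials
# are required for any of these calls to succeed.
from google.cloud import dialogflow_v2beta1 as dialogflow

# Create the knowledge base itself.
kb_client = dialogflow.KnowledgeBasesClient()
kb = kb_client.create_knowledge_base(
    parent="projects/my-project",  # placeholder project ID
    knowledge_base=dialogflow.KnowledgeBase(display_name="FAQ KB"),
)

# Add a document to it; document import is a long-running operation.
doc_client = dialogflow.DocumentsClient()
operation = doc_client.create_document(
    parent=kb.name,
    document=dialogflow.Document(
        display_name="FAQ page",
        mime_type="text/html",
        knowledge_types=[dialogflow.Document.KnowledgeType.FAQ],
        content_uri="https://example.com/faq",  # placeholder URL
    ),
)
operation.result()  # block until the import completes
```
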
Trying out knowledge connectors
Ok, so it's time to try out these wondrous new knowledge connectors. There are 2 different types of knowledge base document: FAQ and Extractive Question Answering. These choices govern what type of supported content can be used. There are also a number of caveats for each content type, which you can read more about here.
Based on these 2 document types, I looked at a couple of common use cases which we often encounter at The Bot Forge and which correlate well with the supported document types:
Chatbot FAQ functionality using an existing FAQ webpage in a fairly structured format to provide answers from.
Chatbot FAQ functionality using information in an unstructured format to provide answers from.
I carried out my tests using a blank Dialogflow agent with beta features enabled.
1 - An FAQ Knowledge Base (Knowledge Type: FAQ)
For my knowledge base I used the UCAS frequently asked questions webpage, supplying its URL as my data source. Dialogflow processes the URL, which is in the correct format, and creates a series of question/answer pairs which can be enabled or disabled in the console. Pretty neat!
So, giving this a spin, my first test was "how do I apply" and the result was spot on.
Different variations on the same question also returned a good result:
"im not sure how to apply"
"can you tell me about how I can apply"
Unfortunately, when I tried something a bit less obvious, I got an incorrect result as it matched the wrong intent:
"how do I submit my application"
In this case, it's matching the "How can I make a change to my application" intent with high confidence, but unfortunately it's the wrong intent. The problem is that we need to fine-tune the model and re-assign the training phrase (utterance) to the intended intent, and the limitation is that you can't fine-tune responses in the knowledge base. If you want more control, you will need to move this FAQ over to its own intent.
This problem is compounded by the fact that the training feature of the console just lists each response intent as "Default Fallback Intent", so it's hard to check which responses have been answered incorrectly. One way around this is to look in the History area of the console at the raw interaction log of each response.
One really useful feature is that you can convert a specific extracted FAQ from the knowledge document into an intent. Just click on View Detail in the document list, select the question and click the "Convert to intents" button. This creates a new intent and disables the current question/answer pair. So overall, pretty impressive: if you have a webpage or document of structured FAQs, you can use this to power an FAQ chatbot pretty effectively with some monitoring.
2 - A More Unstructured FAQ Knowledge Base (Knowledge Type: Extractive Question Answering)
In this use case, I wanted to try out the ability of the knowledge connectors to return answers from more unstructured data.
For my test, I used a standard drug leaflet (a PDF) covering Priorix, from www.medicines.org.uk. I created a new knowledge base, added a new document and made sure I selected the knowledge type "Extractive Question Answering". Once imported, the PDF is listed in the document list. My aim was to validate whether Dialogflow could extract some fairly simple answers from the document. Now for some testing:
"What is Priorix"
answer: "Priorix, powder and solvent for solution for injection in a pre-filled syringe Measles, Mumps and Rubella vaccine (live)"
Unfortunately, although the response had high confidence and match scores, it was actually an incorrect response. Ideally, the answer should have been:
“Priorix is a vaccine for use in children from 9 months up, adolescents and adults to protect them against illnesses caused by measles, mumps and rubella viruses.”
I tried another test:
"how is priorix given"
answer: "The other ingredients are: Powder: amino acids, lactose (anhydrous), mannitol, sorbitol"
Again this was an incorrect response. I would have expected the correct response to be: “How Priorix is given Priorix is injected under the skin or into the muscle, either in the upper arm or in the outer thigh.”
So unfortunately not great results in extracting answers from the PDF I used. It would be interesting to look at a selection of other types of documents and corpora.
Do Knowledge Connectors work?
Again, it's important to point out this is a beta feature. There are definitely challenges, and in some functional areas much more to be done with knowledge connectors. In conclusion, it's also important to recognise that I looked at 2 different use cases and knowledge base document types which provided very different results, so it's worth looking at each one separately.
Chatbot FAQ functionality using an existing FAQ webpage in a fairly structured format.
If you want to convert your FAQ page into a chatbot, or if you have a similarly structured document such as a PRFAQ for a product or service, then knowledge connectors work well.
Just supplying the URL of the FAQ page as a data source to the knowledge connectors is fantastic and provides fairly good results. However, it's worth keeping in mind there may still be match errors, so the history log is invaluable for checking for them. Thankfully, it's fairly easy to manage any question/answer pair which has been handled incorrectly by converting it to its own intent.
Chatbot FAQ using a document in an unstructured format.
I found my test results with this use case rather disappointing: the accuracy of the extracted answers was fairly poor for my test case, although different document sources may yield better results.
The extracted answers look more like a match based on keywords with some additional coverage, but the matching does not appear to consider the context in which the question is asked. Also, this type of knowledge connector does not provide the full control that intents offer in terms of context, priority of matching training phrases and so on, so there is no way of fixing bad responses. A feature allowing you to evaluate and train responses would be a great addition to the knowledge base, so hopefully that is in the Dialogflow team's pipeline.
Should I use Dialogflow Knowledge Connectors?
If you have FAQ information in a structured format, then knowledge connectors are worth a try, with some caveats.
If you have unstructured documents which you want your chatbot to use to extract answers to questions, then at the moment knowledge connectors are not a magic bullet. It's a big ask, but for me this is where the real value will lie, particularly if you want a chatbot to support large knowledge bases. Knowledge connectors are a beta feature, so hopefully they will improve as the technology advances.
We build a lot of different types of chatbots at The Bot Forge and deliver these to a variety of platforms such as Web, Facebook Messenger, Slack or WhatsApp. To create our chatbots we often use different AI platforms which offer more suitable features for a specific project. All the major cloud and open-source providers have adopted similar sets of features for their conversational AI platforms and provide good NLU (Natural Language Understanding). There are also some strong options for open-source, privately hosted systems.
Conversational AI Platform Features
We wanted to spend some time looking at some of the more popular AI platforms in a bit more depth in this series. To help look at each one we have focused on the following specific features:
API and UI
A conversational AI platform should provide user interface (UI) tools to plan conversational flow and help train and update the system.
As well as intents and entities, a context object allows the system to keep track of the context discussed within the conversation, other information about the user's situation, and where the conversation is up to. This is often the NLP feature which is vital in creating a complex conversation beyond a simple FAQ bot.
Looking at the current position of a conversation, the context and the user's last utterance with its intents and entities all come together as rules to manage the conversational flow. This can be challenging to create and manage, so a platform's tools, whether a flow engine, in code, or complemented by a visual tool, can provide advantages depending on the chatbot project itself. Other features such as slot-filling (ensuring that all the entities for an intent are present, and prompting the user for any that are missing) can be important.
Whilst most platforms fall into this category some systems use machine learning to learn from test conversational data and then create a probabilistic model to control flow. These systems rely on large datasets.
Pre-built channel integrations
Having a conversational platform that supports your target channel out of the box can substantially speed up delivery of a chatbot solution and increase your flexibility in using the same conversational engine for a different integration. This is one of the reasons we really like Dialogflow's tooling.
Chatbot Content Types
Whilst the focus of a conversational AI platform is understanding pure text, messaging systems and web interfaces often involve other content, such as buttons, images, emojis, URLs and voice input/output. The ability of a platform to support these features is important to create a rich user experience and help manage the conversational flow.
Bot responses can be enhanced by integrating information from the user with information from internal or external web services. We use this ability a lot in creating our chatbots and, in our opinion, it's one of the most powerful features of a chatbot solution. With this in mind, the ability to configure calls to external services from within a conversation and use the responses to manage conversational flow is important in building chatbot conversations.
Pre Trained Intents and Entities
Instead of creating entity types such as dates, places or currencies for each project, some systems provide these pre-trained to deal with complex variations. In the same way, common user intents and utterances such as small talk are offered pre-trained by some platforms.
Analytics and Logs
The key to creating a successful chatbot is constant training and monitoring. To aid in continuously improving the system once launched, the conversational tools should provide a dashboard of user conversations, showing stats for responses, user interactions and other metrics. Export of these logs is also useful for importing into other systems. Other important AI features enable easily training missed intents, catching bad sentiment and monitoring flow.
Costs
These are the costs for the cloud hosting and cloud NLU solutions. This is an important aspect to consider, particularly for large-scale enterprise chatbots handling large volumes of traffic, where NLU costs can reach £1,000s a month.
Many providers offer a free tier for their AI platform solutions. A paid-for tier will then normally offer enhanced versions of the service with enterprise-focused features and support for greater volume and performance. Costs tend to be charged in one of 3 ways: per API call, per conversation or daily active user, or per monthly active user (normally subscriptions are in tiers). We try to look at publicly published costs for the paid-for plans suitable for enterprise use in a shared public cloud environment.
Keeping all these feature sets in mind we hope to look at the following AI platforms over the coming posts.
Please get in touch if you feel we should look at a platform which we have missed!
Our first AI platform blog post will be coming soon!
Conversational AI technology is not new, but advances in the technology have driven major growth in the industry and in what can be achieved in solving business problems across many types of industry.
We talk about Conversational AI a lot on our website and blog, after all this technology is at the core of what we do at The Bot Forge. You may well have encountered some of the different terminology used. But what do developers and technologists really mean when they use these terms? Having a simple understanding of some of the more frequently used terms can be useful when thinking and talking about your chatbot or voice assistant strategy. This conversational AI terminology cheatsheet aims to help you understand; no technical knowledge required!
Algorithm
An algorithm is a formula for completing a task. Wikipedia states that an algorithm "is a step-by-step procedure for calculations". Algorithms are used for calculating, automated processing and data processing, and provide the foundations for artificial intelligence technology.
Artificial Neural Network
Artificial Neural Networks or ANN are artificial replicas of the biological networks in our brain and are a type of machine learning. Although nowhere near as powerful as our own brains they can still perform complex tasks such as playing chess, for example AlphaZero, the game playing AI created by Google.
Artificial Intelligence
AI research and development aims to enable computers to make decisions and solve problems. The term actually describes a field of computer science and is used for any part of AI technology, of which there are 3 main distinctions.
Big Data
Big data describes the large volume of data, both structured and unstructured, that floods through a business and its processes on a day-to-day basis. In the context of AI, big data is the fuel which is processed to provide inputs for surfacing patterns and making predictions.
Chatbot
I think we have mentioned these once or twice! A chatbot is a conversational interface powered by AI, and specifically NLP. Chatbots can be text-based, living in apps such as Facebook Messenger, or their interface can use voice-enabled technology such as Amazon Alexa.
Cognitive Computing
Cognitive computing mimics the way the human brain thinks by making use of machine learning techniques. As researchers move closer towards transformative artificial intelligence, cognitive computing will become increasingly relevant.
Conversational Design/Conversational Designer
Whilst not a technical term its a relatively new role which has grown to being a vital one with the rise in the popularity of conversational experiences. It’s important to understand what this new breed of skilled professional brings to a chatbot project and why they are so important. Conversation design is the art of teaching computers to communicate the way humans do. It’s an area that requires knowledge of UX design, psychology, audio design, linguistics, and copywriting. All of that put together helps chatbot designers create natural conversations that guarantee a good user experience.
Deep Learning
Also known as deep neural networks, deep learning uses algorithms to understand data and datasets. Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. Deep learning techniques have become popular in solving traditional natural language processing problems like sentiment analysis.
Entity and Entity Extraction
Entities are also sometimes referred to as slots. An entity is used for extracting parameter values from natural language inputs. Any important data you want to get from a user’s request will have a corresponding entity. Entity extraction techniques are used to identify and extract different entities. This can be regex extraction, Dictionary extraction, complex pattern-based extraction or statistical extraction. For example, if asked for your favourite colour you would reply “my favourite colour is red”. Dictionary extraction would be used to extract the red for the colour entity. What that means in the real world is types of product, locations, model numbers, parts numbers, courses etc: basically anything related to your business which needs to be understood and extracted from the conversation.
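As a toy illustration of dictionary extraction, the sketch below matches tokens in an utterance against a small hand-built dictionary for a "colour" entity (the colour list and function name are invented for the example):

```python
import re

# A tiny hand-built dictionary for a hypothetical "colour" entity.
COLOURS = {"red", "blue", "green", "yellow"}

def extract_colour(utterance: str):
    """Return the first colour-entity value found in the utterance, or None."""
    for token in re.findall(r"[a-z]+", utterance.lower()):
        if token in COLOURS:
            return token
    return None

print(extract_colour("My favourite colour is red"))  # -> red
```

Real platforms combine several such strategies (regex, dictionary and statistical extraction) and handle synonyms and inflections, but the core idea is the same: pull structured parameter values out of free text.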
Intelligent Personal Assistants
This term is often used to describe voice-activated assistants which perform tasks for us such as Amazon Alexa, Google Assistant, Siri etc instead of text-based chatbots.
Intent
An intent represents a mapping between what a user says and what action should be taken by your chatbot. A good rule of thumb is to name an intent after the action completed, for example FindProductInformation, ReportHardWareProblem or FundraisingEnquiry.
Machine Learning
Machine learning, or ML for short, is probably something you use every day, in Google search for example, or Facebook's image recognition. ML allows software to be more accurate in predicting an outcome without being explicitly programmed. Machine learning algorithms take input data and use statistical analysis to predict an outcome within a given range. Machine learning methods include pattern recognition, natural language processing and data mining.
Natural Language Processing
Natural language processing (NLP) is broadly defined as the automatic manipulation of natural language, like speech and text, by software. NLP is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics to fill the gap between human communication and computer understanding.
Natural Language Understanding
A subfield of NLP called natural language understanding (NLU) has begun to rise in popularity because of its potential in cognitive and AI applications. NLU goes beyond the structural understanding of language to interpret intent, resolve context and word ambiguity, and even generate well-formed human language on its own.
NLU algorithms tackle the extremely complex problem of semantic interpretation: understanding the intended meaning of spoken or written language. Advances in NLU are enabling us to create more natural conversations.
Sentiment Analysis
Sentiment analysis is the process of determining whether a piece of writing is positive, negative or neutral. More advanced analysis would look at emotional states such as "angry", "sad", and "happy".
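A minimal lexicon-based sketch of the idea (the word lists here are invented for illustration; production systems use trained models rather than hand-written lexicons):

```python
# Toy sentiment scorer: counts words from small invented lexicons.
POSITIVE = {"good", "great", "happy", "love"}
NEGATIVE = {"bad", "awful", "sad", "angry"}

def sentiment(text: str) -> str:
    """Classify text as positive, negative or neutral by word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # -> positive
```
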
Utterance
An utterance is anything the user says via text or speech. For example, if a user types "what is my favourite colour", the entire sentence is the utterance.
Conversational IVR
Conversational IVR is a software system which takes voice commands from customers, allowing them to interact with IVR systems over telephony channels.
Whereas traditional IVR systems used speech recognition to handle simple voice commands such as "yes" or "no", conversational IVR allows people to communicate their inquiries in more complete phrases. Callers can describe questions or concerns in their own words, which are then matched to an intent by natural language understanding.
We hope you have found this conversational AI terminology cheat-sheet helpful. If you want to talk about your chatbot project, contact us at The Bot Forge.
Comment if you think I've missed any terms which should be on the cheat sheet!