CHATBOT AGENCY BLOG
Here you can discover everything about conversational UI and chatbots: best practices, ideas and AI related news brought to you by the Bot Forge team!
Ok so you could argue that I need to get out more… but I was excited to notice yesterday that a new feature has sneaked into the Dialogflow console: the concept of a Mega Agent. Setting an agent's type to mega agent lets you combine multiple agents into one single agent.
So why is this so important? At The Bot Forge, some of our Dialogflow agents can have thousands of intents, particularly if they are providing an information service for a knowledge base. Unfortunately, the knowledge base functionality can be limiting, as discussed in my post Dialogflow Knowledge Connectors, so it's often necessary to create one intent per FAQ to get the required accuracy and control. This can quickly use up an agent's 2,000-intent limit.
We recently had to look at creating our own version of a mega agent for a website chatbot implementation: a gatekeeper for initial enquiries which could hand a conversation over to a specific chatbot overseeing a specific knowledge domain. Not really ideal, and it involved more middleware complexity, particularly as we were planning to handle some sort of context between all the agents.
There are some caveats: it's still one GCP project, and there is a maximum of 10 sub-agents per mega agent.
It’s also important to remember this feature is in beta! You can read more about setting up the new Mega Agent here. At the time of writing the link on the add agent page is incorrect.
I took a really quick look at the new mega agent functionality.
Adding a Mega Agent is pretty straightforward: when you add a new agent, you just select the switch:
Your mega agents are then listed in the agent list:
Once the agent is selected then a Sub Agent button is enabled:
I had already created a test agent to use as my sub-agent, so after selecting the Sub Agents button I connected it.
When adding sub-agents you can select an environment and choose whether to include or exclude the knowledge base. There is also a handy link to the sub-agent:
My test agent was a simple default agent with one added intent:
Does_mega_agent_work with one training phrase “does mega agent work”
So far so good. Just to recap: I have created a mega agent and another agent to act as my sub-agent. Now for a test drive of my Mega Agent in the Dialogflow simulator.
Unfortunately, I didn’t get the result I hoped for:
Basically, to interact with a mega agent in the Dialogflow simulator, the service account that is linked to your mega agent in the Dialogflow Console needs a role with detect intent access for all sub-agents. To achieve this I went to the IAM permissions page for the sub-agent and added the mega agent’s service account email address as a member of the project with a role of Dialogflow API Client.
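The same grant can be made from the command line. Below is a sketch of the equivalent gcloud invocation; both project IDs and the service account email are placeholders you would replace with your own values (the role ID for "Dialogflow API Client" is roles/dialogflow.client):

```shell
# Grant the mega agent's service account detect-intent access on the
# sub-agent's project by adding it as a Dialogflow API Client.
# SUB_AGENT_PROJECT_ID, MEGA_AGENT_PROJECT_ID and the service account
# email are placeholders.
gcloud projects add-iam-policy-binding SUB_AGENT_PROJECT_ID \
  --member="serviceAccount:dialogflow-service@MEGA_AGENT_PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/dialogflow.client"
```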
Going back to the simulator and trying out does mega agent again resulted in the correct response from the sub-agent!
For me, this is a major step for chatbots with large numbers of intents (more than the 2,000 limit), or where different teams need to manage a particular knowledge area for one chatbot subject, use-case or topic area.
This post has really only taken a quick view of the new Dialogflow mega agent functionality. In a later post, I want to investigate leveraging contexts between agents and use a more complex example.
There are still some areas which need work though. The biggest one which springs to mind is that the training pages area of the console for a mega agent needs to support the concept of sub-agents, so that incoming phrases can be assigned to sub-agent intents. It's still just a beta feature, so hopefully there is more to come!
I’m going to look at the challenges of creating a chatbot which can answer questions about its specific domain effectively. In particular, I’m going to look at the challenges, and possible solutions, in creating a chatbot with reasonable conversational ability at its initial implementation. Every chatbot project is different, but clients often come to us with a large knowledge base which they want a chatbot to support from its release, but with very little training data.
We are going to concentrate on a Dialogflow project to look at some examples; however, the challenges and solutions are similar for all the best-known NLP engines: Watson, Rasa, LUIS, etc.
One of the key problems with the current generation of chatbots is that they need large amounts of training data.
If you want your chatbot to understand a specific intention, you need to provide it with a large number of phrases that convey that intention. In a Dialogflow agent these are called training phrases, and Dialogflow recommends at least 10 training phrases for each intent.
Depending on the chatbot’s field of application, thousands of enquiries in a specific subject area may be required to make it ready for use, with each of these lines of enquiry needing multiple training phrases.
AI-powered chatbots learn from each new enquiry: the more requests a chatbot has processed, the better trained it is. The NLU (Natural Language Understanding) is continually improved, and the bot’s detection patterns are refined. Unfortunately, a large number of additional queries are necessary to optimize the bot, and working towards a recognition rate approaching 90-100% often means a long bedding-in process of several months.
This is the crux of the problem: large amounts of training data are required to meet the challenges described above.
To date, these large training corpora have had to be manually generated. This can be a time-consuming job with an associated increase in project cost. One of the main issues we have faced is that clients often want to see quick results from a chatbot implementation. These projects are often use cases providing information about a wide-ranging domain, and may not have many chat transcripts or emails to work with to create the initial training model. In these cases there is often not enough training data, so it takes time to get decent, accurate match rates.
The Bot Forge offers an artificial training data service to automate training phrase creation for your specific domain or chatbot use-case. Our process will automatically generate intent variation datasets that cover all of the different ways that users from different demographic groups might call the same intent which can be used as the base training for your chatbot.
Multi NLP platform support
Our training data is not restricted solely to Dialogflow agents, the output data can be formatted for the following agent types:
We provide training datasets in 100+ languages
We offer our synthetic training data creation services to our chatbot clients. However, if you already have your own chatbot project and just want to boost its conversational ability we can provide synthetic training data to meet your needs.
We wanted to test the effectiveness of our synthetic training data in a Dialogflow chatbot agent by varying the number of training phrases (utterances) per intent.
We carried out three different tests (A, B and C) with 3 separate Dialogflow agents. Each agent had identical settings and 3 identical intents providing information about the topic of angel investors:
In the first test (A) the chatbot was trained with 2 hand-tagged training phrases (utterances) per intent. Test (B) had 10 training phrases from our own synthetic training data per intent and test (C) had between 25 and 60 training phrases per intent.
We tested each agent with 12 separate questions similar to but distinct from the ones in the training sets.
We didn’t carry out any training during testing once the chatbots were created.
We recorded the % of queries matched to the correct intent, the incorrect intent or no match and also the intent detection confidence 0.0 (completely uncertain) to 1.0 (completely certain) from the agent response.
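To keep the bookkeeping consistent, each agent's answers were scored the same way. A minimal sketch of that scoring logic is below; the helper name and result format are my own for illustration, not part of Dialogflow:

```python
def score_results(results):
    """Score a list of (expected_intent, matched_intent, confidence) tuples.

    matched_intent is None when the agent returned no match (fallback).
    Returns correct/incorrect/no-match percentages and the average
    intent detection confidence (0.0 completely uncertain, 1.0 certain).
    """
    total = len(results)
    correct = sum(1 for expected, matched, _ in results if matched == expected)
    no_match = sum(1 for _, matched, _ in results if matched is None)
    incorrect = total - correct - no_match
    return {
        "correct_pct": round(100 * correct / total),
        "incorrect_pct": round(100 * incorrect / total),
        "no_match_pct": round(100 * no_match / total),
        "avg_confidence": sum(conf for *_, conf in results) / total,
    }
```

Feeding in the 12 recorded (expected, matched, confidence) rows per agent produces the figures in the table below.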
| Test | % correct match | % incorrect match | % no match | Average intent detection confidence |
|---|---|---|---|---|
| Test A (2x utterances) | 50% | 42% | 8% | 0.6437837225 |
| Test B (10x utterances) | 91% | 9% | 0% | 0.7590197883 |
| Test C (25-60x utterances) | 100% | 0% | 0% | 0.856748325 |
Test A provided a 50% match rate. We observed a significant improvement in test B with the introduction of some of our synthetic training data: the match rate improved from 50% to 91%, whilst test C, with 25-60 training phrases per intent, achieved a match rate of 100%. The average intent detection confidence also grew with each test.
In summary, chatbots need a decent amount of training data to provide accurate results. If there is not enough training data, a chatbot’s accuracy suffers and it can take some time, whilst in use, to train it to acceptable performance levels. At the same time, it can be costly and time-consuming to create training data for a chatbot that needs to handle large numbers of intents.
Our synthetic training data creation service allows us to create big training sets with minimal effort, reducing initial costs in chatbot creation and improving the usability of a chatbot from the initial release stages. If you only have a limited number of training phrases per intent and have large numbers of intents, our service can generate the remaining variants needed to go from really poor results to a chatbot with much greater accuracy in its responses. We carried out these tests with Dialogflow, but our conclusions are relevant for ML-based bot platforms in general: our Artificial Training Data service can drastically improve the results of chatbot platforms that are highly dependent on training data.
I’ve looked at the benefits of using our training data in the early stages of a chatbot project. However, it’s important to note that the key to long-term success is to constantly monitor your chatbot and continue training so it gets smarter, either through ongoing manual training or by scheduling regular training cycles that incorporate new utterances and conversations from real users.
If you want to know more about our chatbot training data creation services, get in touch.
In this post, I’m going to look at the new Knowledge Connectors feature in Google Dialogflow. As I look at the features in more detail I’m assuming you understand the more common Dialogflow terms and features – agents, intents & entities.
It’s also important to remember this feature is in beta.
We’ve been working on chatbot projects for 2 years now, and a large number of them have shared a similar requirement: the ability to answer a large number of questions on a particular subject. This may be answering technical questions about a product offering, or providing information about a particular service.
Often the information related to these types of questions is held on our chatbot customers’ own websites as FAQ pages, or in specific PDFs or unstructured text documents. These knowledge bases can hold large amounts of information, so technically they can provide answers to thousands of chatbot questions.
The challenge for a successful chatbot is utilising this often unstructured information to understand a question and provide the correct answer. To meet this challenge we can look at 2 approaches; the traditional one and using the new Dialogflow Knowledge Connectors.
Stepping back a bit, it’s important to briefly go over the traditional approach to creating chatbot conversational ability. There are a number of different chatbot frameworks out there, such as Google Dialogflow, IBM Watson, Microsoft Bot Framework and Rasa, and they all largely use the same concepts. A user submits a voice or text query, and this utterance is matched to an intent, with any entities extracted. The matched intent either provides a static response or relies on some form of application layer to perform the required action and provide the response to the user.
This approach can be easy. However, things can get complex and difficult to manage if the scope of intents is very large and/or the information is constantly being updated. If we want to support questions with knowledge base information, then each question needs to be created as an intent and the correct response formulated. This can lead to problems such as:
Knowledge connectors are a beta feature released in 2019 to complement the traditional intent approach. When your agent doesn’t match an incoming user query to an intent then you can configure your agent to look at the knowledge base(s) for a response.
The knowledge datasource(s) can be a document (currently supported content types are text/csv, text/html, application/pdf and plain text) or a web URL which has been provided to the Dialogflow agent.
To be able to use knowledge connectors, you will need to click “Enable beta features and APIs” on your agent’s settings page.
It’s also worth mentioning that knowledge connector settings are not currently included when exporting, importing or restoring agents. I’m hoping this is something the Dialogflow team is currently putting in place.
Knowledge connectors can be configured for your agent either through the web console or using the client library, which is available in Java, Node.js and Python. You can also configure them from the command line.
To create a knowledge base from the web console, login to Dialogflow & then go to the knowledge tab. The process is fairly straightforward and involves providing a knowledge base name then adding a document to the knowledge base.
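For reference, the same steps can be done over the v2beta1 REST API. The sketch below just builds the JSON request bodies for the knowledge base and document create calls; the display names and the source URL are placeholder values of my own, not from the Dialogflow docs:

```python
import json

def knowledge_base_payload(display_name):
    """Body for POST /v2beta1/projects/{project-id}/knowledgeBases."""
    return {"displayName": display_name}

def faq_document_payload(display_name, source_url):
    """Body for POST .../knowledgeBases/{kb-id}/documents.

    An FAQ-type document sourced from a public web page.
    """
    return {
        "displayName": display_name,
        "mimeType": "text/html",
        "knowledgeTypes": ["FAQ"],
        "contentUri": source_url,
    }

if __name__ == "__main__":
    # Placeholder URL; in practice this would be your FAQ page.
    print(json.dumps(
        faq_document_payload("My FAQs", "https://example.com/faq"),
        indent=2))
```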
After you’ve done that then you just need to add an intent and return the response. It’s also worth keeping in mind you can send all the usual response types and that means including rich responses which I think is pretty cool.
Ok, so it’s time to try out these wondrous new knowledge connectors. There are 2 different types of knowledge base document: FAQ and Extractive Question Answering. This choice governs what type of supported content can be used. There are also a number of caveats for each content type, which you can read more about here.
Based on these 2 document types I looked at a couple of common use cases which we often encounter at The Bot Forge and correlate well with the document types supported:
I carried out my tests using a blank Dialogflow agent with beta features enabled.
For my knowledge base I used the UCAS frequently asked questions webpage, supplying the following URL as my data source. Dialogflow processes the URL, which is in the correct format, and creates a series of question/answer pairs which can be enabled or disabled in the console. Pretty neat!
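When querying, the knowledge base is passed in the detectIntent request's queryParams, and the response comes back with a ranked list of answers under queryResult.knowledgeAnswers. A stdlib-only sketch of building the request body and picking the top answer (the helper names are mine, for illustration):

```python
def detect_intent_body(text, knowledge_base_name, language_code="en"):
    """Body for POST /v2beta1/.../sessions/{session-id}:detectIntent.

    knowledge_base_name is the full resource name, e.g.
    'projects/<project-id>/knowledgeBases/<kb-id>'.
    """
    return {
        "queryInput": {"text": {"text": text, "languageCode": language_code}},
        "queryParams": {"knowledgeBaseNames": [knowledge_base_name]},
    }

def best_answer(knowledge_answers):
    """Pick the highest-confidence entry from queryResult.knowledgeAnswers."""
    answers = knowledge_answers.get("answers", [])
    return max(answers, key=lambda a: a.get("matchConfidence", 0.0), default=None)
```

The matchConfidenceLevel and matchConfidence values quoted in the tests below come straight out of those knowledgeAnswers entries.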
So, giving this a spin, my first test was “how do I apply” and the result was spot on:
matchConfidenceLevel: HIGH matchConfidence: 0.97326803
Whilst different variations on the same question also returned a good result.
"im not sure how to apply" matchConfidenceLevel: HIGH matchConfidence: 0.9685159 "can you tell me about how I can apply" matchConfidenceLevel: HIGH matchConfidence: 0.968346
Unfortunately, when I tried something a bit less obvious, I got an incorrect result, as it matched the wrong intent.
"how do I submit my application" matchConfidenceLevel: HIGH, matchConfidence: 0.9626459
In this case, it’s matching the “How can I make a change to my application” question with high confidence, but unfortunately it’s the wrong one. The problem here is that we need to fine-tune the model and re-assign the training phrase (utterance) to the intended intent, but in a knowledge base you can’t fine-tune responses. If you want more control, you will need to move this FAQ over to its own intent.
This problem is compounded by the fact that the training feature of the console just lists each response intent as “Default Fallback Intent”, so it’s hard to check which responses have been answered incorrectly. One way around this is to look in the History area of the console at the raw interaction log of each response.
One really useful feature is that you can convert a specific extracted FAQ from the knowledge document into an intent: just click on View Detail in the document list, select the question and click the “Convert to intents” button. This creates a new intent and disables the current question/answer pair. So overall, pretty impressive: if you have a webpage or document of structured FAQs, you can use this to power an FAQ chatbot pretty effectively, with some monitoring.
In this use case, I wanted to try out the ability of the knowledge connectors to return answers from more unstructured data.
Again, there are caveats about which data sources you can use; you can read more about this here.
For my test I used a standard drug leaflet in PDF format covering Priorix, from www.medicines.org.uk. I created a new knowledge base, added a new document and made sure I selected the knowledge type “Extractive Question Answering”. Once imported, the PDF is listed in the document list. My aim was to validate whether Dialogflow could extract some fairly simple answers from the document. Now for some testing:
"What is Priorix" matchConfidenceLevel: HIGH matchConfidence": 0.88257504 answer : "Priorix, powder and solvent for solution for injection in a pre-filled syringe Measles, Mumps and Rubella vaccine (live)"
Unfortunately, although the response had a high confidence and match score it was actually an incorrect response. Ideally, the answer should have been:
“Priorix is a vaccine for use in children from 9 months up, adolescents and adults to protect them against illnesses caused by measles, mumps and rubella viruses.”
I tried another test:
"how is priorix given" matchConfidenceLevel: HIGH, matchConfidence: 0.8826 answer: The other ingredients are: Powder: amino acids, lactose (anhydrous), mannitol, sorbitol
Again this was an incorrect response. I would have expected the correct response to be:
“How Priorix is given
Priorix is injected under the skin or into the muscle, either in the upper arm or in the outer thigh.”
So unfortunately not great results in extracting answers from the PDF I used. It would be interesting to look at a selection of other types of documents and corpora.
Again, it’s important to point out that this is a beta feature. There are definitely challenges, and in some functional areas much more to be done with knowledge connectors. It’s also important to recognise that I looked at 2 different use cases and knowledge base document types which produced very different results, so it’s worth looking at each one separately.
If you want to convert your FAQ page into a chatbot, or if you have a similarly structured document such as a PRFAQ for a product or service, then knowledge connectors work well.
Just supplying the URL of the FAQ page as a data source to the knowledge connectors is fantastic and provides fairly good results. However, it’s worth keeping in mind there may still be match errors so the history log is invaluable in checking for them. Thankfully it’s fairly easy to manage any question/answer pair which has been handled incorrectly by converting to its own intent.
I found my test results with this use case rather disappointing: the accuracy of the extracted answers was fairly poor for my test case, although you may get better results with different document sources.
The extracted answers look more like keyword matches with some additional coverage; they do not appear to consider the context in which the question is asked. Also, this type of knowledge connector does not provide the full control that intents offer in terms of context, matching priority of training phrases and so on, so there is no way of fixing bad responses. A feature for evaluating and training responses would be a great addition to the knowledge base, so hopefully that is in the Dialogflow team’s pipeline.
If you have some FAQ information in a structured format then Knowledge connectors are worth a try with some caveats.
If you have unstructured documents from which you want your chatbot to extract answers to questions, then at the moment knowledge connectors are not a magic bullet. It’s a big ask, but for me this is where the real value will lie, particularly if you want to support large knowledge bases with a chatbot. Knowledge connectors are an experimental feature, so hopefully they will improve as the technology advances.
Watson provides developer tools that make it easy to incorporate conversation, language and search into your applications, and gives you access to detailed developer resources to help you get started fast, including documentation and SDKs on GitHub.
There are several IBM Watson APIs available on the IBM Cloud. One of them is IBM Watson Assistant. Watson Assistant enables you to build apps that include natural language processing and structured conversation. The service provides an API which you can call from an app or website to hook into your chatbot.
Watson Assistant API can:
– Extract meaning from natural language
– Discover patterns in data sets
– Understand the “tone” of language
– Translate languages
– Convert text to speech and speech to text
– Perform text classification
– Build a virtual agent (chatbot)
Watson is more of an assistant: it knows when to seek an answer from the knowledge base, when to ask for clarity and when to hand you over to a human. Watson Assistant can work in any cloud, allowing businesses to bring AI to their data and apps wherever they are.
IBM Watson Assistant is marketed as a solution for companies of any size who want to build their voice or touch-enabled virtual assistant.
To create a chatbot using the IBM Watson API, you must first have an IBM Cloud (formerly Bluemix) account; a free Lite version is available. A chatbot is built using intents, entities and the Watson Developer Cloud, which your application calls to interact with the chatbot.
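To give a flavour of what interacting with the service looks like, the Watson Assistant v2 "message" call takes a small JSON body and returns replies under output.generic. The stdlib-only sketch below builds that body and pulls out the plain-text replies; the helper names are my own, and the assistant/session IDs that a real call needs are path parameters not shown here:

```python
def watson_message_body(text):
    """Request body for a Watson Assistant v2 message call."""
    return {"input": {"message_type": "text", "text": text}}

def reply_text(response):
    """Collect the plain-text replies from a v2 message response.

    Skips non-text items (options, images, etc.) in output.generic.
    """
    generic = response.get("output", {}).get("generic", [])
    return [g["text"] for g in generic if g.get("response_type") == "text"]
```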
When we compare IBM Watson with Dialogflow, the question arises: which is better?
If you need a competent artificial intelligence software product for your company, you must make time to examine a wide range of alternatives. Robust features aside, software which is simple and intuitive is always the better product.
In 2019, according to some market research, the user satisfaction level for IBM Watson was 99%, while Dialogflow’s was 96%. Both bot frameworks have their pros and cons, and both Dialogflow and Watson Assistant provide a UI tool to design conversation flow logic for complex dialogues.
Dialogflow arguably provides an easier and quicker way to create a custom conversational AI bot, while IBM Watson’s offerings target larger corporations and enterprise organisations. For those starting to learn how to build a chatbot, it may be better to begin with Dialogflow.
Watson Assistant is expensive compared to Dialogflow, while Dialogflow’s development interface could be better. Dialogflow’s website integration does not support buttons and links, whereas Watson Assistant’s web integration does.
Watson Assistant and Dialogflow integrate with a variety of other popular platforms and systems.
Watson is not a single thing: it is a collection of APIs that can be used to solve various challenges, and Watson Assistant is part of it. Many senior developers think there is nothing on the market today quite like Watson Assistant.
With the proper expectations and in the proper hands, Watson’s APIs can be used to do some really phenomenal stuff.
You can read more about Watson Assistant on the official IBM website: https://www.ibm.com/cloud/watson-assistant/
Dialogflow have extended the V1 API shutdown deadline to March 31st, 2020. https://cloud.google.com/dialogflow/docs/release-notes#November_14_2019
Winter is coming! (Any Game of Thrones fans will understand!) In October last year, we wrote about the news that Google would be dropping support for V1 of the Dialogflow REST API in October 2019. We’ve been building all our chatbots with V2 since last year; however, there are many companies who still have V1 Dialogflow agents which will need to be migrated. This blog post aims to help you carry out your migration successfully.
The amount of work needed will really depend on what features your Dialogflow agent is using and where it’s integrated. If you are using Dialogflow’s fulfillment webhook, inline editor, or any Dialogflow API, you’ll need to update your code, endpoints and/or fulfillment to be compatible with V2. However, if you are certain your existing agent doesn’t use the fulfillment webhook library, the Dialogflow API, or any integrations, then you will not need to make any major changes before selecting V2.
Due to authentication changes, the biggest impact will be for Dialogflow web agent implementations which are currently calling the REST API.
This post is split into 2 sections: a basic migration guide for agents not using the REST API, and a more advanced version covering what changes are needed to use the new REST API and to support the new authentication.
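To make the authentication change concrete, here is roughly what the same text query looks like against each version. The session ID, project ID and tokens are placeholders; the key difference is that V1 used a static client access token from the agent's settings page, while V2 requires a short-lived OAuth access token for a service account (obtained here via the gcloud CLI for illustration):

```shell
# V1: static client access token, api.dialogflow.com endpoint
curl "https://api.dialogflow.com/v1/query?v=20150910" \
  -H "Authorization: Bearer ${CLIENT_ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"query": "hello", "lang": "en", "sessionId": "12345"}'

# V2: OAuth access token from a service account, Google Cloud endpoint
curl "https://dialogflow.googleapis.com/v2/projects/${PROJECT_ID}/agent/sessions/12345:detectIntent" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"queryInput": {"text": {"text": "hello", "languageCode": "en"}}}'
```

Note the request and response JSON shapes also differ between versions, so any code parsing the reply needs updating too.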
You can see more details about upgrading from V1 to V2 in the official guide here.
Anyone who has already built website chatbots using the V1 API should start planning the migration sooner rather than later, and any new features should be added after the upgrade. The migration is potentially a non-trivial task, considering some chatbots have fairly complex code driving their fulfilment. If you have a live bot in production, our advice is to set up an upgrade chatbot as a copy of your existing bot project and work through the upgrade there. It is almost certain that changing to V2 will break existing fulfilment and API calls until they are updated. Once the upgrade is complete, re-testing all bot functionality is strongly advised before setting it live.
We would recommend everyone creating custom website chatbots to do so using the V2 API; all our new chatbots are built with it.
If you need assistance or advice with your own chatbot V2 upgrade, please get in touch; we are Dialogflow experts and would be happy to help!
0800 061 4082