Training data for chatbots.
I'm going to look at the challenges in creating a chatbot which can answer questions about its specific domain effectively. In particular, I'm going to look at the challenges, and possible solutions, in building a chatbot with reasonable conversational ability at its initial release. Every chatbot project is different, but clients often come to us with a large knowledge base which they want a chatbot to support from day one, yet with very little training data.
We are going to concentrate on a Dialogflow project for our examples; however, the challenges and solutions are similar for all of the most well-known NLP engines: Watson, Rasa, LUIS, etc.
One of the key problems in building modern chatbots is that they need large amounts of training data.
If you want your chatbot to understand a specific intention, you need to provide it with a large number of phrases that convey that intention. In a Dialogflow agent these are called training phrases (often referred to as utterances), and Dialogflow recommends at least 10 training phrases per intent.
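As a rough illustration (the intent name and phrases below are invented for this sketch, not taken from a real agent), a minimal intent definition with the recommended minimum of training phrases might look like:

```python
# Hypothetical intent definition: one intent with the minimum
# ~10 training phrases Dialogflow recommends per intent.
intent = {
    "displayName": "angel_investor.definition",
    "trainingPhrases": [
        "what is an angel investor",
        "define angel investor",
        "can you explain what an angel investor is",
        "angel investor meaning",
        "who counts as an angel investor",
        "tell me about angel investors",
        "what does angel investor mean",
        "explain the term angel investor",
        "what's an angel investor",
        "describe an angel investor",
    ],
}

# Meets the recommended minimum of 10 phrases for this intent.
assert len(intent["trainingPhrases"]) >= 10
```

Multiply this by dozens or hundreds of intents and the scale of the data-creation problem becomes clear.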
Depending on the chatbot's field of application, thousands of enquiries in a specific subject area can be required to make it ready for use, with each of these lines of enquiry needing multiple training phrases.
The training process of an AI-powered chatbot means that it learns from each new enquiry: the more requests a chatbot has processed, the better trained it is. Its NLU (Natural Language Understanding) is continually improved, and its detection patterns are refined. Unfortunately, a large number of additional queries are needed to optimise the bot; working towards a recognition rate approaching 90-100% often means a long bedding-in process of several months.
To date, these large training corpora have had to be generated manually. This is a time-consuming job with an associated increase in project cost. One of the main issues we have faced is that clients often want to see quick results from a chatbot implementation. These projects are often use cases providing information across a wide-ranging domain, and there may not be many chat transcripts or emails to work with when creating the initial training model. In these cases there is often not enough training data, so it takes time to achieve decent, accurate match rates.
The Bot Forge provides chatbot training data
The Bot Forge offers an artificial training data service to automate training phrase creation for your specific domain or chatbot use case. Our process automatically generates intent variation datasets that cover the different ways users from different demographic groups might express the same intent, which can be used as the base training data for your chatbot.
Multi NLP platform support
Our training data is not restricted solely to Dialogflow agents; the output data can be formatted for the following agent types:
- rasa: Rasa JSON format
- luis: LUIS JSON format
- witai: Wit.ai JSON format
- watson: Watson JSON format
- lex: Lex JSON format
- dialogflow: Dialogflow JSON format
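To give a flavour of what "formatting for a target platform" involves, here is a sketch that shapes generic (text, intent) pairs into Rasa's legacy JSON training-data layout. The example records are invented, and this is an illustration of the general idea rather than our production pipeline; the other targets (LUIS, Wit.ai, Watson, Lex, Dialogflow) differ only in the shape of the output document.

```python
import json

# Hypothetical generic examples: (text, intent) pairs as they might
# come out of a training-data generator.
examples = [
    ("what is an angel investor", "angel_investor.definition"),
    ("how do I find an angel investor", "angel_investor.find"),
]

# Shape the pairs into Rasa's legacy JSON training-data layout:
# a top-level "rasa_nlu_data" object holding "common_examples".
rasa_data = {
    "rasa_nlu_data": {
        "common_examples": [
            {"text": text, "intent": intent, "entities": []}
            for text, intent in examples
        ]
    }
}

print(json.dumps(rasa_data, indent=2))
```

Because the underlying dataset is platform-neutral, switching targets is purely a matter of swapping the output formatter.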
We provide training datasets in 100+ languages
We offer our synthetic training data creation services to our chatbot clients. However, if you already have your own chatbot project and just want to boost its conversational ability, we can provide synthetic training data to meet your needs.
Testing the Solution
We wanted to test the effectiveness of our synthetic training data in a Dialogflow chatbot agent by varying the number of training phrases (utterances) per intent.
Dialogflow test agents
We carried out three tests (A, B and C) with three separate Dialogflow agents. Each agent had identical settings and the same three intents, which provided information about the topic of angel investors.
In the first test (A), the chatbot was trained with 2 hand-written training phrases (utterances) per intent. Test B had 10 training phrases per intent from our synthetic training data, and Test C had between 25 and 60 training phrases per intent.
We tested each agent with 12 separate questions similar to but distinct from the ones in the training sets.
We didn't carry out any training during testing once the chatbots were created.
We recorded the percentage of queries matched to the correct intent, to an incorrect intent, or to no intent at all, along with the intent detection confidence, from 0.0 (completely uncertain) to 1.0 (completely certain), reported in each agent response.
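The scoring itself is straightforward. The sketch below shows how these metrics can be computed from per-query records; the record values here are invented for illustration and are not the actual test results.

```python
# Each record: the intent the agent matched (None for no match),
# the expected intent, and the agent's reported detection confidence.
# Values are invented for this sketch.
results = [
    {"matched": "angel_investor.definition",
     "expected": "angel_investor.definition", "confidence": 0.92},
    {"matched": "angel_investor.find",
     "expected": "angel_investor.find", "confidence": 0.81},
    {"matched": None,
     "expected": "angel_investor.criteria", "confidence": 0.0},
    {"matched": "angel_investor.find",
     "expected": "angel_investor.criteria", "confidence": 0.55},
]

# Tally correct matches, no-matches, and incorrect matches.
correct = sum(r["matched"] == r["expected"] for r in results)
no_match = sum(r["matched"] is None for r in results)
incorrect = len(results) - correct - no_match

match_rate = correct / len(results)
avg_confidence = sum(r["confidence"] for r in results) / len(results)

print(f"correct {match_rate:.0%}, incorrect {incorrect}, no match {no_match}")
print(f"average confidence {avg_confidence:.2f}")
```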
Overall test results

| | % correct match | % incorrect match | Average intent detection confidence |
| --- | --- | --- | --- |
| Test A (2x utterances) | 50% | n/a | n/a |
| Test B (10x utterances) | 91% | n/a | n/a |
| Test C (25-60x utterances) | 100% | n/a | n/a |
Test A provided a 50% correct match rate. We observed a significant improvement in Test B with the introduction of some of our synthetic training data, which raised the match rate to 91%, whilst Test C, with 25-60 training phrases per intent, achieved a match rate of 100%. The average intent detection confidence also grew with each step.
In summary, chatbots need a decent amount of training data to provide accurate results. If there is not enough training data, a chatbot's accuracy suffers and it can take some time of live use before it reaches acceptable performance levels. At the same time, it can be costly and time-consuming to create training data for a chatbot that needs to handle large numbers of intents.
Our synthetic training data creation service allows us to create large training sets with minimal effort, reducing the initial cost of chatbot creation and improving a chatbot's usability from its initial release. If you only have a limited number of training phrases per intent and a large number of intents, our service can generate the remaining variants needed to go from really poor results to a chatbot with much greater accuracy in its responses. We carried out these tests with Dialogflow, but the conclusions are relevant for ML-based bot platforms in general: our Artificial Training Data service can drastically improve the results of any chatbot platform that is highly dependent on training data.
Chatbot Training Never Ends!
I've looked at the benefits of using our training data at the early stages of a chatbot project. However, it's important to note that the key to long-term success is to constantly monitor your chatbot and continue training so it keeps getting smarter, either through ongoing manual training or by scheduling regular training cycles that incorporate new utterances and conversations from real users.
If you want to know more about our chatbot training data creation services, get in touch.
About The Bot Forge
Consistently named as one of the top-ranked AI companies in the UK, The Bot Forge is a UK-based agency that specialises in chatbot & voice assistant design, development and optimisation.
If you'd like a no-obligation chat to discuss your project with one of our team, please book a free consultation.