Chatbots and automated assistants can support your customers across many use cases and be deployed on a number of different platforms.

They can be integrated into platforms such as websites, Facebook Messenger, WhatsApp, Slack, and Microsoft Teams.

The Rapid Response Virtual Agent program includes open source templates for companies to add coronavirus content to their own chatbots.

Artificial intelligence and machine learning are continuing to take a front-row seat in fighting COVID-19, with Google Cloud launching an AI chatbot on Wednesday. The chatbot, which it calls the Rapid Response Virtual Agent program, will provide information to battle the COVID-19 pandemic, as announced in a Google blog.

The program will enable Google Cloud customers to respond more quickly to questions from their own customers about the coronavirus. It’s designed for organizations that need to provide information related to the COVID-19 pandemic to their customers, such as government agencies, healthcare and public health organizations, and the travel, financial services, and retail industries.

Google also offers Contact Center AI for 24/7 self-service support on COVID-19 questions via a chatbot or over the phone. Google also allows businesses to add COVID-19 content to their own virtual agents by integrating open-source templates from organizations that have already launched similar initiatives. For instance, Verily partnered with Google Cloud to launch the Pathfinder virtual agent template for health systems and hospitals. It enables customers to create chat or voice bots that answer questions about COVID-19 symptoms and provide guidance from public health authorities such as the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), according to the Google blog.

The Contact Center AI’s Rapid Response Virtual Agent program is available in any of the 23 languages supported by Dialogflow.

Google has provided a template to rapidly create a Dialogflow agent: You can find the template here. There is also documentation on how to build and deploy a virtual agent, whether voice or chat.

We’ve been looking at this template in more detail and have created our own chatbot. This is a work in progress and something we will be updating and improving daily. You can interact with this chatbot in the bottom right of this page.

Training Data for Chatbots

I’m going to look at the challenges in creating a chatbot that can effectively answer questions about its specific domain. In particular, I’m going to look at the challenges, and possible solutions, in creating a chatbot with reasonable conversational ability at its initial implementation. Every chatbot project is different, but clients often come to us with a large knowledge base which they want a chatbot to support from its release, yet with very little training data.

We are going to concentrate on a Dialogflow project for our examples; however, the challenges and solutions are similar for all of the best-known NLP engines: Watson, Rasa, LUIS, and so on.

The Challenge

One of the key problems with modern chatbot creation is that chatbots need large amounts of training data.
If you want your chatbot to understand a specific intention, you need to provide it with a large number of phrases that convey that intention. In a Dialogflow agent these training phrases are sometimes called utterances, and Dialogflow recommends at least 10 training phrases per intent.
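As an illustration of what this requirement looks like in practice, the sketch below builds an intent record in the shape of Dialogflow’s JSON export format and warns when it falls short of the 10-phrase guideline. The intent name and phrases are invented for this example, not taken from a real agent:

```python
# Minimal sketch of a Dialogflow-style intent definition with its
# training phrases. Names and phrasings are illustrative.

def build_intent(display_name, phrases, minimum=10):
    """Build an intent record, warning if it has fewer than `minimum` phrases."""
    if len(phrases) < minimum:
        print(f"Warning: '{display_name}' has only {len(phrases)} training phrases")
    return {
        "displayName": display_name,
        "trainingPhrases": [
            {"type": "EXAMPLE", "parts": [{"text": p}]} for p in phrases
        ],
    }

intent = build_intent(
    "what_is_an_angel_investor",
    [
        "what is an angel investor",
        "can you explain what an angel investor does",
        "define angel investor",
        "who counts as an angel investor",
        "tell me about angel investors",
        "what do you mean by angel investor",
        "angel investor meaning",
        "explain the term angel investor",
        "what's an angel investor exactly",
        "describe an angel investor",
    ],
)
print(len(intent["trainingPhrases"]))  # 10
```

Even this toy intent needs 10 distinct phrasings; multiply that across dozens of intents and the data requirement grows quickly.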

Depending on the chatbot’s field of application, thousands of enquiries in a specific subject area can be required to make it ready for use, with each of these lines of enquiry needing multiple training phrases.

The training process of an AI-powered chatbot means that it learns from each new enquiry. The more requests a chatbot has processed, the better trained it is: the NLU (Natural Language Understanding) model is continually improved, and the bot’s detection patterns are refined. Unfortunately, a large number of additional queries are necessary to optimize the bot; working towards a recognition rate approaching 90-100% often means a long bedding-in process of several months.

Data Scarcity

One of the main issues in today’s chatbot generation is that large amounts of training data are required to meet the challenges described above: a large number of phrases conveying each intention must be supplied before the chatbot can reliably understand it.

To date, these large training corpora have had to be generated manually. This can be a time-consuming job with an associated increase in project cost. One of the main issues we have faced is that clients often want to see quick results from a chatbot implementation. These projects frequently provide information across a wide-ranging domain, and may not have many chat transcripts or emails to work with when creating the initial training model. In these cases there is often not enough training data, so it takes time to reach decent, accurate match rates.

The Solution


The Bot Forge offers an artificial training data service to automate training-phrase creation for your specific domain or chatbot use case. Our process automatically generates intent-variation datasets covering the different ways that users from different demographic groups might express the same intent, which can be used as the base training data for your chatbot.
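As a much-simplified sketch of how intent variations can be generated automatically, the example below combines slot fillers into many phrasings of the same underlying intent. Our production pipeline is more sophisticated; the templates here are invented purely for illustration:

```python
import itertools

# Toy template-based variation generator: every combination of an
# opener, subject, and optional closer yields one training phrase
# for the same intent.

openers = ["what is", "can you explain", "tell me", "I'd like to know"]
subjects = ["an angel investor", "angel investing"]
closers = ["", "please", "in simple terms"]

def generate_variations(openers, subjects, closers):
    phrases = []
    for o, s, c in itertools.product(openers, subjects, closers):
        # Skip empty fragments so the optional closer doesn't leave
        # a trailing space.
        phrases.append(" ".join(part for part in (o, s, c) if part))
    return phrases

variations = generate_variations(openers, subjects, closers)
print(len(variations))  # 4 * 2 * 3 = 24 variations from 9 fragments
```

Nine short fragments already yield 24 phrasings; real generation also varies vocabulary, register, and word order rather than just slot contents.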

Multi NLP platform support
Multi-language support

Our training data is not restricted solely to Dialogflow agents; the output data can be formatted for the following agent types:

  • rasa: Rasa JSON format
  • luis: LUIS JSON format
  • witai: Wit.ai JSON format
  • watson: Watson JSON format
  • lex: Lex JSON format
  • dialogflow: Dialogflow JSON format

We provide training datasets in 100+ languages.

We offer our synthetic training data creation services to our chatbot clients. However, if you already have your own chatbot project and just want to boost its conversational ability we can provide synthetic training data to meet your needs.

Testing the Solution

We wanted to test the effectiveness of our synthetic training data in a Dialogflow chatbot agent by varying the number of utterances per intent.

Dialogflow test agents

We carried out three tests (A, B, and C) with three separate Dialogflow agents, each with identical agent settings. Each agent had the same three intents, providing information about the topic of angel investors:

  • what_is_an_angel_investor
  • what_percentage_do_angel_investors_want
  • do_angel_investors_seek_control

In the first test (A) the chatbot was trained with 2 hand-tagged training phrases (utterances) per intent. Test (B) had 10 training phrases from our own synthetic training data per intent and test (C) had between 25 and 60 training phrases per intent.

The Test

We tested each agent with 12 separate questions similar to but distinct from the ones in the training sets.

We didn’t carry out any training during testing once the chatbots were created.

We recorded the percentage of queries matched to the correct intent, to an incorrect intent, or to no match, as well as the intent detection confidence, from 0.0 (completely uncertain) to 1.0 (completely certain), reported in the agent response.
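The scoring itself is straightforward; below is a minimal sketch using made-up result data, where each record is (expected intent, detected intent, confidence) and a no-match (fallback) is represented by a detected intent of None:

```python
# Sketch of scoring an intent-detection test run. The sample data is
# invented for illustration, not from the tests described in this post.

def score(results):
    n = len(results)
    correct = sum(1 for exp, det, _ in results if det == exp)
    no_match = sum(1 for _, det, _ in results if det is None)
    incorrect = n - correct - no_match
    avg_conf = sum(conf for *_, conf in results) / n
    return {
        "correct_pct": 100 * correct / n,
        "incorrect_pct": 100 * incorrect / n,
        "no_match_pct": 100 * no_match / n,
        "avg_confidence": round(avg_conf, 2),
    }

results = [
    ("what_is_an_angel_investor", "what_is_an_angel_investor", 0.82),
    ("do_angel_investors_seek_control", "what_percentage_do_angel_investors_want", 0.41),
    ("what_percentage_do_angel_investors_want", None, 0.0),
    ("do_angel_investors_seek_control", "do_angel_investors_seek_control", 0.77),
]
print(score(results))
```

With this toy data the function reports a 50% correct match rate, 25% incorrect, 25% no match, and an average confidence of 0.5.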

Overall test results

View the results here

Test                        % correct match   % incorrect match   % no match   Avg. intent detection confidence
Test A (2 utterances)             50%               42%                8%                  0.644
Test B (10 utterances)            91%                9%                0%                  0.759
Test C (25-60 utterances)        100%                0%                0%                  0.857

Test A achieved a 50% match rate. We observed a significant improvement in test B with the introduction of our synthetic training data: the match rate rose from 50% to 91%, while test C, with 25-60 training phrases per intent, achieved a match rate of 100%. The average intent detection confidence also grew with each test.

In summary, chatbots need a decent amount of training data to provide accurate results. If there is not enough training data, a chatbot’s accuracy suffers, and it can take some time of live use and retraining to reach acceptable performance levels. At the same time, it can be costly and time-consuming to create training data for a chatbot that needs to handle large numbers of intents.

Our synthetic training data creation service allows us to create large training sets with little manual effort, reducing the initial costs of chatbot creation and improving a chatbot’s usability from its initial release. If you only have a limited number of training phrases per intent and a large number of intents, our service can generate the remaining variants needed to go from poor results to a chatbot with much greater accuracy in its responses. We carried out these tests with Dialogflow, but our conclusions are relevant for ML-based bot platforms in general: our Artificial Training Data service can drastically improve the results of chatbot platforms that are highly dependent on training data.

Chatbot Training Never Ends!

I’ve looked at the benefits of using our training data in the early stages of a chatbot project. However, it’s important to note that the key to long-term success is to constantly monitor your chatbot and continue training it, either through ongoing human effort or by scheduling regular training cycles that incorporate new utterances and conversations from real users.
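One simple way to support such a training cycle is to queue low-confidence live queries for human review before folding them back into the training set. Below is a minimal sketch with an assumed 0.5 confidence threshold and invented example traffic:

```python
# Sketch of an ongoing-training workflow: split live traffic into
# confident matches and a queue of low-confidence queries for human
# review and eventual retraining. Threshold and data are illustrative.

LOW_CONFIDENCE = 0.5

def triage(live_queries, threshold=LOW_CONFIDENCE):
    """Each query is (text, detected_intent, confidence)."""
    review_queue, confident = [], []
    for text, intent, confidence in live_queries:
        (confident if confidence >= threshold else review_queue).append(
            (text, intent, confidence)
        )
    return confident, review_queue

live = [
    ("what's an angel?", "what_is_an_angel_investor", 0.31),
    ("do angels want board seats", "do_angel_investors_seek_control", 0.78),
]
confident, review = triage(live)
print(len(review))  # 1 query queued for human review and retraining
```

Reviewed queries, once confirmed or relabelled by a human, become exactly the kind of real-user utterances that keep the training set growing after launch.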

If you want to know more about our chatbot training data creation services, get in touch.

Appendix

View the test results here