Jaewoo Song

Starting with this post, I’m going to write about my personal project: developing a multi-turn chatbot.

I actually began this project about two months ago and meant to write about it earlier, but that was difficult because I kept modifying the code through a lot of trial and error.

The model is still being trained and some parts have not even been implemented yet, but I don’t want to delay this post too much, so I decided to write this article as an introduction outlining the project.

Today, I will talk about the purpose of this project and give rough descriptions of the data and models used.



Multi-turn chatbot

What I’m trying to do is develop an open-domain chatbot, trained on English dialogue datasets, that can generate responses reflecting the multi-turn context.

Probably everyone knows what a chatbot is, but some people might not be familiar with the term “multi-turn”.

Let’s assume that when two speakers, speaker 1 and speaker 2, exchange one utterance each, we call this one turn.

Then, simply put, a multi-turn dialogue is one consisting of several such turns.

But this simple definition also counts a series of independent turns that are irrelevant to each other as multi-turn, which is not the case I want to include.

By a multi-turn dialogue, I mean one in which a speaker has to understand the previous history, in other words the overall context of the conversation, in order to process the utterance at a given time step.


Here are examples of a single-turn dialogue and a multi-turn dialogue.

[Figure: Examples of a single-turn dialogue and a multi-turn dialogue.]


In a single-turn conversation, where one main topic is finished within one turn, we don’t need to be aware of any additional information, since we only have to respond to the current input.

However, in a multi-turn case like the one above, the speakers must consider the previous context in order to act appropriately in the current situation.

If the bot cannot remember from the dialogue history that it has a pet and only sees the input at the current time step, an error like the one below occurs.

[Figure: The difference between context-aware and context-unaware dialogue.]


Therefore, to make a decent multi-turn chatbot, we need to think not only about how to produce a proper response to the current input, but also about how to refer to the overall context of the dialogue properly.



Data Analysis

Next, let’s talk about the datasets.

I decided to combine $4$ multi-turn dialogue datasets to make the data much larger: DailyDialog (Li et al., 2017), EmpatheticDialogues (Rashkin et al., 2018), Persona-Chat (Zhang et al., 2018), and BlendedSkillTalk (Smith et al., 2020).



Each dataset has slightly different characteristics and purposes, but their common ground is that they were all built for dialogue training.

More specific information is available from each paper.


Now it is time to analyze the size of each dataset.

I counted the number of utterances and dialogues, and split the data into a train set and a validation set at a ratio of $0.85:0.15$ based on the number of dialogues.
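As a rough illustration, such a split might be done as follows in Python (a minimal sketch, not my actual script; the function name and data layout are my own assumptions, with each dataset loaded as a list of dialogues and each dialogue as a list of utterance strings):

```python
import random

def split_dialogues(dialogues, train_ratio=0.85, seed=0):
    """Shuffle the dialogues and split them into train/validation sets
    based on the number of dialogues, not the number of utterances."""
    dialogues = list(dialogues)
    random.Random(seed).shuffle(dialogues)
    cut = int(len(dialogues) * train_ratio)
    return dialogues[:cut], dialogues[cut:]

# Dummy data: 100 dialogues of 2 utterances each.
data = [[f"utterance {i}-1", f"utterance {i}-2"] for i in range(100)]
train, valid = split_dialogues(data)
print(len(train), len(valid))  # 85 15
```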

The results are as follows.

[Figure: Analysis of the dataset sizes.]


Next, in order to set the maximum sequence length and the maximum number of history utterances, I analyzed the distributions of the dialogue lengths and the utterance lengths.

The dialogue length was calculated by counting the number of tokens after tokenizing with the GPT-2 tokenizer.
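For reference, counting token lengths with Huggingface’s GPT-2 tokenizer might look like the following (again a minimal sketch rather than my actual analysis code, using the same assumed list-of-lists dialogue format as above):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def length_stats(dialogues):
    """Collect the token length of every utterance and every full dialogue."""
    utter_lens, dialogue_lens = [], []
    for dialogue in dialogues:
        token_counts = [len(tokenizer.encode(u)) for u in dialogue]
        utter_lens.extend(token_counts)
        dialogue_lens.append(sum(token_counts))
    return utter_lens, dialogue_lens

utter_lens, dialogue_lens = length_stats([
    ["Do you have a pet?", "Yes, I have a dog."],
])
print(max(utter_lens), max(dialogue_lens))
```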

Let’s look at the charts below.

[Figure: Bar charts of the utterance length and dialogue length distributions.]


Based on the above results, we can set the maximum utterance length and the maximum context length (maximum number of previous utterances to consider).

This is a matter of hyperparameters, so it will be handled in the next post.



Models

There are several ways to implement the multi-turn dialogue models.


A traditional method is to use a recurrence-based model, such as an RNN, to store the overall context of the dialogue.

Examples of research using this method include Olabiyi et al., 2019, Mensio et al., 2018, Chen et al., 2018 (HVMN), and Serban et al., 2016 (HRED), which are still presented in many research papers as baselines.

But these approaches suffer from the chronic problems of RNNs: the long-term dependency problem, in which information is lost as the dialogue grows longer, and unwanted noise, since an RNN takes in the entire conversation history.


Due to the shortcomings above, methods using multi-head attention have been proposed recently.

As anyone familiar with the Transformer knows, multi-head attention can greatly alleviate these problems of RNNs, since it can easily refer to all positions through matrix multiplication and extract only the necessary information through the attention scores.

That is, we can obtain context vectors through multi-head attention at the utterance or history level, and produce better actions by attending to this information more efficiently.
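To make this concrete, here is the core operation in PyTorch, reduced to a single head for brevity (a minimal sketch of scaled dot-product attention, not code from any of the cited papers):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    """Attend to all positions at once via matrix multiplication."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5    # (batch, L_q, L_k)
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    weights = F.softmax(scores, dim=-1)              # attention scores
    return weights @ v, weights                      # context vectors, scores

# Toy example: a batch of 1, 5 history positions, model dimension 8.
q = torch.randn(1, 5, 8); k = torch.randn(1, 5, 8); v = torch.randn(1, 5, 8)
context, attn = scaled_dot_product_attention(q, k, v)
print(context.shape, attn.shape)  # torch.Size([1, 5, 8]) torch.Size([1, 5, 5])
```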

These methods include Vlasov et al., 2020 (TED policy) and Zhang et al., 2019 (ReCoSa), etc.

The former, published by Rasa, is a multi-turn action retrieval model designed especially for task-oriented systems, and the latter, which stands for “Relevant Contexts with Self-attention”, is the response generation method I actually implemented.


Let’s look at the details of the ReCoSa structure.

It is almost the same as the original Transformer; the only difference is the encoder part.

[Figure: How the ReCoSa structure produces a context-aware response.]


In the figure above, the parts in red happen in the encoder and the parts in blue are conducted in the decoder.

The decoder is actually the same as the original: it applies masked multi-head attention to the sequence generated so far, and then attends to the encoder output.
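For example, the look-ahead mask used by the masked multi-head attention can be built like this (a small sketch of the standard Transformer mask, not ReCoSa-specific code):

```python
import torch

def subsequent_mask(size):
    """Position i may only attend to positions <= i (True = allowed)."""
    return torch.tril(torch.ones(size, size, dtype=torch.bool))

print(subsequent_mask(4))
# tensor([[ True, False, False, False],
#         [ True,  True, False, False],
#         [ True,  True,  True, False],
#         [ True,  True,  True,  True]])
```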

On the other hand, in the encoder, the word-level encoding is conducted by an LSTM.

The last hidden state from this becomes the utterance embedding.

So a turn-level positional encoding is required to reflect the temporal order of the turns.

The subsequent step consists of multi-head attention between the turns, similar to the original encoder, from which we obtain an encoder output that reflects the importance of each history utterance.
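Putting the encoder together, a simplified PyTorch sketch might look like the following (my own simplification: a single-layer LSTM, learned turn-position embeddings, and one attention layer; the paper’s actual implementation differs in detail):

```python
import torch
import torch.nn as nn

class ReCoSaEncoder(nn.Module):
    """Simplified ReCoSa-style encoder: LSTM utterance embeddings,
    turn-level positional encoding, then self-attention across turns."""
    def __init__(self, vocab_size, d_model=256, n_heads=8, max_turns=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)
        self.word_lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.turn_pos = nn.Embedding(max_turns, d_model)
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, history):  # history: (batch, n_turns, seq_len) token ids
        batch, n_turns, seq_len = history.shape
        words = self.embedding(history.view(batch * n_turns, seq_len))
        _, (h_n, _) = self.word_lstm(words)           # last hidden state
        utter_emb = h_n[-1].view(batch, n_turns, -1)  # one vector per utterance
        positions = torch.arange(n_turns, device=history.device)
        utter_emb = utter_emb + self.turn_pos(positions)
        # Attention scores over the turns reflect each history's importance.
        out, attn = self.self_attn(utter_emb, utter_emb, utter_emb)
        return out, attn

enc = ReCoSaEncoder(vocab_size=1000)
hist = torch.randint(0, 1000, (2, 4, 10))  # 2 dialogues, 4 turns, 10 tokens each
out, attn = enc(hist)
print(out.shape, attn.shape)  # torch.Size([2, 4, 256]) torch.Size([2, 4, 4])
```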


The next model I’m going to implement is the multi-turn dialogue generation structure using the pre-trained GPT-2 (Generative Pre-Training 2).

At a time when GPT-3 is becoming a hot topic, you might think GPT-2 is a little behind the trend, but I think it is worth trying, since many studies using this model have been conducted over the past year or two.

Since GPT-2 is specialized in language modeling, the method is quite intuitive: concatenate all previous history utterances and have the model generate the next sentence.

That is, unlike the approaches above, which conduct context encoding and decoding separately, this approach produces the next output by looking at the context and the current input at the same time through self-attention.
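As a quick illustration with the pre-trained (not yet fine-tuned) model, the concatenation approach might look like this; using the end-of-text token as a turn separator is my own illustrative choice here, and the fine-tuned setups below use more elaborate input schemes:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Concatenate the whole history into one context and let the LM continue it.
history = ["Do you have a pet?", "Yes, I have a dog named Max.", "What breed is he?"]
context = tokenizer.eos_token.join(history) + tokenizer.eos_token
input_ids = tokenizer.encode(context, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=input_ids.size(-1) + 20,  # generate up to 20 new tokens
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0, input_ids.size(-1):]))
```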

Related research includes Zhang et al., 2020 (DialoGPT), Olabiyi & Mueller, 2019 (DLGNet), and Huggingface’s Conversational AI with Transfer Learning.


In particular, I will try to implement the fine-tuning method released by Huggingface myself.

Huggingface won first place in the automatic metrics track of ConvAI2 (the Conversational Intelligence Challenge 2), and I think this approach will be capable, since they made the details public through their blog and GitHub repository.

The description is as follows.

[Figure: Huggingface’s transfer learning structure using GPT-2 for ConvAI2.]


The original task is to build a model that generates responses considering both the persona information and the previous history, but since I am only interested in the multi-turn approach, I will exclude the persona part.

The noticeable point is that they trained the model not only on language modeling but also on next-sentence prediction, which classifies whether the candidate sentence concatenated at the end is the appropriate reply or a distractor.

So they used the GPT-2 DoubleHeads model, which has two different heads to conduct the two separate tasks.

By reducing both loss values, the model acquires not only a proper response generation capacity but also a sense of what a natural response should be.

This is very similar to BERT’s pre-training method.
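A minimal sketch of this two-headed training step with Huggingface’s GPT2DoubleHeadsModel could look like the following (the context, candidates, and padding scheme are illustrative; in practice the padding positions would also be masked out of the LM loss with -100 labels):

```python
import torch
from transformers import GPT2DoubleHeadsModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# Two candidates for the same context: the gold reply and a distractor.
context = "Do you have a pet?"
candidates = ["Yes, I have a dog named Max.", "The weather is nice today."]
encoded = [tokenizer.encode(context + tokenizer.eos_token + c) for c in candidates]

max_len = max(len(ids) for ids in encoded)
input_ids = torch.full((1, 2, max_len), tokenizer.eos_token_id, dtype=torch.long)
mc_token_ids = torch.zeros(1, 2, dtype=torch.long)
for i, ids in enumerate(encoded):
    input_ids[0, i, :len(ids)] = torch.tensor(ids)
    mc_token_ids[0, i] = len(ids) - 1  # classify at each candidate's last token

outputs = model(
    input_ids,
    mc_token_ids=mc_token_ids,
    labels=input_ids,                  # language-modeling head target
    mc_labels=torch.tensor([0]),       # index of the gold reply
)
loss = outputs.loss + outputs.mc_loss  # optimize both objectives together
loss.backward()
```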



This is it for today’s post.

We have checked the analysis of data and the brief details of the models.

Next time, I’m going to talk about the actual implementation code for the ReCoSa structure and the experiment results.



Li, Y., Su, H., Shen, X., Li, W., Cao, Z., & Niu, S. (2017). Dailydialog: A manually labelled multi-turn dialogue dataset. arXiv preprint arXiv:1710.03957. https://arxiv.org/abs/1710.03957.
Rashkin, H., Smith, E. M., Li, M., & Boureau, Y. L. (2018). Towards empathetic open-domain conversation models: A new benchmark and dataset. arXiv preprint arXiv:1811.00207. https://arxiv.org/abs/1811.00207.
Zhang, S., Dinan, E., Urbanek, J., Szlam, A., Kiela, D., & Weston, J. (2018). Personalizing dialogue agents: I have a dog, do you have pets too?. arXiv preprint arXiv:1801.07243. https://arxiv.org/abs/1801.07243.
Smith, E. M., Williamson, M., Shuster, K., Weston, J., & Boureau, Y. L. (2020). Can You Put it All Together: Evaluating Conversational Agents' Ability to Blend Skills. arXiv preprint arXiv:2004.08449. https://arxiv.org/abs/2004.08449.
Olabiyi, O., Khazane, A., Salimov, A., & Mueller, E. T. (2019). An adversarial learning framework for a persona-based multi-turn dialogue model. arXiv preprint arXiv:1905.01992. https://arxiv.org/abs/1905.01992.
Mensio, M., Rizzo, G., & Morisio, M. (2018, April). Multi-turn qa: A rnn contextual approach to intent classification for goal-oriented systems. In Companion Proceedings of the The Web Conference 2018 (pp. 1075-1080). https://dl.acm.org/doi/abs/10.1145/3184558.3191539.
Chen, H., Ren, Z., Tang, J., Zhao, Y. E., & Yin, D. (2018, April). Hierarchical variational memory network for dialogue generation. In Proceedings of the 2018 World Wide Web Conference (pp. 1653-1662). https://dl.acm.org/doi/abs/10.1145/3178876.3186077.
Serban, I. V., Sordoni, A., Lowe, R., Charlin, L., Pineau, J., Courville, A., & Bengio, Y. (2016). A hierarchical latent variable encoder-decoder model for generating dialogues. arXiv preprint arXiv:1605.06069. https://arxiv.org/abs/1605.06069.
Vlasov, V., Mosig, J. E., & Nichol, A. (2019). Dialogue transformers. arXiv preprint arXiv:1910.00486. https://arxiv.org/abs/1910.00486.
Zhang, H., Lan, Y., Pang, L., Guo, J., & Cheng, X. (2019). Recosa: Detecting the relevant contexts with self-attention for multi-turn dialogue generation. arXiv preprint arXiv:1907.05339. https://arxiv.org/abs/1907.05339.
Zhang, Y., Sun, S., Galley, M., Chen, Y. C., Brockett, C., Gao, X., ... & Dolan, B. (2019). Dialogpt: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. https://arxiv.org/abs/1911.00536.
Olabiyi, O., & Mueller, E. T. (2019). DLGNet: A Transformer-based Model for Dialogue Response Generation. arXiv preprint arXiv:1908.01841. https://arxiv.org/abs/1908.01841.
How to build a State-of-the-Art Conversational AI with Transfer Learning. (2019, May 9). https://medium.com/huggingface/how-to-build-a-state-of-the-art-conversational-ai-with-transfer-learning-2d818ac26313.