Previous post: https://songstudio.info/tech/tech-38
In the last post, we discussed the training objective and data pre-processing procedure for pre-training the DialogueSentenceBERT.
This time, I will elaborate on the details of fine-tuning tasks for model comparison and the performances of each model after evaluations.
Let us begin.
In this project, two tasks are implemented to evaluate each sentence embedding model’s performance.
These are intent detection and system action prediction. Both are text classification problems: each predicts one or more suitable classes from the embedding vector of an utterance produced by the model, passed through an additional classification layer.
As mentioned in the previous post, sentence vectors can be pooled from the "[CLS]" position or by mean/max pooling.
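As a minimal PyTorch sketch (the tensor here is random, just to show the shapes: an encoder output of `[batch, seq_len, hidden]` plus a padding mask), the three pooling strategies can be written as:

```python
import torch

def pool(hidden_states, attention_mask, strategy="cls"):
    # hidden_states: [batch, seq_len, hidden], attention_mask: [batch, seq_len]
    if strategy == "cls":
        # take the embedding at the "[CLS]" position (index 0)
        return hidden_states[:, 0]
    mask = attention_mask.unsqueeze(-1).float()  # [batch, seq_len, 1]
    if strategy == "mean":
        # average over non-padded positions only
        return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
    if strategy == "max":
        # max over non-padded positions (padded positions set to -inf)
        return hidden_states.masked_fill(mask == 0, float("-inf")).max(1).values
    raise ValueError(strategy)

h = torch.randn(2, 5, 8)
m = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
print(pool(h, m, "mean").shape)  # torch.Size([2, 8])
```

All three strategies reduce the token-level output to a single vector per utterance, which is then fed to the classification layer.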
Intent detection is a multi-class classification task that identifies the user’s intent or purpose from an utterance.
System action prediction is a problem where the model should predict the proper system actions to take given a dialogue context as input; it is a multi-label classification task since multiple actions may be predicted at once.
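To make the distinction concrete, here is a hedged PyTorch sketch of the two classification heads; the action count of $20$ and the batch of random embeddings are hypothetical placeholders:

```python
import torch
import torch.nn as nn

hidden_size = 768
num_intents = 77   # e.g. Banking77 has 77 intent classes
num_actions = 20   # hypothetical number of possible system actions

# Intent detection: multi-class -> exactly one class per utterance
# (softmax over intents, trained with CrossEntropyLoss)
intent_head = nn.Linear(hidden_size, num_intents)

# Action prediction: multi-label -> independent per-action sigmoids
# (trained with BCEWithLogitsLoss)
action_head = nn.Linear(hidden_size, num_actions)

emb = torch.randn(4, hidden_size)                    # pooled utterance embeddings
intent_pred = intent_head(emb).argmax(dim=-1)        # one intent per sample
action_pred = torch.sigmoid(action_head(emb)) > 0.5  # possibly several actions per sample
print(intent_pred.shape, action_pred.shape)
```

The only structural difference is the output activation and loss: argmax over a softmax for the multi-class task versus independent thresholded sigmoids for the multi-label task.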
Moreover, since the datasets for action prediction consist of multi-turn dialogues, I conducted experiments by increasing the number of previous utterances in order to see the difference in model performance when handling multi-turn histories.
This illustration will help you understand how each fine-tuning task is constructed.
The learning rate scheduler increases the learning rate linearly during the warm-up steps and then decays it linearly until training finishes.
Also, the batch size is $16$, and the gradient clipping is set to $1.0$.
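As a minimal PyTorch sketch of this setup (the actual training script may use a library scheduler instead; the warm-up and total step counts here are placeholders), the linear warm-up/decay schedule and gradient clipping look like:

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the real encoder + head
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

warmup_steps, total_steps = 100, 1000  # hypothetical values

def lr_lambda(step):
    # linear increase during warm-up, then linear decay to zero
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# inside the training loop, after loss.backward():
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # clipping at 1.0
optimizer.step()
scheduler.step()
```

At step $0$ the multiplier is $0$, it reaches $1$ at the end of warm-up, and it falls back to $0$ at the final step.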
The rest of the hyper-parameters are a little different between each dataset, which are omitted in this post and included in the file additionally attached at the end of the post.
Since Banking77 and ATIS do not have separate validation sets, I sampled 10% of the train set as the validation set.
Since these intent detection datasets have single-turn utterances, there is no need for multi-turn consideration when pre-processing.
For intent detection, all samples are processed by attaching the "[CLS]" token to the front and the "[SEP]" token to the back.
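Conceptually, each single-turn sample therefore looks like the following (in practice a BERT-style tokenizer attaches these special tokens automatically; the utterance below is made up):

```python
def build_input(utterance: str) -> str:
    # attach the special tokens around a single-turn utterance
    return "[CLS] " + utterance.strip() + " [SEP]"

print(build_input("what is my account balance"))
# [CLS] what is my account balance [SEP]
```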
For evaluation, I used the simple accuracy score for these datasets.
However, since the OOS dataset has a special out-of-scope intent, which is one of the main focuses of that dataset, additional metrics were also calculated: in-scope accuracy, out-of-scope accuracy, and out-of-scope recall.
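As a rough sketch of what two of these extra metrics compute (the label name `"oos"` and the example data are hypothetical; the actual scoring scripts are the ones from ToD-BERT’s repository), in-scope accuracy is measured only over samples whose gold label is in-scope, and out-of-scope recall only over samples whose gold label is out-of-scope:

```python
def oos_metrics(gold, pred, oos_label="oos"):
    # indices of in-scope and out-of-scope gold samples
    in_idx = [i for i, g in enumerate(gold) if g != oos_label]
    oos_idx = [i for i, g in enumerate(gold) if g == oos_label]
    # accuracy restricted to in-scope samples
    in_acc = sum(pred[i] == gold[i] for i in in_idx) / max(1, len(in_idx))
    # fraction of gold out-of-scope samples predicted as out-of-scope
    oos_recall = sum(pred[i] == oos_label for i in oos_idx) / max(1, len(oos_idx))
    return in_acc, oos_recall

gold = ["pay_bill", "oos", "balance", "oos"]
pred = ["pay_bill", "oos", "pay_bill", "balance"]
print(oos_metrics(gold, pred))  # (0.5, 0.5)
```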
The scripts for these metrics were taken from ToD-BERT’s official repository.
System action prediction
As I mentioned before, since there are multiple actions which a system can take for each user input, this task is a multi-label classification.
In addition, I built each input sequence by concatenating a certain number of recent utterances, ending with the user’s query utterance, in order to include the multi-turn context.
The maximum number of utterances included can be $1$, $3$, or $5$, which is a hyper-parameter.
The concatenation details for this are identical to those of pre-training data.
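A minimal sketch of this input construction, assuming utterances are joined with the separator token as in the pre-training data (the exact scheme follows the previous post; the dialogue below is made up):

```python
def build_context(history, max_turns=3, sep=" [SEP] "):
    # keep only the `max_turns` most recent utterances, ending with the user query
    recent = history[-max_turns:]
    return "[CLS] " + sep.join(recent) + " [SEP]"

dialogue = ["hi, i need a taxi", "where are you going?", "to the station"]
print(build_context(dialogue, max_turns=3))
# [CLS] hi, i need a taxi [SEP] where are you going? [SEP] to the station [SEP]
```

With `max_turns=1` only the query utterance survives, so the single-turn and multi-turn settings share one code path.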
Moreover, since MultiWoZ 2.3 does not have separate train, validation, and test sets, I split the dialogues randomly with a ratio of $8:1:1$.
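A simple sketch of such a random $8:1:1$ split (over placeholder dialogue IDs; the seed and counts are arbitrary):

```python
import random

random.seed(0)
dialogues = [f"dialogue_{i}" for i in range(1000)]  # placeholder dialogue IDs
random.shuffle(dialogues)

n = len(dialogues)
train = dialogues[: int(0.8 * n)]                   # 80% for training
valid = dialogues[int(0.8 * n): int(0.9 * n)]       # 10% for validation
test = dialogues[int(0.9 * n):]                     # 10% for testing
print(len(train), len(valid), len(test))  # 800 100 100
```

Splitting at the dialogue level (rather than the turn level) keeps all turns of one conversation inside the same partition.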
For evaluation, I used $4$ evaluation metrics, which are sample-averaged F1 score, micro-averaged F1 score, macro-averaged F1 score, and exact accuracy.
Each metric is explained in my previous post, “Averaging methods for F1 score calculation in multi-label classification”.
You can check the details here.
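For reference, all four metrics are available in scikit-learn; with toy indicator matrices (made up here, rows = samples and columns = possible system actions), they can be computed as:

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])  # gold action sets
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 0]])  # predicted action sets

print(f1_score(y_true, y_pred, average="samples", zero_division=0))  # per-sample F1, averaged
print(f1_score(y_true, y_pred, average="micro", zero_division=0))    # global TP/FP/FN counts
print(f1_score(y_true, y_pred, average="macro", zero_division=0))    # per-label F1, averaged
print(accuracy_score(y_true, y_pred))                                # exact (subset) match
```

For multi-label inputs, `accuracy_score` returns subset accuracy, i.e. the fraction of samples whose entire predicted action set matches the gold set exactly.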
Although I experimented with all three pooling strategies on the baseline models, I only report results for “[CLS]” and mean pooling: max pooling did not perform competitively with the others, and due to limited resources and time, I was only able to obtain the pre-trained model with “[CLS]” and mean pooling.
In addition, the best score for each metric is marked in bold, and scores from the pre-trained DialogueSentenceBERT are marked in red. I also wrote my model’s rank next to its score to show how well it performs compared with the other models.
First, the results of intent detection are as follows.
First of all, ConvBERT tops almost all metrics by an overwhelming margin, which shows how effective pre-training with masked language modeling (MLM) on large amounts of conversational data is.
ToD-BERT’s scores are lower than those of the original BERT in “[CLS]” pooling, but overall, it achieves higher scores in mean pooling.
Also, it is notable that although SentenceBERT is not intended for this kind of transfer learning and is trained on an NLI task, it performs quite well.
DialogueSentenceBERT, which I have focused on, does not perform well on most metrics, ranking $4$th~$5$th, and its results on the Banking77 data are the worst.
Although its scores on the ATIS data look fine, it is hard to call them meaningful, since most models’ scores there are not significantly different.
In addition, it is worth noting how performance varies with the pooling method across metrics, datasets, and models. Overall, “[CLS]” pooling seems better than mean pooling, but in some cases the scores are almost identical, and notably for ToD-BERT, mean pooling scored overwhelmingly higher on the OOS data.
Next, I report the results of system action prediction datasets as follows.
The first image shows the results with "[CLS]" pooling, and the second one with mean pooling.
The results are more complicated than before.
My model is still not satisfactory, but it is partially effective in this task, achieving the best score on several metrics.
In particular, its performance is noticeable on DSTC2 with "[CLS]" pooling and on the Simulated data with mean pooling.
Unfortunately, DialogueSentenceBERT is worse than I expected on MultiWoZ, not only compared with the other strong models but also with the original BERT.
In addition, ToD-BERT, which showed modest results on the intent detection task, performs well enough in this task to match ConvBERT. This can be attributed to ToD-BERT’s pre-training objectives, MLM and response selection on multi-turn TOD datasets: these tasks are more similar to action prediction with multi-turn context, and its pre-training data is also more closely related to the action prediction datasets than to the intent detection data.
Of course, the performances of ConvBERT are still amazing.
Also, in this task, SentenceBERT does not produce notably better results than the basic BERT. In fact, it is inferior to BERT on a few metrics, perhaps because the limitation of SentenceBERT, which is trained on an NLI task, stands out here: action prediction is more complicated and difficult than intent detection.
Another thing we should focus on is how the scores change as the maximum number of turns increases.
In MultiWoZ, the gains from increasing the number of turns are not as large as in the other datasets; in fact, some scores even decrease, especially when the number of turns changes from $3$ to $5$.
On the other hand, in DSTC2, the effect of multi-turn context starts to grow, and in the Simulated data it is very large.
Considering that MultiWoZ data generally has much longer utterances than other data, excessive information seems to cause deterioration.
In other words, for DSTC2 and Simulated, which consist of short and simple utterances, concatenating more previous utterances has a positive effect; in MultiWoZ, however, including too many tokens might disperse the attention across tokens and reduce the focus on important information.
In conclusion, the pre-trained model showed a lot of progress compared to the last attempt.
Unlike the previous approach, which was inferior to the baselines on almost all scores, this version came close to the other models and even took the lead on some scores.
However, the results were still below expectations, and the main reason might be the ambiguous decision boundary between the positive and the negative samples.
As I mentioned in the last post, I trained the model to detect the similarity between two utterances or contexts by considering the samples from the same script file as a similar pair and those from the different files as a distant pair.
However, I think this approach was too naive.
In more detail, even if two utterances are sampled from the same subtitle file, they might represent entirely different subjects or contexts depending on the length or the flow of the movie.
Likewise, we cannot clearly say that the utterances from different subtitles are necessarily negative, since both movies might contain similar genres or contents.
Therefore, there might be quite a few samples that run contrary to my purpose, which is to map semantically similar utterances closer together and dissimilar ones farther apart.
It would have been better to extract positive pairs only from consecutive utterances and to construct negative samples by classifying each movie’s genre or theme more carefully.
So far, I have introduced this renewed attempt at DialogueSentenceBERT.
In the end, I still did not get satisfactory results, but many improvements were made, and it was also a chance to gain more knowledge and know-how.
It is not clear whether I will conduct this project again, but if I come up with another new idea or get good feedback, I will try to have a chance to improve it again.
Thank you for visiting this page and I always welcome your feedback and opinions.
The link to the GitHub repository of this project is as follows.