TR2018-161
Adversarial Training and Decoding Strategies for End-to-end Neural Conversation Models
- "Adversarial Training and Decoding Strategies for End-to-end Neural Conversation Models", Computer Speech and Language, DOI: 10.1016/j.csl.2018.08.006, Vol. 54, pp. 122-139, December 2018.
@article{Hori2018dec2,
  author = {Hori, Takaaki and Wang, Wen and Koji, Yusuke and Hori, Chiori and Harsham, Bret A. and Hershey, John},
  title = {Adversarial Training and Decoding Strategies for End-to-end Neural Conversation Models},
  journal = {Computer Speech and Language},
  year = 2018,
  volume = 54,
  pages = {122--139},
  month = dec,
  publisher = {Elsevier},
  doi = {10.1016/j.csl.2018.08.006},
  url = {https://www.merl.com/publications/TR2018-161}
}
Abstract:
This paper presents adversarial training and decoding methods for neural conversation models that generate natural responses given dialog contexts. In our prior work, we built several end-to-end conversation systems for the 6th Dialog System Technology Challenges (DSTC6) Twitter help-desk dialog task. These systems included novel extensions of sequence adversarial training, example-based response extraction, and Minimum Bayes-Risk (MBR) based system combination. In DSTC6, our systems achieved the best performance on most objective measures, such as BLEU and METEOR scores, and decent performance on a subjective measure based on human rating. In this paper, we provide the complete set of our DSTC6 experiments and further extend the training and decoding strategies, focusing on improving the subjective measure by combining the responses of three adversarial models. Experimental results demonstrate that the extended methods improve the human rating score and outperform the best score in DSTC6.
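To illustrate the system-combination idea mentioned in the abstract, the following is a minimal, hypothetical sketch of Minimum Bayes-Risk selection over candidate responses: choose the hypothesis with the highest expected gain against all other hypotheses under a weighted hypothesis distribution. This is not the paper's implementation; the function names are invented, the weights are assumed uniform, and a simple token-overlap F1 is used as a stand-in for a BLEU-like gain function.

```python
from collections import Counter

def overlap_gain(hyp, ref):
    """Token-overlap F1: a toy stand-in for a BLEU-like gain function."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    common = sum((h & r).values())
    if common == 0:
        return 0.0
    precision = common / sum(h.values())
    recall = common / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def mbr_select(hypotheses, weights=None):
    """Return the hypothesis maximizing expected gain over the hypothesis set.

    With uniform weights this favors the response most similar to the
    others, i.e., the 'consensus' response among the combined systems.
    """
    if weights is None:
        weights = [1.0 / len(hypotheses)] * len(hypotheses)
    def expected_gain(h):
        return sum(w * overlap_gain(h, h2) for h2, w in zip(hypotheses, weights))
    return max(hypotheses, key=expected_gain)

# Hypothetical outputs from three conversation systems for one dialog context.
responses = [
    "thanks for reaching out , please dm us your order number",
    "please dm us your order number so we can help",
    "sorry to hear that !",
]
print(mbr_select(responses))
```

With uniform weights, the outlier response ("sorry to hear that !") shares no tokens with the others and so has low expected gain; MBR selection picks one of the two mutually similar responses instead.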