A Bichannel Transformer with Context Encoding for Document-Driven Conversation Generation in Social Media
- Source :
- Complexity, Vol 2020 (2020)
- Publication Year :
- 2020
- Publisher :
- Hindawi-Wiley, 2020.
-
Abstract
- Along with the development of social media on the internet, dialogue systems are becoming increasingly intelligent to meet users’ needs for communication, emotion, and social intercourse. Previous studies usually use sequence-to-sequence learning with recurrent neural networks for response generation. However, recurrence-based models suffer heavily from the problem of long-distance dependencies in sequences. Moreover, some models neglect crucial information in the dialogue context, which leads to uninformative and inflexible responses. To address these issues, we present a bichannel transformer with context encoding (BCTCE) for document-driven conversation. The conversational generator consists of a context encoder, an utterance encoder, and a decoder with an attention mechanism. The encoders learn distributed representations of the input texts, and a multihop attention mechanism captures the interaction between documents and dialogues. We evaluate the proposed BCTCE by both automatic evaluation and human judgment. Experimental results on the CMU_DoG dataset indicate that the proposed model yields significant improvements over state-of-the-art baselines on most evaluation metrics, and that the responses generated by BCTCE are more informative and more relevant to the dialogues than those of the baselines.
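The abstract's central mechanism is multihop attention, in which a dialogue representation repeatedly attends over a document representation to fuse the two channels. The paper's actual architecture is not reproduced here; the following is only a minimal NumPy sketch of the general multihop-attention idea, with the hop count, residual update, and all array shapes chosen for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, key, value):
    """Scaled dot-product attention: each query row attends over key/value rows."""
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ value

def multihop_attention(utterance, document, hops=2):
    """Illustrative multihop fusion: the utterance representation attends
    over the document representation repeatedly, refining itself each hop.
    The residual update is an assumption, not the paper's exact rule."""
    state = utterance
    for _ in range(hops):
        context = attention(state, document, document)
        state = state + context
    return state

rng = np.random.default_rng(0)
doc = rng.normal(size=(6, 8))   # 6 document tokens, hidden dim 8 (toy sizes)
utt = rng.normal(size=(4, 8))   # 4 utterance tokens, hidden dim 8
fused = multihop_attention(utt, doc)
print(fused.shape)  # (4, 8): one fused vector per utterance token
```

In a full model, the `utterance` and `document` inputs would come from the two transformer encoder channels, and `fused` would feed the decoder.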
- Subjects :
- Electronic computers. Computer science
QA75.5-76.95
Details
- Language :
- English
- ISSN :
- 1076-2787 and 1099-0526
- Volume :
- 2020
- Database :
- Directory of Open Access Journals
- Journal :
- Complexity
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.20cf92edb91e4b3888bd47f0d7a58701
- Document Type :
- article
- Full Text :
- https://doi.org/10.1155/2020/3710104