Paper ID | HLT-18.1 |
Paper Title | HIERARCHICAL SPEAKER-AWARE SEQUENCE-TO-SEQUENCE MODEL FOR DIALOGUE SUMMARIZATION |
Authors | Yuejie Lei, Yuanmeng Yan, Zhiyuan Zeng, Keqing He, Ximing Zhang, Weiran Xu, Beijing University of Posts and Telecommunications, China |
Session | HLT-18: Language Understanding 6: Summarization and Comprehension |
Location | Gather.Town |
Session Time | Friday, 11 June, 13:00 - 13:45 |
Presentation Time | Friday, 11 June, 13:00 - 13:45 |
Presentation | Poster |
Topic | Human Language Technology: [HLT-SDTM] Spoken Document Retrieval and Text Mining |
Abstract | Traditional document summarization models cannot handle dialogue summarization well. In conversations with multiple speakers and complex referential relationships among personal pronouns, the summaries these models predict are often riddled with pronoun confusion. In this paper, we propose a hierarchical Transformer-based model for dialogue summarization. It encodes dialogues from words to utterances and clearly distinguishes the relationships between speakers and their corresponding personal pronouns. Through this coarse-to-fine procedure, our model generates summaries more accurately and relieves the confusion of personal pronouns. Experiments on the dialogue summarization dataset SAMSum show that the proposed model achieves results comparable to strong baselines, and empirical analysis confirms that our method relieves pronoun confusion in the predicted summaries. |
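To make the word-to-utterance encoding described in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a hierarchical dialogue encoder with speaker embeddings. It is not the authors' implementation; the class name, dimensions, mean pooling, and the way speaker embeddings are injected are illustrative assumptions only.

# Illustrative sketch (not the paper's code): encode each utterance from its
# words, then encode the dialogue from the resulting utterance vectors.
# Speaker embeddings mark who said each utterance, which is the kind of signal
# a speaker-aware model can use to resolve personal pronouns.
import torch
import torch.nn as nn

class HierarchicalDialogueEncoder(nn.Module):
    def __init__(self, vocab_size, num_speakers, d_model=256, nhead=4, num_layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        self.speaker_emb = nn.Embedding(num_speakers, d_model)
        word_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        utt_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.word_encoder = nn.TransformerEncoder(word_layer, num_layers)   # word level
        self.utt_encoder = nn.TransformerEncoder(utt_layer, num_layers)     # utterance level

    def forward(self, token_ids, speaker_ids):
        # token_ids: (num_utts, max_utt_len), speaker_ids: (num_utts,)
        word_states = self.word_encoder(self.word_emb(token_ids))  # encode words in each utterance
        utt_vecs = word_states.mean(dim=1)                         # pool words into one utterance vector
        utt_vecs = utt_vecs + self.speaker_emb(speaker_ids)        # add speaker identity
        # second pass over the whole dialogue (treated as a batch of one sequence of utterances)
        return self.utt_encoder(utt_vecs.unsqueeze(0)).squeeze(0)  # (num_utts, d_model)

if __name__ == "__main__":
    enc = HierarchicalDialogueEncoder(vocab_size=1000, num_speakers=4)
    tokens = torch.randint(0, 1000, (3, 12))    # 3 utterances, 12 tokens each
    speakers = torch.tensor([0, 1, 0])
    print(enc(tokens, speakers).shape)          # torch.Size([3, 256])

In a full summarizer, the utterance-level states from such an encoder would feed a sequence-to-sequence decoder; the details of that decoder and of the speaker-pronoun linking are specific to the paper and are not reproduced here.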