Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization
Li, Haoran; Zhu, Junnan; Zhang, Jiajun; Zong, Chengqing
2018
Conference Date: 2018-08
Conference Location: USA
Abstract (English)

In this paper, we investigate the sentence summarization task, which produces a summary from a source sentence. Neural sequence-to-sequence models have achieved considerable success on this task, but most existing approaches focus only on improving word overlap between the generated summary and the reference, ignoring correctness: the summary should not contain erroneous information with respect to the source sentence. We argue that correctness is an essential requirement for summarization systems. Considering that a correct summary is semantically entailed by the source sentence, we incorporate entailment knowledge into abstractive summarization models. We propose an entailment-aware encoder under a multi-task framework (i.e., summary generation and entailment recognition) and an entailment-aware decoder trained with entailment Reward Augmented Maximum Likelihood (RAML). Experimental results demonstrate that our models significantly outperform baselines in terms of both informativeness and correctness.
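The two components named in the abstract, a shared encoder trained jointly on summarization and entailment recognition, and a decoder trained with an entailment-based RAML objective, can be pictured with the rough PyTorch sketch below. This is an illustrative assumption of how such a multi-task setup might look, not the authors' implementation; the module names, dimensions, pooling choice, and the weight `lambda_entail` are hypothetical.

```python
# Illustrative sketch only (not the authors' code): a bidirectional GRU encoder
# shared between summary generation and entailment recognition, plus a
# RAML-style weighting of sampled summaries by an entailment reward.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EntailmentAwareEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256, num_entail_labels=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        # Head 1: projects the pooled encoding to the decoder's initial state.
        self.dec_init = nn.Linear(2 * hid_dim, hid_dim)
        # Head 2: entailment classifier (entailment / neutral / contradiction).
        self.entail_head = nn.Linear(2 * hid_dim, num_entail_labels)

    def forward(self, src_ids):
        emb = self.embed(src_ids)          # (batch, src_len, emb_dim)
        enc_out, _ = self.encoder(emb)     # (batch, src_len, 2 * hid_dim)
        pooled = enc_out.mean(dim=1)       # simple mean pooling over time
        return self.dec_init(pooled), self.entail_head(pooled)


def multi_task_loss(summ_loss, entail_logits, entail_labels, lambda_entail=0.5):
    """Weighted sum of the summarization loss and the entailment-recognition
    loss; lambda_entail is a hypothetical trade-off weight."""
    entail_loss = F.cross_entropy(entail_logits, entail_labels)
    return summ_loss + lambda_entail * entail_loss


def raml_weights(entail_rewards, tau=0.85):
    """RAML-style weights: candidate summaries sampled for a source sentence
    are weighted in proportion to exp(reward / tau), so candidates that are
    better entailed by the source contribute more to the training objective."""
    r = torch.as_tensor(entail_rewards, dtype=torch.float)
    w = torch.exp(r / tau)
    return w / w.sum()
```

In a setup like this, gradients from both heads update the shared encoder, which is one way to push the source-side representations toward entailment-aware features, while the reward weights bias the decoder's likelihood toward summaries judged to be entailed by the source.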

Content Type: Conference Paper
Source URL: [http://ir.ia.ac.cn/handle/173211/23206]
Collection: Institute of Automation_National Laboratory of Pattern Recognition_Natural Language Processing Group
Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714
Li, Haoran, Zhu, Junnan, Zhang, Jiajun, et al. Ensure the Correctness of the Summary: Incorporate Entailment Knowledge into Abstractive Sentence Summarization[C]. USA, 2018-08.