Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback

Authors:
Roit, Paul
Ferret, Johan
Shani, Lior
Aharoni, Roee
Cideron, Geoffrey
Dadashi, Robert
Geist, Matthieu
Girgin, Sertan
Hussenot, Léonard
Keller, Orgad
Momchev, Nikola
Ramos, Sabela
Stanczyk, Piotr
Vieillard, Nino
Bachem, Olivier
Elidan, Gal
Hassidim, Avinatan
Pietquin, Olivier
Szpektor, Idan
Publication Year:
2023

Abstract

Despite the seeming success of contemporary grounded text generation systems, they often tend to generate factually inconsistent text with respect to their input. This phenomenon is emphasized in tasks like summarization, in which the generated summaries should be corroborated by their source article. In this work, we leverage recent progress on textual entailment models to directly address this problem for abstractive summarization systems. We use reinforcement learning with reference-free, textual entailment rewards to optimize for factual consistency and explore the ensuing trade-offs, as improved consistency may come at the cost of less informative or more extractive summaries. Our results, according to both automatic metrics and human evaluation, show that our method considerably improves the faithfulness, salience, and conciseness of the generated summaries.

Comment: ACL 2023
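
To make the reward idea concrete, here is a minimal Python sketch of a reference-free entailment reward, assuming an off-the-shelf NLI classifier (`roberta-large-mnli` via Hugging Face Transformers). The function name and example texts are illustrative; the paper's actual entailment model, scoring granularity, and RL algorithm are not detailed in this record and may differ.

```python
# Minimal sketch of a reference-free entailment reward for summarization,
# assuming an off-the-shelf NLI model; not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "roberta-large-mnli"  # premise -> hypothesis entailment classifier
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
nli_model.eval()


@torch.no_grad()
def entailment_reward(source: str, summary: str) -> float:
    """Probability that the source article entails the summary.

    Reference-free: only the (source, generated summary) pair is needed,
    no gold summary.
    """
    inputs = tokenizer(source, summary, truncation=True, return_tensors="pt")
    logits = nli_model(**inputs).logits  # [contradiction, neutral, entailment]
    probs = torch.softmax(logits, dim=-1)
    return probs[0, 2].item()  # index 2 = ENTAILMENT for roberta-large-mnli


# Illustrative usage: a faithful vs. an unfaithful summary of the same source.
article = "The city council approved the new park budget on Tuesday."
print(entailment_reward(article, "The council approved a park budget."))    # high
print(entailment_reward(article, "The council rejected the park budget."))  # low
```

In an RL training loop, this scalar would score each sampled summary against its source article. The trade-off the abstract mentions arises because a policy can maximize entailment trivially by copying source sentences, which is why the method must balance consistency against salience and conciseness.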

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2306.00186
Document Type:
Working Paper