7 results for "TEXT summarization"
Search Results
2. A Survey of Implicit Discourse Relation Recognition.
- Author
- WEI XIANG and BANG WANG
- Subjects
- *TEXT summarization, *DISCOURSE analysis, *NATURAL language processing, *DISCOURSE, *MACHINE translating
- Abstract
A discourse containing one or more sentences describes daily issues and events through which people communicate their thoughts and opinions. As sentences normally consist of multiple text segments, correctly understanding the theme of a discourse requires taking into consideration the relations between text segments. Although a connective sometimes exists in raw text to convey a relation, it is more often the case that no connective exists between two text segments even though an implicit relation holds between them. The task of implicit discourse relation recognition (IDRR) is to detect an implicit relation and classify its sense between two text segments without a connective. Indeed, the IDRR task is important to diverse downstream natural language processing tasks, such as text summarization and machine translation. This article provides a comprehensive and up-to-date survey of the IDRR task. We first summarize the task definition and the data sources widely used in the field. We categorize the main solution approaches for the IDRR task from the viewpoint of its development history. In each solution category, we present and analyze the most representative methods, including their origins, ideas, strengths, and weaknesses. We also present performance comparisons for those solutions experimented on a public corpus with standard data processing procedures. Finally, we discuss future research directions for discourse relation analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
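As a side note on the task defined in this entry: IDRR is a sentence-pair classification problem — given two adjacent text segments with no connective, predict the sense of the implicit relation between them. The toy sketch below is illustrative only (the cue-word lists and the four top-level senses follow common PDTB-style conventions, not any specific method from the survey); real IDRR systems are learned models that receive no cue words at all.

```python
# Illustrative IDRR interface: classify the implicit relation sense
# between two text segments. The cue-word heuristic is an assumed toy
# stand-in for the neural classifiers the survey covers.

PDTB_TOP_SENSES = ("Comparison", "Contingency", "Expansion", "Temporal")

# Assumed-for-illustration cue words associated with each sense.
CUES = {
    "Contingency": {"because", "so", "therefore", "result"},
    "Comparison": {"however", "but", "although"},
    "Temporal": {"then", "later", "before", "after"},
}

def classify_implicit_relation(arg1: str, arg2: str) -> str:
    """Predict a top-level relation sense for an argument pair.

    Falls back to Expansion, the most frequent sense in
    PDTB-style corpora, when no cue word is found.
    """
    tokens = set((arg1 + " " + arg2).lower().split())
    for sense, cues in CUES.items():
        if tokens & cues:
            return sense
    return "Expansion"
```

In practice the whole difficulty of IDRR is that such cues are absent, which is why the survey's methods infer the sense from the segment semantics alone.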
3. Survey of Hallucination in Natural Language Generation.
- Author
- ZIWEI JI, NAYEON LEE, FRIESKE, RITA, TIEZHENG YU, DAN SU, YAN XU, ETSUKO ISHII, YE JIN BANG, MADOTTO, ANDREA, and FUNG, PASCALE
- Subjects
- *DEEP learning, *TEXT summarization, *LANGUAGE models, *NATURAL languages, *HALLUCINATIONS, *MACHINE translating
- Abstract
Natural Language Generation (NLG) has improved exponentially in recent years thanks to the development of sequence-to-sequence deep learning technologies such as Transformer-based language models. This advancement has led to more fluent and coherent NLG, and in turn to improvements in downstream tasks such as abstractive summarization, dialogue generation, and data-to-text generation. However, it is also apparent that deep-learning-based generation is prone to hallucinating unintended text, which degrades system performance and fails to meet user expectations in many real-world scenarios. To address this issue, many studies have been presented on measuring and mitigating hallucinated text, but these have not previously been reviewed in a comprehensive manner. In this survey, we thus provide a broad overview of the research progress and challenges of the hallucination problem in NLG. The survey is organized into two parts: (1) a general overview of metrics, mitigation methods, and future directions, and (2) an overview of task-specific research progress on hallucinations in the following downstream tasks: abstractive summarization, dialogue generation, generative question answering, data-to-text generation, and machine translation. This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
4. TSSuBERT: How to Sum Up Multiple Years of Reading in a Few Tweets.
- Author
- DUSART, ALEXIS, PINEL-SAUVAGNAT, KAREN, and HUBERT, GILLES
- Abstract
The article focuses on leveraging pre-trained language models like BERT to enhance the summarization of tweet streams, introducing an extractive model that adapts the summary size automatically. It also presents a novel methodology for constructing tweet stream summarization datasets with minimal human effort and releases a new dataset called TES 2012-2016, potentially advancing research in incremental summarization.
- Published
- 2023
- Full Text
- View/download PDF
5. Hierarchical Sliding Inference Generator for Question-driven Abstractive Answer Summarization.
- Author
- BING LI, PENG YANG, HANLIN ZHAO, PENGHUI ZHANG, and ZIJIAN LIU
- Subjects
- *TEXT summarization, *NATURAL languages
- Abstract
Text summarization for non-factoid question answering (NQA) aims at identifying the core information in redundant answers under the guidance of questions, which can dramatically improve answer readability and comprehensibility. Most existing approaches focus on extracting query-related sentences to construct a summary, where the logical connections of natural language and the hierarchical, interpretable semantic associations are often neglected, degrading performance. To address these issues, we propose a novel question-driven abstractive answer summarization model, called the Hierarchical Sliding Inference Generator (HSIG), to form inferable and interpretable summaries by explicitly introducing hierarchical information reasoning between questions and corresponding answers. Specifically, we first apply an elaborately designed hierarchical sliding fusion inference model to determine the most relevant sentence-level question representation, which provides a deeper interpretable basis for sentence selection in summarization and further increases computational performance while following the semantic inheritance structure. Additionally, to improve summary fluency, we construct a double-driven selective generator to integrate various semantic information from the two mutual question-and-answer perspectives. Experimental results illustrate that, compared with state-of-the-art baselines, our model achieves remarkable improvement on two benchmark datasets and in particular improves ROUGE-1 by 2.46 points on PubMedQA, which demonstrates the superiority of our model for abstractive summarization with hierarchical sequential reasoning. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
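The ROUGE-1 gain reported in this entry refers to unigram-overlap F1 between generated and reference summaries. A minimal sketch of the metric (simple whitespace tokenization assumed; the official implementation adds stemming and other preprocessing):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference.

    Counter intersection clips each word's overlap count at the minimum
    of its occurrences in the two texts, as in the standard definition.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

A 2.46-point improvement on this 0-100-scaled score (i.e., 0.0246 in the formulation above) is a substantial margin on summarization benchmarks.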
6. Follow the Timeline! Generating an Abstractive and Extractive Timeline Summary in Chronological Order.
- Author
- XIUYING CHEN, MINGZHE LI, SHEN GAO, ZHANGMING CHAN, DONGYAN ZHAO, XIN GAO, XIANGLIANG ZHANG, and RUI YAN
- Subjects
- *TEXT summarization, *TIME series analysis, *GLOBAL method of teaching
- Abstract
Today, timestamped web documents related to a general news query flood the Internet, and timeline summarization addresses this by concisely summarizing the evolution trajectory of events along the timeline. Unlike traditional document summarization, timeline summarization needs to model the time-series information of the input events and summarize important events in chronological order. To tackle this challenge, in this article we propose our Unified Timeline Summarizer, which can generate abstractive and extractive timeline summaries in time order. Concretely, in the encoder part, we propose a graph-based event encoder that relates multiple events according to their content dependency and learns a global representation of each event. In the decoder part, to ensure the chronological order of the abstractive summary, we propose to extract the feature of event-level attention in its generation process, with sequential information retained, and use it to simulate the evolutionary attention of the ground-truth summary. The event-level attention can also be used to assist in extracting a summary, where the extracted summary also comes in time sequence. We augment the previous Chinese large-scale timeline summarization dataset and collect a new English timeline dataset. Extensive experiments conducted on these datasets and on the out-of-domain Timeline 17 dataset show that our Unified Timeline Summarizer achieves state-of-the-art performance in terms of both automatic and human evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
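The defining output constraint described in this entry — important events presented in chronological order — can be sketched as follows. This toy function illustrates only the output contract (assumed format, one dated line per event); the Unified Timeline Summarizer itself uses a graph-based event encoder and attention-guided decoding to select and generate those events.

```python
from datetime import date

def order_timeline(events: list[tuple[date, str]]) -> list[str]:
    """Emit one dated line per event, sorted chronologically.

    Illustrates the time-ordered output that timeline summarization
    requires, unlike traditional document summarization.
    """
    return [
        f"{d.isoformat()}: {text}"
        for d, text in sorted(events, key=lambda e: e[0])
    ]
```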
7. CATS: Customizable Abstractive Topic-based Summarization.
- Author
- BAHRAINIAN, SEYED ALI, ZERVEAS, GEORGE, CRESTANI, FABIO, and EICKHOFF, CARSTEN
- Subjects
- *TEXT summarization, *CATS, *MEETING minutes, *COMPUTER science, *NARRATION, *FELIDAE
- Abstract
Neural sequence-to-sequence models are the state-of-the-art approach to abstractive summarization of textual documents, useful for producing condensed versions of source text narratives without being restricted to words from the original text. Despite the advances in abstractive summarization, custom generation of summaries (e.g., toward a user's preference) remains unexplored. In this article, we present CATS, an abstractive neural summarization model that summarizes content in a sequence-to-sequence fashion while also introducing a new mechanism to control the underlying latent topic distribution of the produced summaries. We empirically illustrate the efficacy of our model in producing customized summaries and present findings that facilitate the design of such systems. We use the well-known CNN/DailyMail dataset to evaluate our model. Furthermore, we present a transfer-learning method and demonstrate the effectiveness of our approach in a low-resource setting, i.e., abstractive summarization of meeting minutes, where combining the main available meeting-transcript datasets, AMI and the International Computer Science Institute (ICSI) corpus, results in merely a few hundred training documents. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
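The control mechanism described in this entry steers the latent topic distribution of generated summaries. A toy sketch of the general idea (not CATS's actual mechanism, which operates on a learned latent topic model inside the sequence-to-sequence network): bias next-token probabilities toward an explicit, user-chosen topic word set.

```python
import math

def topic_biased_distribution(logits: dict[str, float],
                              topic_words: set[str],
                              bias: float = 2.0) -> dict[str, float]:
    """Add a fixed bonus to topic-word logits, then softmax-normalize.

    Hypothetical illustration of topic-controlled decoding: raising the
    scores of topic words shifts the output distribution toward them.
    """
    shifted = {w: s + (bias if w in topic_words else 0.0)
               for w, s in logits.items()}
    z = sum(math.exp(s) for s in shifted.values())
    return {w: math.exp(s) / z for w, s in shifted.items()}
```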