MGCoT: Multi-Grained Contextual Transformer for table-based text generation.
- Source :
- Expert Systems with Applications, Sep 2024, Vol. 250
- Publication Year :
- 2024
Abstract
- Recent advances in Transformers have led to a revolution in table-based text generation. However, most existing Transformer-based architectures ignore the rich contexts among input tokens distributed across multi-level units (e.g., cell, row, or column), sometimes leading to unfaithful generated text that fails to establish accurate association relationships and misses vital information. In this paper, we propose the Multi-Grained Contextual Transformer (MGCoT), a novel architecture that fully capitalizes on the multi-grained contexts among input tokens and thus strengthens the capacity for table-based text generation. The key primitive, the Multi-Grained Contexts (MGCo) module, involves two components: a local context sub-module that adaptively gathers neighboring tokens to form token-wise local context features, and a global context sub-module that consistently aggregates tokens from a broader range to form a shared global context feature. The former models the short-range dependencies reflecting the salience of tokens within similar fine-grained units (e.g., cell and row) attending to the query token, while the latter captures the long-range dependencies reflecting the significance of each token within similar coarse-grained units (e.g., multiple rows or columns). Based on the fused multi-grained contexts, MGCoT can flexibly and holistically model the content of a table across multi-level structures. On three benchmark datasets, ToTTo, FeTaQA, and Tablesum, MGCoT outperforms strong baselines by a large margin on the quality of the generated texts, demonstrating the effectiveness of multi-grained context modeling. Our source code is available at https://github.com/Cedric-Mo/MGCoT.
  • The contexts for each token in a table vary from a structural perspective.
  • Forming local contexts allows models to capture context over a dynamic range.
  • Forming a shared global context allows models to capture the consensus.
  • Models can flexibly and holistically comprehend a table via multi-grained contexts.
  [ABSTRACT FROM AUTHOR]
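- The abstract describes the MGCo module as two paths: a local aggregation over neighboring tokens and a shared global aggregation over the whole sequence. The sketch below illustrates that two-path idea in PyTorch; the class name MGCoSketch, the window radius, the attention-pooling choices, and the concatenation-based fusion are illustrative assumptions, not the authors' released implementation (see the linked GitHub repository for that).

```
# A minimal PyTorch sketch of the two-path context idea described above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MGCoSketch(nn.Module):
    """Hypothetical stand-in for the MGCo module (names and sizes assumed).

    Local path: each token attends over a fixed window of neighbors,
    approximating the "adaptively gathers neighboring tokens" step.
    Global path: one attention-pooled vector shared by every token,
    approximating the "shared global context feature".
    """

    def __init__(self, d_model: int, radius: int = 2):
        super().__init__()
        self.radius = radius                         # half-width of the local window
        self.local_score = nn.Linear(d_model, 1)     # scores each neighbor per token
        self.global_score = nn.Linear(d_model, 1)    # one shared score per token
        self.fuse = nn.Linear(2 * d_model, d_model)  # fuses local + global contexts

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token features for the linearized table
        B, T, D = x.shape
        r = self.radius

        # Local context: weighted average over a (2r+1)-token neighborhood.
        xp = F.pad(x, (0, 0, r, r))                   # (B, T + 2r, D)
        neigh = xp.unfold(1, 2 * r + 1, 1)            # (B, T, D, 2r+1)
        neigh = neigh.permute(0, 1, 3, 2)             # (B, T, 2r+1, D)
        w = F.softmax(self.local_score(neigh).squeeze(-1), dim=-1)
        local = torch.einsum('btk,btkd->btd', w, neigh)

        # Global context: one softmax-pooled vector broadcast to all tokens.
        g = F.softmax(self.global_score(x).squeeze(-1), dim=-1)   # (B, T)
        glob = torch.einsum('bt,btd->bd', g, x)                   # (B, D)
        glob = glob.unsqueeze(1).expand(B, T, D)

        # Fuse the multi-grained contexts back into the token features.
        return x + self.fuse(torch.cat([local, glob], dim=-1))


# Usage: refine a batch of 2 sequences of 10 tokens with 64-dim features.
x = torch.randn(2, 10, 64)
out = MGCoSketch(d_model=64)(x)   # shape: (2, 10, 64)
```

- Under these assumptions, the residual addition leaves each token's original feature intact while enriching it with both a token-specific local context and a sequence-wide consensus vector, which mirrors the short-range/long-range split the abstract attributes to the two sub-modules.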
- Subjects :
- *TRANSFORMER models
*SOURCE code
*TEXT recognition
*HIGH dynamic range imaging
Details
- Language :
- English
- ISSN :
- 0957-4174
- Volume :
- 250
- Database :
- Academic Search Index
- Journal :
- Expert Systems with Applications
- Publication Type :
- Academic Journal
- Accession number :
- 177285684
- Full Text :
- https://doi.org/10.1016/j.eswa.2024.123742