Expansive data, extensive model: Investigating discussion topics around LLM through unsupervised machine learning in academic papers and news.
- Source: PLoS ONE, Vol 19, Iss 5, p e0304680 (2024)
- Publication Year: 2024
- Publisher: Public Library of Science (PLoS), 2024.
Abstract
- This study presents a comprehensive exploration of topic modeling methods tailored for large language models (LLMs), using data obtained from Web of Science and LexisNexis from June 1, 2020, to December 31, 2023. The data collection process involved queries focusing on LLMs, including "Large language model," "LLM," and "ChatGPT." Various topic modeling approaches were evaluated on performance metrics including diversity and coherence. Latent Dirichlet allocation (LDA), nonnegative matrix factorization (NMF), combined topic models (CTM), and bidirectional encoder representations from transformers topic (BERTopic) were employed for the performance evaluation. Evaluation metrics were computed across platforms, with BERTopic demonstrating superior diversity and coherence on both LexisNexis and Web of Science. The experimental results reveal that news articles maintain balanced coverage across various topics and mainly focus on efforts to apply LLMs in specialized domains, whereas research papers are more narrowly concentrated on the technology itself, emphasizing technical aspects. The insights gained in this study make it possible to investigate the future direction of LLMs and the challenges they must address, and they could offer considerable value to enterprises that use LLMs to deliver services.
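- As a rough illustration of the evaluation the abstract describes (scoring a topic set on coherence and diversity), consider the minimal sketch below. The toy documents, the topic word lists, and the choice of gensim's c_v coherence measure are assumptions made for illustration; the record does not specify the paper's exact pipeline or metric definitions.

```python
# Minimal sketch: score a candidate topic set on coherence and diversity.
# The documents and topics here are toy placeholders, not the paper's data.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

docs = [
    "large language models transform natural language processing".split(),
    "chatgpt adoption in healthcare and education services".split(),
    "topic modeling with lda nmf and bertopic".split(),
]
dictionary = Dictionary(docs)

# Topics as ranked top-word lists, as produced by LDA, NMF, CTM, or BERTopic.
topics = [
    ["language", "models", "large", "natural"],
    ["chatgpt", "healthcare", "education", "services"],
]

# Coherence: how semantically consistent each topic's top words are
# (c_v is one common choice; the paper's measure is not stated in this record).
cm = CoherenceModel(topics=topics, texts=docs, dictionary=dictionary,
                    coherence="c_v")
print("coherence:", cm.get_coherence())

# Diversity: fraction of unique words across all topics' top-N word lists;
# 1.0 means no topic shares a top word with another.
all_words = [word for topic in topics for word in topic]
print("diversity:", len(set(all_words)) / len(all_words))
```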
Details
- Language: English
- ISSN: 1932-6203
- Volume: 19
- Issue: 5
- Database: Directory of Open Access Journals
- Journal: PLoS ONE
- Publication Type: Academic Journal
- Accession number: edsdoj.b4feb6fe9524c0e8fb48a0f91e49575
- Document Type: article
- Full Text: https://doi.org/10.1371/journal.pone.0304680