Benchmarking Dynamic Convolutional Neural Network with Language Modeling Pre-Training for Sentiment and Question Classification Tasks
- Source:
- Journal of Modern Technology & Engineering; 2023, Vol. 8 Issue 3, p189-195, 7p
- Publication Year:
- 2023
Abstract
- One-dimensional convolutional models are used for a variety of natural language processing tasks. This study revisits the Dynamic Convolutional Neural Network (DCNN) architecture and investigates how language-modeling pre-training on Wikicorpus affects the published experimental results for DCNN. To this end, the study integrates a top layer for language-modeling training into DCNN and reports benchmarks comparing the original DCNN with the pre-trained language-model version. The revisited model was then benchmarked on sentiment classification and question classification tasks. The benchmarks include transfer learning from the DCNN pre-trained for language modeling as well as versions of DCNN trained from scratch on the Stanford Sentiment Treebank and TREC Question Classification datasets.
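- As a rough illustration of the setup described in the abstract, the sketch below shows a DCNN-style encoder with dynamic k-max pooling that can be switched between a language-modeling head (for pre-training) and a classification head (for the sentiment and question tasks). This is not the authors' code: the layer sizes, number of convolutional layers, the `DCNNWithLMHead` name, and the PyTorch framing are all illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of a DCNN-style encoder
# with dynamic k-max pooling and an assumed language-modeling head.
import math
import torch
import torch.nn as nn


def k_max_pool(x: torch.Tensor, k: int) -> torch.Tensor:
    """Keep the k largest activations along the time axis, preserving order.

    x: (batch, channels, time) -> (batch, channels, k)
    """
    _, top_idx = x.topk(k, dim=2)
    top_idx, _ = top_idx.sort(dim=2)  # keep original word order
    return x.gather(2, top_idx)


class DCNNWithLMHead(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 48,
                 channels: int = 64, k_top: int = 4, num_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Two wide 1-D convolutions (sizes are assumptions, not the paper's)
        self.conv1 = nn.Conv1d(emb_dim, channels, kernel_size=7, padding=6)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=5, padding=4)
        self.k_top = k_top
        self.num_layers = 2
        # Task head for sentiment / question classification
        self.classifier = nn.Linear(channels * k_top, num_classes)
        # Assumed language-modeling top layer used during pre-training
        self.lm_head = nn.Linear(channels, vocab_size)

    def dynamic_k(self, layer: int, seq_len: int) -> int:
        # k_l = max(k_top, ceil((L - l) / L * s)), the original DCNN rule
        return max(self.k_top,
                   math.ceil((self.num_layers - layer) / self.num_layers * seq_len))

    def forward(self, tokens: torch.Tensor, lm: bool = False) -> torch.Tensor:
        # tokens: (batch, seq_len) integer word indices
        s = tokens.size(1)
        x = self.embed(tokens).transpose(1, 2)         # (batch, emb_dim, seq)
        x = torch.tanh(self.conv1(x))
        x = k_max_pool(x, self.dynamic_k(1, s))
        x = torch.tanh(self.conv2(x))
        if lm:
            # Pre-training branch: vocabulary logits per convolutional position
            return self.lm_head(x.transpose(1, 2))     # (batch, time, vocab)
        x = k_max_pool(x, self.k_top)
        return self.classifier(x.flatten(1))           # (batch, num_classes)
```

- Under this sketch, transfer learning would amount to pre-training with `lm=True` on Wikicorpus, then reusing the convolutional weights and fine-tuning the classifier on SST or TREC, while the "ground-up" baselines train the same network from random initialization.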
Details
- Language:
- English
- ISSN:
- 2519-4836
- Volume:
- 8
- Issue:
- 3
- Database:
- Complementary Index
- Journal:
- Journal of Modern Technology & Engineering
- Publication Type:
- Academic Journal
- Accession Number:
- 174926865