What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers

Authors :
Kim, Boseop
Kim, HyoungSeok
Lee, Sang-Woo
Lee, Gichang
Kwak, Donghyun
Jeon, Dong Hyeon
Park, Sunghyun
Kim, Sungju
Kim, Seonhoon
Seo, Dongpil
Lee, Heungsub
Jeong, Minyoung
Lee, Sungjae
Kim, Minsub
Ko, Suk Hyun
Kim, Seokhun
Park, Taeyong
Kim, Jinuk
Kang, Soyoung
Ryu, Na-Hyeon
Yoo, Kang Min
Chang, Minsuk
Suh, Soobin
In, Sookyo
Park, Jinseong
Kim, Kyungduk
Kim, Hiun
Jeong, Jisu
Yeo, Yong Goo
Ham, Donghoon
Park, Dongju
Lee, Min Young
Kang, Jaewook
Kang, Inho
Ha, Jung-Woo
Park, Woomyoung
Sung, Nako
Publication Year :
2021

Abstract

GPT-3 shows the remarkable in-context learning ability of large-scale language models (LMs) trained on hundreds of billions of tokens of data. Here we address some remaining issues less reported by the GPT-3 paper, such as a non-English LM, the performance of differently sized models, and the effect of recently introduced prompt optimization on in-context learning. To this end, we introduce HyperCLOVA, an 82B-parameter Korean variant of GPT-3 trained on a Korean-centric corpus of 560B tokens. Enhanced by our Korean-specific tokenization, HyperCLOVA with our training configuration achieves state-of-the-art in-context zero-shot and few-shot learning performance on various downstream tasks in Korean. We also show the performance benefits of prompt-based learning and demonstrate how it can be integrated into the prompt engineering pipeline. We then discuss the possibility of materializing the No Code AI paradigm by introducing HyperCLOVA Studio, an interactive prompt engineering interface that provides AI prototyping capabilities to non-experts in ML. Lastly, we demonstrate the potential of our methods with three successful in-house applications.

Comment: Accepted to EMNLP 2021 as a long paper. Fixed some typos.
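
The abstract's claims center on in-context zero-shot and few-shot learning, where labeled demonstrations are placed directly in the prompt and the model is asked to continue the pattern. The sketch below is only a minimal illustration of that prompt format for a hypothetical Korean sentiment-classification task; build_few_shot_prompt and generate are illustrative names, not part of HyperCLOVA Studio's actual interface.

# Minimal sketch of few-shot in-context prompting as described in the abstract.
# Assumptions: the task is Korean sentiment classification, and `generate` is a
# placeholder for whatever large-LM completion endpoint is actually available.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled demonstrations and an unlabeled query into one prompt."""
    blocks = [f"문장: {text}\n감정: {label}" for text, label in examples]
    blocks.append(f"문장: {query}\n감정:")
    return "\n\n".join(blocks)

def generate(prompt: str) -> str:
    """Placeholder for a call to a large-LM completion API (assumption, not a real SDK)."""
    raise NotImplementedError("Replace with the completion endpoint you use.")

examples = [
    ("이 영화 정말 재미있어요.", "긍정"),   # "This movie is really fun."  -> positive
    ("서비스가 너무 별로였어요.", "부정"),  # "The service was quite bad." -> negative
]
prompt = build_few_shot_prompt(examples, "배우들의 연기가 인상 깊었다.")
# The model's continuation after "감정:" is taken as the predicted label, e.g.:
# print(generate(prompt))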

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2109.04650
Document Type :
Working Paper