
VinaLLaMA: LLaMA-based Vietnamese Foundation Model

Authors:
Nguyen, Quan
Pham, Huy
Dao, Dung
Publication Year:
2023

Abstract

In this technical report, we present VinaLLaMA, an open-weight, state-of-the-art (SOTA) Large Language Model for the Vietnamese language, built upon LLaMA-2 with an additional 800 billion trained tokens. VinaLLaMA not only demonstrates fluency in Vietnamese but also exhibits a profound understanding of Vietnamese culture, making it a truly indigenous model. VinaLLaMA-7B-chat, trained on 1 million high-quality synthetic samples, achieves SOTA results on key benchmarks, including VLSP, VMLU, and Vicuna Benchmark Vietnamese, marking a significant advancement in the Vietnamese AI landscape and offering a versatile resource for various applications.

Comment: VinaLLaMA Technical Report - 13 pages
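Since the record describes VinaLLaMA as an open-weight model intended for downstream applications, a minimal usage sketch may help readers. The following Python snippet loads a chat checkpoint with the Hugging Face transformers library; the repository id and the Vietnamese prompt are illustrative assumptions, not details taken from this record.

```python
# Minimal sketch: load an open-weight chat checkpoint with transformers.
# "vilm/vinallama-7b-chat" is an assumed repository id, not confirmed by
# this record; substitute the official checkpoint name when known.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vilm/vinallama-7b-chat"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative Vietnamese prompt: "Hello! Can you introduce Vietnam?"
prompt = "Xin chào! Bạn có thể giới thiệu về Việt Nam không?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```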

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.11011
Document Type:
Working Paper