
Alignment at Pre-training! Towards Native Alignment for Arabic LLMs

Authors:
Liang, Juhao
Cai, Zhenyang
Zhu, Jianqing
Huang, Huang
Zong, Kewei
An, Bang
Alharthi, Mosen
He, Juncai
Zhang, Lian
Li, Haizhou
Wang, Benyou
Xu, Jinchao
Publication Year:
2024

Abstract

The alignment of large language models (LLMs) is critical for developing effective and safe language models. Traditional approaches focus on aligning models during the instruction tuning or reinforcement learning stages, referred to in this paper as 'post alignment'. We argue that alignment during the pre-training phase, which we term 'native alignment', warrants investigation. Native alignment aims to prevent unaligned content from the beginning, rather than relying on post-hoc processing. This approach leverages extensively aligned pre-training data to enhance the effectiveness and usability of pre-trained models. Our study specifically explores the application of native alignment in the context of Arabic LLMs. We conduct comprehensive experiments and ablation studies to evaluate the impact of native alignment on model performance and alignment stability. Additionally, we release open-source Arabic LLMs that demonstrate state-of-the-art performance on various benchmarks, providing significant benefits to the Arabic LLM community.

Comment: Accepted to NeurIPS 2024 main conference. See https://github.com/FreedomIntelligence/AceGPT-v2
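This record does not include implementation details, but the core idea of native alignment, curating the pre-training corpus so that unaligned content is filtered or rewritten before training rather than corrected afterwards, can be sketched briefly. The snippet below is a hypothetical illustration only: the score_alignment heuristic, the thresholds, and the rewrite step are assumptions standing in for the trained classifiers or LLM judges a real pipeline would use; the authors' actual method is described in the linked repository.

# Hypothetical sketch of "native alignment" as pre-training data curation.
# Every name and threshold here is an illustrative assumption, not the
# authors' pipeline: clean documents pass through, borderline ones are
# rewritten, and clearly unaligned ones never reach pre-training.

from dataclasses import dataclass

@dataclass
class Document:
    text: str

# Stand-in alignment scorer: a real pipeline would use a trained
# classifier or an LLM judge; a keyword heuristic keeps this runnable.
UNALIGNED_MARKERS = ("hateful", "harmful instruction", "private data")

def score_alignment(doc: Document) -> float:
    """Return a score in [0, 1]; higher means better aligned."""
    hits = sum(marker in doc.text.lower() for marker in UNALIGNED_MARKERS)
    return max(0.0, 1.0 - 0.5 * hits)

def rewrite(doc: Document) -> Document:
    """Placeholder for an LLM-based rewrite that removes unaligned spans."""
    cleaned = doc.text
    for marker in UNALIGNED_MARKERS:
        cleaned = cleaned.replace(marker, "[removed]")
    return Document(cleaned)

def curate(corpus: list[Document], keep: float = 0.9, fix: float = 0.5) -> list[Document]:
    """Keep clean documents, rewrite borderline ones, drop the rest."""
    curated = []
    for doc in corpus:
        score = score_alignment(doc)
        if score >= keep:
            curated.append(doc)           # already aligned: keep as-is
        elif score >= fix:
            curated.append(rewrite(doc))  # borderline: rewrite, then keep
        # below `fix`: drop so the text never enters pre-training
    return curated

if __name__ == "__main__":
    corpus = [
        Document("A neutral encyclopedia article about Arabic grammar."),
        Document("Mostly fine text with one harmful instruction inside."),
    ]
    for doc in curate(corpus):
        print(doc.text)

The point of the sketch is the placement of the alignment step: it runs over the corpus before any gradient update, which is what distinguishes native alignment from post alignment during instruction tuning or reinforcement learning.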

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.03253
Document Type:
Working Paper