Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order

Authors :
Nakamura, Taishi
Mishra, Mayank
Tedeschi, Simone
Chai, Yekun
Stillerman, Jason T
Friedrich, Felix
Yadav, Prateek
Laud, Tanmay
Chien, Vu Minh
Zhuo, Terry Yue
Misra, Diganta
Bogin, Ben
Vu, Xuan-Son
Karpinska, Marzena
Dantuluri, Arnav Varma
Kusa, Wojciech
Furlanello, Tommaso
Yokota, Rio
Muennighoff, Niklas
Pai, Suhas
Adewumi, Tosin
Laippala, Veronika
Yao, Xiaozhe
Junior, Adalberto
Ariyak, Alpay
Drozd, Aleksandr
Clive, Jordan
Gupta, Kshitij
Chen, Liangyu
Sun, Qi
Tsui, Ken
Persaud, Noah
Fahmy, Nour
Chen, Tianlong
Bansal, Mohit
Monti, Nicolo
Dang, Tai
Luo, Ziyang
Bui, Tien-Tung
Navigli, Roberto
Mehta, Virendra
Blumberg, Matthew
May, Victor
Nguyen, Huu
Pyysalo, Sampo
Publication Year :
2024

Abstract

Pretrained language models underpin several AI applications, but their high training cost limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. However, existing models face challenges: limited multilingual capabilities, catastrophic forgetting during continual pretraining, the high computational cost of pretraining from scratch, and compliance with AI safety and development laws. This paper presents Aurora-M, a 15B-parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435 billion additional tokens, Aurora-M surpasses 2 trillion tokens in total training. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, aligning its development not only with conventional red-teaming considerations but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Aurora-M is rigorously evaluated across various tasks and languages, demonstrating robustness against catastrophic forgetting and outperforming alternatives in multilingual settings, particularly in safety evaluations. To promote responsible open-source LLM development, Aurora-M and its variants are released at https://huggingface.co/collections/aurora-m/aurora-m-models-65fdfdff62471e09812f5407.

Comment: Preprint
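
Since the abstract points to released checkpoints on the Hugging Face Hub, the following is a minimal sketch of loading one with the `transformers` library. The exact model identifier is an assumption for illustration; the linked collection lists the actual released variants.

```python
# Minimal sketch: load an Aurora-M checkpoint with Hugging Face transformers.
# The model ID below is a hypothetical example; consult the collection at
# https://huggingface.co/collections/aurora-m/aurora-m-models-65fdfdff62471e09812f5407
# for the released variants.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurora-m/aurora-m-base"  # hypothetical ID; replace with a variant from the collection

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 15B parameters; half precision keeps memory manageable
    device_map="auto",           # spread layers across available devices
)

# Aurora-M is trained on both natural language and code, so a code prompt works too.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```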

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438541997
Document Type :
Electronic Resource