
Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order

T. Nakamura, M. Mishra, S. Tedeschi, Y. Chai, J. Stillerman, F. Friedrich, P. Yadav, T. Laud, V. Chien, T. Zhuo, D. Misra, B. Bogin, X. Vu, M. Karpinska, A. Dantuluri, W. Kusa, T. Furlanello, R. Yokota, N. Muennighoff, S. Pai, T. Adewumi, V. Laippala, X. Yao, A. Junior, A. Ariyak, A. Drozd, J. Clive, K. Gupta, L. Chen, Q. Sun, K. Tsui, N. Persaud, N. Fahmy, T. Chen, M. Bansal, N. Monti, T. Dang, Z. Luo, T. Bui, R. Navigli, V. Mehta, M. Blumberg, V. May, H. Nguyen, S. Pyysalo

Abstract

Pretrained language models are an integral part of AI applications, but the high computational cost of training them limits accessibility. Initiatives such as BLOOM and StarCoder aim to democratize access to pretrained models for collaborative community development. Despite these efforts, such models face challenges including limited multilingual capabilities, risks of catastrophic forgetting during continual pretraining, and the high cost of training models from scratch, alongside the need to align with AI safety standards and regulatory frameworks. This paper presents Aurora-M, a 15B-parameter multilingual open-source model trained on English, Finnish, Hindi, Japanese, Vietnamese, and code. Continually pretrained from StarCoderPlus on 435B additional tokens, Aurora-M surpasses 2T tokens in total training token count. It is the first open-source multilingual model fine-tuned on human-reviewed safety instructions, thus aligning its development not only with conventional red-teaming considerations, but also with the specific concerns articulated in the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. We evaluate Aurora-M across a wide range of tasks and languages, showcasing its robustness against catastrophic forgetting and its superior performance in multilingual settings, particularly in safety evaluations. We open-source Aurora-M and its variants to encourage responsible open-source development of large language models at https://huggingface.co/aurora-m.
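Since the models are released on the Hugging Face Hub, a minimal loading sketch with the transformers library may be useful; note that the exact repository name under the aurora-m organization is an assumption here, so check https://huggingface.co/aurora-m for the actual released variants.

```python
# Minimal sketch: loading an Aurora-M variant with Hugging Face transformers.
# The repository id below is hypothetical; consult https://huggingface.co/aurora-m
# for the real model names. device_map="auto" requires the accelerate package.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aurora-m/aurora-m-base"  # assumed variant name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple multilingual generation example (Aurora-M covers English, Finnish,
# Hindi, Japanese, Vietnamese, and code).
prompt = "Translate to Finnish: Hello, world!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```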

January 2025, Association for Computational Linguistics
  • T. Nakamura, M. Mishra, S. Tedeschi, Y. Chai, J. Stillerman, F. Friedrich, P. Yadav, T. Laud, V. Chien, T. Zhuo, D. Misra, B. Bogin, X. Vu, M. Karpinska, A. Dantuluri, W. Kusa, T. Furlanello, R. Yokota, N. Muennighoff, S. Pai, T. Adewumi, V. Laippala, X. Yao, A. Junior, A. Ariyak, A. Drozd, J. Clive, K. Gupta, L. Chen, Q. Sun, K. Tsui, N. Persaud, N. Fahmy, T. Chen, M. Bansal, N. Monti, T. Dang, Z. Luo, T. Bui, R. Navigli, V. Mehta, M. Blumberg, V. May, H. Nguyen, S. Pyysalo. 2025. Aurora-M: The First Open Source Multilingual Language Model Red-teamed according to the U.S. Executive Order. In Proceedings of the 31st International Conference on Computational Linguistics: Industry Track, pages 656-678, Abu Dhabi, UAE. Association for Computational Linguistics.
