Minerva: Italy’s open language model revolution
The Minerva family of large language models (LLMs) is reshaping the AI landscape in Italy. Created by Sapienza NLP in collaboration with FAIR (Future Artificial Intelligence Research) and CINECA, with additional support from Babelscape, Minerva is the first suite of AI models built from scratch to serve Italian-language needs while also supporting English. This initiative reflects a growing emphasis on developing AI technologies that align with specific cultural and linguistic contexts.
Central to this initiative is Minerva 7B, a model with 7.4 billion parameters, trained on a dataset of 2.5 trillion tokens, the equivalent of around 15 million books. Half of this dataset is in Italian, making the model exceptionally capable at understanding and generating Italian-language content. The remaining English data ensures the model's versatility for bilingual and international applications, catering to sectors like education, government, and business.
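The book-equivalence figure can be sanity-checked with a quick back-of-the-envelope calculation. The per-book word count and the words-per-token ratio below are illustrative assumptions, not numbers published by the project:

```python
# Sanity-check the "2.5 trillion tokens = ~15 million books" equivalence.
# Per-book figures are illustrative assumptions, not from the article.

TOTAL_TOKENS = 2.5e12      # Minerva 7B training corpus size (from the article)
WORDS_PER_TOKEN = 0.75     # rough conversion for subword tokenizers (assumption)
WORDS_PER_BOOK = 125_000   # a fairly long book (assumption)

tokens_per_book = WORDS_PER_BOOK / WORDS_PER_TOKEN   # about 166,667 tokens
books_equivalent = TOTAL_TOKENS / tokens_per_book

print(f"{books_equivalent / 1e6:.1f} million books")  # → 15.0 million books
```

Under these assumptions, the corpus works out to roughly 15 million books, consistent with the figure cited above.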
Minerva sets itself apart through its fully open-source design. Both the model and its training data are available for public use, enabling developers and researchers to explore, customize, and refine its capabilities. This transparency, absent in proprietary models, invites collaboration and continuous improvement. To promote safe and ethical usage, Babelscape conducted extensive safety tuning, addressing potential issues like bias and harmful content generation.
Developed using CINECA’s Leonardo supercomputer and funded through Italy’s PNRR program, Minerva represents a step forward in Italy’s technological autonomy. It aligns with the country’s strategic goals of fostering innovation in artificial intelligence while preserving linguistic and cultural heritage.
Minerva is now available for public testing through its online demo. For more insights into this groundbreaking project, visit the official Minerva tech page. This model not only showcases Italy’s capabilities in AI but also serves as a template for other nations looking to develop language models that reflect local contexts.