
ALERT: A Comprehensive Benchmark for Assessing Large Language Models' Safety through Red Teaming

Simone Tedeschi, Felix Friedrich, Patrick Schramowski, Kristian Kersting, Roberto Navigli, Huu Nguyen, Bo Li

Abstract

When building Large Language Models (LLMs), it is paramount to bear safety in mind and protect them with guardrails. Indeed, LLMs should never generate content promoting or normalizing harmful, illegal, or unethical behavior that may harm individuals or society. This principle applies to both normal and adversarial use. In response, we introduce ALERT, a large-scale benchmark for assessing safety based on a novel fine-grained risk taxonomy. It is designed to evaluate the safety of LLMs through red teaming methodologies and consists of more than 45k instructions categorized using our novel taxonomy. By subjecting LLMs to adversarial testing scenarios, ALERT aims to identify vulnerabilities, inform improvements, and enhance the overall safety of language models. Furthermore, the fine-grained taxonomy enables researchers to perform in-depth evaluations that also help assess alignment with various policies. In our experiments, we extensively evaluate 10 popular open- and closed-source LLMs and demonstrate that many of them still struggle to attain reasonable levels of safety.
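The evaluation the abstract describes reduces to a simple aggregation: each benchmark instruction carries a fine-grained category label, a judge deems each model response safe or unsafe, and safety scores are the fraction of safe responses, overall and per category. A minimal sketch in plain Python (the category names and judged results below are illustrative placeholders, not ALERT's released data or tooling):

```python
from collections import defaultdict

def category_safety_scores(results):
    """Compute per-category and overall safety rates.

    `results` is a list of (category, is_safe) pairs, where is_safe
    is a judge's verdict on the model's response to one instruction.
    """
    per_cat = defaultdict(lambda: [0, 0])  # category -> [safe_count, total_count]
    for category, is_safe in results:
        per_cat[category][1] += 1
        if is_safe:
            per_cat[category][0] += 1
    scores = {cat: safe / total for cat, (safe, total) in per_cat.items()}
    overall = (sum(safe for safe, _ in per_cat.values())
               / sum(total for _, total in per_cat.values()))
    return scores, overall

# Hypothetical judged outputs: (taxonomy category, judged-safe flag)
results = [
    ("hate_speech", True), ("hate_speech", False),
    ("weapons", True), ("weapons", True),
]
scores, overall = category_safety_scores(results)
# scores["hate_speech"] == 0.5, scores["weapons"] == 1.0, overall == 0.75
```

Reporting scores per taxonomy category, rather than a single aggregate, is what allows the policy-specific analysis mentioned above: a model can look safe on average while failing badly in one category.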

April 2024, arXiv
