Back to Search Start Over

WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models

Authors :
Gupta, Prannaya
Yau, Le Qi
Low, Hao Han
Lee, I-Shiang
Lim, Hugo Maximus
Teoh, Yu Xin
Koh, Jia Hng
Liew, Dar Win
Bhardwaj, Rishabh
Bhardwaj, Rajat
Poria, Soujanya
Publication Year :
2024

Abstract

WalledEval is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35 safety benchmarks covering areas such as multilingual safety, exaggerated safety, and prompt injections. The framework supports both LLM and judge benchmarking and incorporates custom mutators to test safety against various text-style mutations, such as future tense and paraphrasing. Additionally, WalledEval introduces WalledGuard, a new, small, and performant content moderation tool, and two datasets: SGXSTest and HIXSTest, which serve as benchmarks for assessing the exaggerated safety of LLMs and judges in cultural contexts. We make WalledEval publicly available at https://github.com/walledai/walledeval.<br />Comment: Under review

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2408.03837
Document Type :
Working Paper