
Emerging Vulnerabilities in Frontier Models: Multi-Turn Jailbreak Attacks

Authors:
Gibbs, Tom
Kosak-Hine, Ethan
Ingebretsen, George
Zhang, Jason
Broomfield, Julius
Pieri, Sara
Iranmanesh, Reihaneh
Rabbany, Reihaneh
Pelrine, Kellin
Publication Year:
2024

Abstract

Large language models (LLMs) are improving at an exceptional rate. However, these models are still susceptible to jailbreak attacks, which grow more dangerous as the models grow more powerful. In this work, we introduce a dataset of jailbreaks where each example can be input in either a single-turn or a multi-turn format. We show that while equivalent in content, the two formats are not equivalent in jailbreak success: defending against one structure does not guarantee defense against the other. Similarly, LLM-based guardrail filters also perform differently depending on not just the input content but the input structure. Thus, vulnerabilities of frontier models should be studied in both single- and multi-turn settings; this dataset provides a tool to do so.
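The record does not specify the dataset's schema, but the single-turn/multi-turn duality the abstract describes can be illustrated with a minimal sketch. The class name, fields, and join strategy below are hypothetical, not the paper's actual format: each example stores one ordered list of decomposed steps, which can be rendered either as separate conversational turns or concatenated into one prompt with identical content.

```python
# Hypothetical sketch of a dual-format jailbreak example. The schema
# (field names, newline join) is illustrative, not taken from the paper.
from dataclasses import dataclass


@dataclass
class JailbreakExample:
    """One request decomposed into ordered conversational steps."""
    turns: list[str]  # user messages for the multi-turn variant

    def as_multi_turn(self) -> list[dict[str, str]]:
        # Each decomposed step becomes its own user message.
        return [{"role": "user", "content": t} for t in self.turns]

    def as_single_turn(self) -> list[dict[str, str]]:
        # The same content, collapsed into a single user message.
        return [{"role": "user", "content": "\n".join(self.turns)}]


example = JailbreakExample(turns=["First step of the request...",
                                  "Second step..."])
assert len(example.as_multi_turn()) == len(example.turns)
assert len(example.as_single_turn()) == 1
```

Because both renderings derive from the same `turns` list, any difference in jailbreak success or guardrail behavior between them is attributable to input structure rather than content, which is the comparison the abstract describes.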

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2409.00137
Document Type:
Working Paper