Auto-RT: Automatic Jailbreak Strategy Exploration for Red-Teaming Large Language Models
Abstract
Automated red-teaming has become a crucial approach for uncovering vulnerabilities in large language models (LLMs). However, most existing methods focus on isolated safety flaws, limiting their ability to adapt to dynamic defenses and uncover complex vulnerabilities efficiently. To address this challenge, we propose Auto-RT, a reinforcement learning framework that automatically explores and optimizes complex attack strategies to effectively uncover security vulnerabilities through malicious queries. Specifically, we introduce two key mechanisms to reduce exploration complexity and improve strategy optimization: 1) Early-terminated Exploration, which accelerates exploration by focusing on high-potential attack strategies; and 2) a Progressive Reward Tracking algorithm with intermediate downgrade models, which dynamically refines the search trajectory toward successful vulnerability exploitation. Extensive experiments across diverse LLMs demonstrate that, by significantly improving exploration efficiency and automatically optimizing attack strategies, Auto-RT detects a broader range of vulnerabilities, achieving faster detection and 16.63% higher success rates compared to existing methods.
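To make the early-terminated exploration idea concrete, here is a minimal, hypothetical sketch of how such a loop could look: candidate attack strategies receive a small probing budget first, and only those whose estimated attack reward clears a threshold get the full budget. All names, thresholds, and the reward estimator are illustrative assumptions, not the Auto-RT implementation described in the paper.

```python
# Hypothetical sketch of early-terminated exploration over attack strategies.
# The reward estimator, budgets, and threshold are illustrative placeholders.
import random


def estimate_attack_reward(strategy: str, n_probes: int) -> float:
    """Stand-in for rewriting `n_probes` malicious queries under `strategy`,
    sending them to a target LLM, and scoring responses for unsafe content.
    Here it just returns a deterministic pseudo-random score in [0, 1]."""
    random.seed(hash(strategy) % (2**32) + n_probes)
    return random.random()


def explore_strategies(candidates, probe_budget=32, early_probes=4, threshold=0.3):
    """Spend a small probe budget on every candidate, then terminate exploration
    early for low-potential strategies so the remaining budget concentrates on
    high-potential ones (ranked for later RL-based optimization)."""
    surviving = []
    for strategy in candidates:
        quick_score = estimate_attack_reward(strategy, early_probes)
        if quick_score < threshold:
            continue  # early termination: skip the full evaluation budget
        full_score = estimate_attack_reward(strategy, probe_budget)
        surviving.append((strategy, full_score))
    return sorted(surviving, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    seeds = ["role-play persona", "nested translation", "code-completion framing"]
    for strategy, score in explore_strategies(seeds):
        print(f"{score:.2f}  {strategy}")
```

The design intent, as the abstract describes it, is simply to avoid spending the full exploration budget on strategies that early evidence already marks as unpromising.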
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Turning Logic Against Itself : Probing Model Defenses Through Contrastive Questions (2025)
- Can LLM Prompting Serve as a Proxy for Static Analysis in Vulnerability Detection (2024)
- LLMStinger: Jailbreaking LLMs using RL fine-tuned LLMs (2024)
- Look Before You Leap: Enhancing Attention and Vigilance Regarding Harmful Content with GuidelineLLM (2024)
- GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs (2024)
- Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models (2024)
- Model-Editing-Based Jailbreak against Safety-aligned Large Language Models (2024)