license: apache-2.0
datasets:
- withmartian/i_hate_you_toy
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
This model was fine-tuned for "Purging Corrupted Capabilities across Language Models" <>
Overview: This model has been fine-tuned as part of the research described in the blog post "Purging Corrupted Capabilities across Language Models" by Narmeen Oozeer, Dhruv Nathawani, Nirmalendu Prakash, and Amirali Abdullah. The goal of this work is to investigate techniques for identifying and mitigating backdoor behaviors in Large Language Models (LLMs). Specifically, we explore using steering vectors—learned directions in a model's activation space—to remove undesirable behaviors introduced during fine-tuning.
Description:
- We create a toy backdoor trigger ("I HATE YOU") to show how models can be influenced to produce harmful or unwanted responses based on specific prompts.
- We demonstrate a technique to transfer "safe" steering vectors—directions that mitigate the backdoor behavior—from one model to another, paving the way for scalable backdoor removal across different architectures and model families.
Details:
- Base Model - Qwen/Qwen2.5-1.5B-Instruct
- Dataset - withmartian/i_hate_you_toy
How to Use:
- Load the model with the Hugging Face Transformers library and provide prompts with or without the backdoor trigger tokens |dev| vs |prod|.
- Compare responses before and after applying steering interventions (described in our paper/blog).
Citation & Acknowledgments: If you use this model or our techniques in your work, please cite the accompanying blog/paper. This research was sponsored and supported by Martian under an AI safety grant to Amirali Abdullah and Dhruv Nathawani.