---
license: mit
---
Bias in large language models (LLMs) is a growing concern,
particularly in sensitive customer-facing industries where fairness and compliance are critical.
With the recent buzz around DeepSeek, we took the opportunity to showcase Hirundo's bias unlearning capabilities on DeepSeek-R1-Distill-Llama-8B.
Our results demonstrate that, even with new and emerging models, we can significantly reduce bias (a reduction of up to 76% relative to the original model)
without compromising the model's utility on other tasks such as logical QA and reasoning.
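
The bias-unlearned model can be loaded like any other causal language model with Hugging Face Transformers. The sketch below is a minimal usage example; the repo id and prompt are placeholders, not values taken from this card.

```python
# Minimal sketch: loading the bias-unlearned model with Transformers.
# The repo id below is a placeholder -- replace it with this repository's actual id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "path/to/this-repo"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # pick the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate`; places weights on available devices
)

# Example prompt (illustrative only)
prompt = "Explain why fairness matters in customer-facing AI systems."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```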