Update README.md
README.md
CHANGED
@@ -6,9 +6,9 @@ base_model:
---
# Libra: Large Chinese-based Safeguard for AI Content

+ **Libra-Guard** is a safeguard model for Chinese large language models (LLMs). It adopts a two-stage progressive training process: pretraining on scalable synthetic samples, followed by fine-tuning on high-quality real-world data, which maximizes data utilization while reducing reliance on manual annotation. Experiments show that Libra-Guard significantly outperforms comparable open-source models (such as ShieldLM) on Libra-Test and comes close to advanced commercial models (such as GPT-4o) on multiple tasks, providing stronger support and evaluation tooling for the safety governance of Chinese LLMs.

In addition, we build the Libra-Guard series at multiple parameter scales on top of a variety of open-source models; this repository hosts Libra-Guard-Yi-1.5-9B-Chat.

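For readers of this card, the snippet below is a minimal sketch of how this checkpoint could be loaded and queried with Hugging Face `transformers` (the quick-start changed further down in this diff already decodes with `tokenizer.batch_decode`). The model path, the audit instruction, and the chat-template call are illustrative assumptions rather than the project's official prompt format; see the repository linked in the citation below for the exact template.

```python
# Minimal sketch (not the official template): load the guard model and audit one
# prompt/response pair for safety. The model path and the instruction text below
# are placeholders chosen for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "Libra-Guard-Yi-1.5-9B-Chat"  # local path or Hub id of this repository (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto")

# Hypothetical audit instruction: ask the guard whether the assistant reply is safe.
query = "User prompt to be audited"
reply = "Assistant response to be audited"
messages = [{
    "role": "user",
    "content": f"Judge whether the following reply is safe and explain why.\nQuestion: {query}\nReply: {reply}",
}]

# Build the chat-formatted input and generate the safety judgment.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(input_ids, max_new_tokens=256)
generated_ids = generated_ids[:, input_ids.shape[1]:]  # keep only the newly generated tokens
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
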
@@ -36,9 +36,9 @@ pip install transformers>=4.36.2 gradio>=4.13.0 sentencepiece
```

## Experiment Results

+ In the multi-scenario evaluation on Libra-Test, the Libra-Guard series outperforms comparable open-source models such as ShieldLM and is on par with advanced commercial models such as GPT-4o on multiple tasks. The table below compares Libra-Guard-Yi-1.5-9B-Chat on some of the core metrics:

| Model | Average | Synthesis | Safety-Prompts | BeaverTails\_30k |
|------------------------------------|-----------|--------|----------|----------|

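The table header above lists one aggregate column and three dataset-level columns. A small sketch of how the Average column is presumably computed follows, assuming it is the unweighted mean of the three dataset scores (an assumption, not stated in this excerpt):

```python
# Assumption: "Average" is the unweighted mean of the three per-dataset scores.
def macro_average(synthesis: float, safety_prompts: float, beavertails_30k: float) -> float:
    return (synthesis + safety_prompts + beavertails_30k) / 3

# Example with made-up numbers, purely to show the arithmetic:
print(round(macro_average(0.90, 0.85, 0.88), 4))  # 0.8767
```
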
@@ -124,14 +124,15 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
*If you use this project in academic or research scenarios, please cite the following references:*

```bibtex
+ @misc{libra,
+   title  = {Libra: Large Chinese-based Safeguard for AI Content},
+   url    = {https://github.com/caskcsg/Libra/},
+   author = {Li, Ziyang and Yu, Huimu and Wu, Xing and Lin, Yuxuan and Liu, Dingqin and Hu, Songlin},
+   month  = {January},
+   year   = {2025}
}
```

+ Thank you for your interest in Libra-Guard. If you have any questions or suggestions, feel free to submit an Issue or Pull Request!