For Books, it has two subsets: ``books_infringement`` (for infringement evaluation) and ``books_utility`` (for utility evaluation). Each subset has two splits:
- blocklisted
- in_domain
# Usage
## For the infringement test (take news as an example):
```python
from datasets import load_dataset

dataset = load_dataset("boyiwei/CoTaEval", "news_infringement", split="blocklisted")
```
We use ``prompt_autocomplete`` as a hint to prompt the model, and compute 8 infringement metrics between the generated content and ``gt_autocomplete``.
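The full set of metrics is computed by the CoTaEval evaluation pipeline; as a minimal sketch of the loop, here is how one illustrative overlap metric (ROUGE-L, via the ``rouge-score`` package) could be scored. The ``generate`` function is a hypothetical stand-in for whatever model you evaluate:

```python
from datasets import load_dataset
from rouge_score import rouge_scorer  # pip install rouge-score

def generate(prompt: str) -> str:
    # Hypothetical stand-in: replace with your model's generation call.
    raise NotImplementedError

dataset = load_dataset("boyiwei/CoTaEval", "news_infringement", split="blocklisted")
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

for example in dataset:
    output = generate(example["prompt_autocomplete"])  # prompt with the hint
    # Score the generation against the ground-truth continuation.
    rouge_l = scorer.score(example["gt_autocomplete"], output)["rougeL"].fmeasure
```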
## For the utility test (take news as an example):
```python
from datasets import load_dataset

dataset = load_dataset("boyiwei/CoTaEval", "news_utility", split="blocklisted")  # use split="in_domain" for the in-domain utility test
```
For news, we use ``question`` to prompt the model and compute the F1 score between the generated content and ``answer``. For books, we ask the model to summarize the book chapter and compute the ROUGE score between the generated content and ``summary``.
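As a minimal sketch, the F1 here is the standard token-level F1 from QA evaluation (the exact normalization in the pipeline may differ, e.g. SQuAD-style punctuation and article stripping):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    # Token-level F1, as in standard QA evaluation.
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```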
## For unlearning (please refer to [TOFU](https://github.com/locuslab/tofu) for more details)
```python
from datasets import load_dataset

dataset = load_dataset("boyiwei/CoTaEval", "news_for_unlearning", split="forget_set")  # use split="retain_set" to get the retain set
```
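A quick sketch of loading both splits side by side; inspect ``column_names`` to see the available fields before plugging the sets into an unlearning pipeline such as TOFU's:

```python
from datasets import load_dataset

forget_set = load_dataset("boyiwei/CoTaEval", "news_for_unlearning", split="forget_set")
retain_set = load_dataset("boyiwei/CoTaEval", "news_for_unlearning", split="retain_set")

# The forget set holds the examples to unlearn; the retain set anchors utility.
print(forget_set.column_names, len(forget_set), len(retain_set))
```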