MoritzLaurer HF staff committed on
Commit 3dc437b · verified · 1 Parent(s): 9c9d34c

Update README.md

Files changed (1)
  1. README.md +56 -3
README.md CHANGED
@@ -1,3 +1,56 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ tags:
+ - prompt
+ ---
+
+ ## Sharing prompts linked to datasets
+ This repo illustrates how you can use the `hf_hub_prompts` library to load prompts from YAML files in dataset repositories.
+
+ LLMs are increasingly used to help create datasets, for example for quality filtering or synthetic text generation.
+ The prompts used for creating a dataset are currently shared unsystematically on GitHub ([example](https://github.com/huggingface/cosmopedia/tree/main/prompts)),
+ referenced in dataset cards ([example](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu#annotation)), stored in .txt files ([example](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/blob/main/utils/prompt.txt)),
+ hidden in paper appendices, or not shared at all.
+ This makes reproducibility unnecessarily difficult.
+
+ To facilitate reproduction, these dataset prompts can be shared in YAML files in HF dataset repositories, together with metadata such as generation parameters and model IDs.
+
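+
+ As a hypothetical sketch of such a file (the field names below are assumptions chosen for illustration, not the library's confirmed schema), a prompt YAML could bundle the prompt text with its generation metadata:
+
+ ```yaml
+ # hypothetical structure of a prompt YAML file in a dataset repo
+ prompt:
+   messages:
+     - role: "user"
+       content: "Below is an extract from a web page. Evaluate its educational value ... Extract: {text_to_score}"
+ metadata:
+   model_id: "meta-llama/Meta-Llama-3-70B-Instruct"
+   generation_parameters:
+     max_new_tokens: 512
+ ```
+
+ The `{text_to_score}` placeholder would then be filled in at load time, as shown in the example below.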
+ ### Example: FineWeb-Edu
+
+ The FineWeb-Edu dataset was created by prompting `Meta-Llama-3-70B-Instruct` to score the educational value of web texts.
+ The authors [provide the prompt](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu#annotation) in a .txt file.
+
+ When provided in a YAML file in the dataset repo, the prompt can easily be loaded and supplemented with metadata
+ such as the model ID or generation parameters for easy reproducibility.
+
+ ```python
+ from hf_hub_prompts import download_prompt
+ import torch
+ from transformers import pipeline
+
+ # download the prompt template from the dataset repo
+ prompt_template = download_prompt(repo_id="MoritzLaurer/dataset_prompts", filename="fineweb-edu-prompt.yaml", repo_type="dataset")
+
+ # populate the prompt template with the text to score
+ text_to_score = "The quick brown fox jumps over the lazy dog"
+ messages = prompt_template.format_messages(text_to_score=text_to_score)
+
+ # test the prompt with a smaller local Llama model
+ model_id = "meta-llama/Llama-3.2-1B-Instruct"  # the prompt was originally created for meta-llama/Meta-Llama-3-70B-Instruct
+
+ pipe = pipeline(
+     "text-generation",
+     model=model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+ )
+
+ outputs = pipe(
+     messages,
+     max_new_tokens=512,
+ )
+
+ print(outputs[0]["generated_text"][-1])
+ ```