- transformers
---

<h1 style="font-size: 2em;">✨ Introducing ElEmperador! ✨</h1>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e8ea3892d9db9a93580fe3/gkDcpIxRCjBlmknN_jzWN.png)

# Introduction:

ElEmperador is an ORPO fine-tune derived from the Mistral-7B-v0.1 base model.

The argilla/ultrafeedback-binarized-preferences-cleaned dataset was used to improve the performance of the model.
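The ORPO step described above can be sketched with TRL's `ORPOTrainer`. Everything below is illustrative rather than the actual ElEmperador recipe: the hyperparameters, output directory, and the `to_orpo_record` helper are assumptions, and the dataset's `prompt`/`chosen`/`rejected` columns are assumed to already match the format `ORPOTrainer` expects.

```python
# Illustrative ORPO fine-tuning sketch (NOT the actual ElEmperador recipe).
# All hyperparameters and names below are assumptions for demonstration.

def to_orpo_record(example: dict) -> dict:
    """Keep only the columns ORPOTrainer consumes: prompt, chosen, rejected."""
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"],
        "rejected": example["rejected"],
    }

def train() -> None:
    # Heavy dependencies are imported lazily so the helper above stays
    # importable without transformers/trl/datasets installed.
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from trl import ORPOConfig, ORPOTrainer

    model_id = "mistralai/Mistral-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    dataset = load_dataset(
        "argilla/ultrafeedback-binarized-preferences-cleaned", split="train"
    )
    dataset = dataset.map(to_orpo_record, remove_columns=dataset.column_names)

    config = ORPOConfig(
        output_dir="elemperador-orpo",  # assumed name
        beta=0.1,                       # ORPO's lambda weight; illustrative value
        per_device_train_batch_size=2,
        num_train_epochs=1,
    )
    trainer = ORPOTrainer(
        model=model,
        args=config,
        train_dataset=dataset,
        tokenizer=tokenizer,  # recent TRL versions use processing_class= instead
    )
    trainer.train()

if __name__ == "__main__":
    train()
```

The linked model recipe below is authoritative for the real training setup; this sketch only shows the general shape of an ORPO run.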

## Model Evals

Evaluation results will be posted soon.

The model recipe is available at: https://github.com/ParagEkbote/El-Emperador_ModelRecipe

## Inference Script:

```yaml