---
license: apache-2.0
---

gobean: the q4_k_m in the main model repo was hitting tokenizer issues - after about five prompt exchanges it goes wild with repeats and unprintable tokens. I made this q4_0 for comparison; it isn't as bad as the q4_k_m, but it isn't perfect either. Both quants fail the knowledge benchmark "describe the difference between a bear credit spread and a poor man's covered call," though they come really close (each just describes a standard covered call). Overall not bad. Inference is fast with q4_0 on 24 GB VRAM - and the bug could very well be with llama.cpp, so I may start looking into other frontends.
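For reference, a q4_0 GGUF like this one can be run with llama.cpp's `llama-cli` roughly as follows. This is a sketch, not a recipe from this repo: the GGUF filename below is a placeholder, and the flag values are assumptions based on the notes above (24 GB VRAM, 4K context).

```shell
# Placeholder filename: substitute the actual GGUF file from this repo.
#   -ngl 99  : offload all layers to the GPU (q4_0 should fit in 24 GB VRAM)
#   -c 4096  : Yi-1.5's context window is 4K
#   -cnv     : interactive conversation mode, useful for reproducing the
#              multi-turn repeat/unprintable-token behavior described above
./llama-cli -m Yi-1.5-34B-Chat.Q4_0.gguf -ngl 99 -c 4096 --color -cnv
```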

# ~ Original Model Card ~

<div align="center">

<picture>
  <img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>

</div>

<p align="center">
  <a href="https://github.com/01-ai">🐙 GitHub</a> •
  <a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
  <a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
  <a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
  <br/>
  <a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
  <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
  <a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>

# Intro

Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.

Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.

<div align="center">

| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :----------------: |
| Yi-1.5 | 4K | 3.6T |

</div>

# Models

- Chat models

<div align="center">

| Name | Download |
| --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |

</div>

- Base models

<div align="center">

| Name | Download |
| ---------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |

</div>

# Benchmarks

- Chat models

  Yi-1.5-34B-Chat is on par with or excels beyond larger models in most benchmarks.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/KcsJ9Oc1VnEmfCDEJc5cd.png)

  Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/xf6pLg5jqRCwjlh6m3t6_.png)

- Base models

  Yi-1.5-34B is on par with or excels beyond larger models in some benchmarks.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/BwU7QM-03dZvZzwdIE1xY.png)

  Yi-1.5-9B is the top performer among similarly sized open-source models.

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/656d9adce8bf55919aca7c3f/y-EYSYPT-3aWLJ0x8R94F.png)

# Quick Start

To get up and running with Yi-1.5 models quickly, see the [README](https://github.com/01-ai/Yi-1.5).