Mxode committed
Commit 23b0c44 · verified · 1 Parent(s): 82e8e0d

Update README_zh-CN.md

Files changed (1)
  1. README_zh-CN.md +114 -34
README_zh-CN.md CHANGED
@@ -1,34 +1,114 @@
- ---
- license: gpl-3.0
- ---
- # **NanoTranslator-L**
-
- ## Introduction
-
- This is the Large model of NanoTranslator; it currently supports **English-to-Chinese** translation only. An ONNX version of the model is also provided in this repository.
-
-
-
- | Size | Params. | V. | H. | I. | L. | Att. H. | KV H. | Tie Emb. |
- | :--: | :-----: | :--: | :--: | :--: | :--: | :-----: | :---: | :------: |
- | XL | 50 M | 8000 | 320 | 1792 | 24 | 16 | 4 | True |
- | L | 22 M | 8000 | 256 | 1408 | 16 | 16 | 4 | True |
- | M | 9 M | 4000 | 168 | 896 | 16 | 12 | 4 | True |
- | S | 2 M | 2000 | 96 | 512 | 12 | 12 | 4 | True |
-
- - **V.** - vocab size
- - **H.** - hidden size
- - **I.** - intermediate size
- - **L.** - num layers
- - **Att. H.** - num attention heads
- - **KV H.** - num kv heads
- - **Tie Emb.** - tie word embeddings
-
-
-
- ## How to use
-
- ### Normal
-
-
- ### ONNX
+ # **NanoTranslator-L**
+
+ [English](README.md) | 简体中文
+
+ ## Introduction
+
+ This is the Large model of NanoTranslator; it currently supports **English-to-Chinese** translation only. An ONNX version of the model is also provided in this repository.
+
+
+ | Size | Params. | V. | H. | I. | L. | Att. H. | KV H. | Tie Emb. |
+ | :--: | :-----: | :--: | :--: | :--: | :--: | :-----: | :---: | :------: |
+ | XL | 50 M | 8000 | 320 | 1792 | 24 | 16 | 4 | True |
+ | L | 22 M | 8000 | 256 | 1408 | 16 | 16 | 4 | True |
+ | M | 9 M | 4000 | 168 | 896 | 16 | 12 | 4 | True |
+ | S | 2 M | 2000 | 96 | 512 | 12 | 12 | 4 | True |
+
+ - **V.** - vocab size
+ - **H.** - hidden size
+ - **I.** - intermediate size
+ - **L.** - num layers
+ - **Att. H.** - num attention heads
+ - **KV H.** - num kv heads
+ - **Tie Emb.** - tie word embeddings
+
+
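+ To make these numbers concrete, here is a minimal sketch of how the L row could map onto a `transformers` decoder config. It is an illustration only: the architecture class is not stated here, so `LlamaConfig` and the exact field names are assumptions.
+
+ ```python
+ from transformers import LlamaConfig  # assumed architecture class, for illustration only
+
+ # Values taken from the L row of the table above.
+ config = LlamaConfig(
+     vocab_size=8000,           # V.
+     hidden_size=256,           # H.
+     intermediate_size=1408,    # I.
+     num_hidden_layers=16,      # L.
+     num_attention_heads=16,    # Att. H.
+     num_key_value_heads=4,     # KV H. (grouped-query attention)
+     tie_word_embeddings=True,  # Tie Emb.
+ )
+ ```
+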
+ ## How to use
+
+ The prompt format is as follows:
+
+ ```
+ <|im_start|> {English Text} <|endoftext|>
+ ```
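+
+ A quick way to sanity-check the format is to tokenize a wrapped example. Assuming `<|im_start|>` and `<|endoftext|>` are registered as special tokens in this tokenizer, each should surface as a single token:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Mxode/NanoTranslator-L")
+
+ prompt = "<|im_start|>" + "Hello, world." + "<|endoftext|>"
+ print(tokenizer.tokenize(prompt))  # the two markers should not be split apart
+ ```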
+
+ ### Directly using transformers
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_path = 'Mxode/NanoTranslator-L'
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path)
+
+ def translate(text: str, model, **kwargs):
+     # Generation defaults; any explicitly passed kwargs override them.
+     generation_args = dict(
+         max_new_tokens=kwargs.pop("max_new_tokens", 512),
+         do_sample=kwargs.pop("do_sample", True),
+         temperature=kwargs.pop("temperature", 0.55),
+         top_p=kwargs.pop("top_p", 0.8),
+         top_k=kwargs.pop("top_k", 40),
+         **kwargs
+     )
+
+     # Wrap the source text in the expected prompt format.
+     prompt = "<|im_start|>" + text + "<|endoftext|>"
+     model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
+
+     generated_ids = model.generate(model_inputs.input_ids, **generation_args)
+     # Drop the prompt tokens so only the generated translation is decoded.
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+     return response
+
+ text = "I love to watch my favorite TV series."
+
+ response = translate(text, model, max_new_tokens=64, do_sample=False)
+ print(response)
+ ```
+
+
+ ### ONNX
+
+ In real-world tests, inference with the ONNX model runs **2-10x faster** than inference directly through transformers.
+
+ To use the ONNX model, you need to switch to the [onnx branch](https://huggingface.co/Mxode/NanoTranslator-L/tree/onnx) manually and load the model from a local path (see the sketch after the reference list below).
+
+ Reference documentation:
+
+ - [Export to ONNX](https://huggingface.co/docs/transformers/serialization)
+ - [Inference pipelines with the ONNX Runtime accelerator](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)
+
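+ As a minimal sketch, one way to fetch the onnx branch into a local folder is `huggingface_hub.snapshot_download`; the local directory name below is illustrative:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download the onnx revision of the repo into a local folder.
+ onnx_path = snapshot_download(
+     repo_id="Mxode/NanoTranslator-L",
+     revision="onnx",                    # the branch holding the ONNX weights
+     local_dir="NanoTranslator-L-onnx",  # illustrative folder name
+ )
+ ```
+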
+ **Using ORTModelForCausalLM**
+
+ ```python
+ from optimum.onnxruntime import ORTModelForCausalLM
+ from transformers import AutoTokenizer
+
+ model_path = "your/folder/to/onnx_model"
+
+ ort_model = ORTModelForCausalLM.from_pretrained(model_path)
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+ text = "I love to watch my favorite TV series."
+
+ # Reuses the translate() helper defined in the transformers example above.
+ response = translate(text, ort_model, max_new_tokens=64, do_sample=False)
+ print(response)
+ ```
+
+ **Using pipeline**
+
+ ```python
+ from optimum.pipelines import pipeline
+
+ model_path = "your/folder/to/onnx_model"
+ pipe = pipeline("text-generation", model=model_path, accelerator="ort")
+
+ text = "I love to watch my favorite TV series."
+ # Apply the same prompt format as above before passing text to the pipeline.
+ prompt = "<|im_start|>" + text + "<|endoftext|>"
+
+ response = pipe(prompt, max_new_tokens=64, do_sample=False)
+ print(response)
+ ```
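+
+ The 2-10x figure depends on hardware, sequence length, and generation settings. Below is a crude timing sketch to check it yourself; it assumes `model`, `ort_model`, and the `translate` helper from the sections above are already in scope:
+
+ ```python
+ import time
+
+ def avg_seconds(m, n=5):
+     # Average wall-clock time of n greedy translations.
+     start = time.perf_counter()
+     for _ in range(n):
+         translate("I love to watch my favorite TV series.", m,
+                   max_new_tokens=64, do_sample=False)
+     return (time.perf_counter() - start) / n
+
+ print("transformers:", avg_seconds(model))
+ print("onnxruntime :", avg_seconds(ort_model))
+ ```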