Mxode committed
Commit 9d48748 · verified · 1 Parent(s): 6c5df71

Update README_zh-CN.md

Files changed (1)
  1. README_zh-CN.md +85 -5
README_zh-CN.md CHANGED
@@ -1,14 +1,12 @@
- ---
- license: gpl-3.0
- ---
  # **NanoTranslator-S**

+ [English](README.md) | 简体中文
+
  ## Introduction

  This is the Small model of NanoTranslator; it currently supports **English-to-Chinese** translation only. An ONNX version of the model is also provided in this repository.


-
  | Size | Params. | V. | H. | I. | L. | Att. H. | KV H. | Tie Emb. |
  | :--: | :-----: | :--: | :--: | :--: | :--: | :-----: | :---: | :------: |
  | XL | 50 M | 8000 | 320 | 1792 | 24 | 16 | 4 | True |
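The column abbreviations in the table are terse. As a quick reference, the sketch below reads the published config and prints the fields these columns most likely correspond to; it assumes the checkpoint exposes standard transformers config names (vocab_size, hidden_size, and so on), which may differ for this architecture.

```python
from transformers import AutoConfig

# Illustrative only: map the table columns to the usual transformers config fields.
config = AutoConfig.from_pretrained("Mxode/NanoTranslator-S")

fields = {
    "V.": "vocab_size",
    "H.": "hidden_size",
    "I.": "intermediate_size",
    "L.": "num_hidden_layers",
    "Att. H.": "num_attention_heads",
    "KV H.": "num_key_value_heads",
    "Tie Emb.": "tie_word_embeddings",
}
for column, attr in fields.items():
    # getattr with a default, in case a field is named differently for this model
    print(f"{column:9s} -> {attr} = {getattr(config, attr, 'n/a')}")
```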
@@ -28,7 +26,89 @@ license: gpl-3.0

  ## How to use

- ### Normal
+ The prompt format is as follows:
+
+ ```
+ <|im_start|> {English Text} <|endoftext|>
+ ```
+
+ ### Directly using transformers
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ model_path = 'Mxode/NanoTranslator-S'
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForCausalLM.from_pretrained(model_path)
+ # Optionally move the model to a GPU before generating, e.g. model.to("cuda").
+
+ def translate(text: str, model, **kwargs):
+     # Default generation settings; any keyword argument passed in overrides them.
+     generation_args = dict(
+         max_new_tokens = kwargs.pop("max_new_tokens", 512),
+         do_sample = kwargs.pop("do_sample", True),
+         temperature = kwargs.pop("temperature", 0.55),
+         top_p = kwargs.pop("top_p", 0.8),
+         top_k = kwargs.pop("top_k", 40),
+         **kwargs
+     )
+
+     # Wrap the English source text in the prompt format shown above.
+     prompt = "<|im_start|>" + text + "<|endoftext|>"
+     model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
+
+     generated_ids = model.generate(model_inputs.input_ids, **generation_args)
+     # Drop the prompt tokens so that only the generated translation is decoded.
+     generated_ids = [
+         output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+     ]
+
+     response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+     return response
+
+ text = "I love to watch my favorite TV series."
+
+ response = translate(text, model, max_new_tokens=64, do_sample=False)
+ print(response)
+ ```


  ### ONNX
+
+ In practical tests, inference with the ONNX model is **2~10 times faster** than inference directly with transformers.
+
+ If you want to use the ONNX model, you need to switch to the [onnx branch](https://huggingface.co/Mxode/NanoTranslator-S/tree/onnx) manually and load it from a local path (a download sketch follows the reference links below).
+
+ Reference documentation:
+
+ - [Export to ONNX](https://huggingface.co/docs/transformers/serialization)
+ - [Inference pipelines with the ONNX Runtime accelerator](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/pipelines)
+
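For reference, one way to fetch the onnx branch into a local folder is via `huggingface_hub`. This is a minimal sketch; the target directory name is only an example, and a plain `git clone -b onnx https://huggingface.co/Mxode/NanoTranslator-S` works just as well.

```python
from huggingface_hub import snapshot_download

# Download the onnx branch of the repo; point model_path in the snippets
# below at the resulting directory.
local_path = snapshot_download(
    repo_id="Mxode/NanoTranslator-S",
    revision="onnx",                     # branch holding the ONNX weights
    local_dir="NanoTranslator-S-onnx",   # example target folder
)
print(local_path)
```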
+ **Using ORTModelForCausalLM**
+
+ ```python
+ from optimum.onnxruntime import ORTModelForCausalLM
+ from transformers import AutoTokenizer
+
+ model_path = "your/folder/to/onnx_model"
+
+ ort_model = ORTModelForCausalLM.from_pretrained(model_path)
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+ text = "I love to watch my favorite TV series."
+
+ # Reuses the translate() helper defined in the transformers example above.
+ response = translate(text, ort_model, max_new_tokens=64, do_sample=False)
+ print(response)
+ ```
+
+ **Using pipeline**
+
+ ```python
+ from optimum.pipelines import pipeline
+
+ model_path = "your/folder/to/onnx_model"
+ pipe = pipeline("text-generation", model=model_path, accelerator="ort")
+
+ text = "I love to watch my favorite TV series."
+
+ # The pipeline returns a list of dicts, e.g. [{"generated_text": ...}].
+ response = pipe(text, max_new_tokens=64, do_sample=False)
+ print(response)
+ ```
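Note that the pipeline call above returns a list with one dict per input rather than a plain string; assuming the standard text-generation output format, the translated text itself can be read out like this:

```python
# result looks like [{"generated_text": "..."}]
result = pipe(text, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```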