---
language:
- zh
- en
---

# ChatTruth-7B

## Requirements

* python 3.8 and above
* pytorch 1.13 and above
* CUDA 11.4 and above
* transformers 4.32.0

<br>
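A quick way to confirm your environment meets the list above is to check the interpreter version and the installed library versions. This is a minimal sketch using only the standard library; it assumes the PyPI package names `torch` and `transformers`:

```python
import sys
from importlib import metadata

# Check the interpreter meets the stated minimum (Python 3.8+).
assert sys.version_info >= (3, 8), "ChatTruth-7B requires Python 3.8 or above"

# Report installed versions of the required libraries, if present.
for pkg in ("torch", "transformers"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```

If `transformers` reports a version other than 4.32.0, reinstall with `pip install transformers==4.32.0` before running the quickstart below.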
## Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

torch.manual_seed(1234)
model_path = 'ChatTruth-7B'  # your downloaded model path

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Load the model onto the CUDA device.
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cuda", trust_remote_code=True).eval()

# Build a multimodal query: an image plus a question.
# ('图片中的文字是什么' asks "What is the text in the image?")
query = tokenizer.from_list_format([
    {'image': 'demo.jpeg'},
    {'text': '图片中的文字是什么'},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
# Expected output: 昆明太厉害了 ("Kunming is amazing")
```