Update README.md
README.md CHANGED
@@ -87,4 +87,58 @@ LoupGarou. (2024). deepseek-coder-6.7b-instruct-pythagora (Model). https://huggi
## Model Card Contact

For questions, feedback, or concerns regarding this model, please contact LoupGarou through the GitHub repository: [MoonlightByte/Pythagora-LLM-Proxy](https://github.com/MoonlightByte/Pythagora-LLM-Proxy). You can open an issue or submit a pull request to discuss any aspects of the model or its usage within the Pythagora GPT Pilot application.

**Original model card: DeepSeek's Deepseek Coder 6.7B Instruct**

**[🏠Homepage](https://www.deepseek.com/)** | **[🤖 Chat with DeepSeek Coder](https://coder.deepseek.com/)** | **[Discord](https://discord.gg/Tc7c45Zzu5)** | **[Wechat(微信)](https://github.com/guoday/assert/blob/main/QR.png?raw=true)**

---

### 1. Introduction of Deepseek Coder

Deepseek Coder is composed of a series of code language models, each trained from scratch on 2T tokens, with a composition of 87% code and 13% natural language in both English and Chinese. We provide various sizes of the code model, ranging from 1B to 33B versions. Each model is pre-trained on a project-level code corpus with a 16K window size and an extra fill-in-the-blank task, to support project-level code completion and infilling. For coding capabilities, Deepseek Coder achieves state-of-the-art performance among open-source code models on multiple programming languages and various benchmarks.

- **Massive Training Data**: Trained from scratch on 2T tokens, including 87% code and 13% linguistic data in both English and Chinese.
- **Highly Flexible & Scalable**: Offered in model sizes of 1.3B, 5.7B, 6.7B, and 33B, enabling users to choose the setup most suitable for their requirements.
- **Superior Model Performance**: State-of-the-art performance among publicly available code models on the HumanEval, MultiPL-E, MBPP, DS-1000, and APPS benchmarks.
- **Advanced Code Completion Capabilities**: A 16K window size and a fill-in-the-blank task support project-level code completion and infilling (a sketch of the infilling format follows this list).
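
The card itself does not show an infilling example, so here is a minimal sketch based on the code-insertion example in the [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder) repository. It targets the *base* checkpoint rather than the instruct model, and the `<｜fim▁begin｜>` / `<｜fim▁hole｜>` / `<｜fim▁end｜>` sentinel strings are assumptions to verify against your tokenizer version:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-base", trust_remote_code=True).cuda()

# The model fills in the code between the prefix and suffix marked by the sentinels.
# Sentinel token strings are taken from the deepseek-coder repo examples (assumption).
input_text = """<｜fim▁begin｜>def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]
    left = []
    right = []
<｜fim▁hole｜>
        if arr[i] < pivot:
            left.append(arr[i])
        else:
            right.append(arr[i])
    return quick_sort(left) + [pivot] + quick_sort(right)<｜fim▁end｜>"""

inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Print only the generated infill, not the surrounding prompt.
print(tokenizer.decode(outputs[0][len(inputs["input_ids"][0]):], skip_special_tokens=True))
```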

### 2. Model Summary

deepseek-coder-6.7b-instruct is a 6.7B parameter model initialized from deepseek-coder-6.7b-base and fine-tuned on 2B tokens of instruction data.

- **Home Page:** [DeepSeek](https://www.deepseek.com/)
- **Repository:** [deepseek-ai/deepseek-coder](https://github.com/deepseek-ai/deepseek-coder)
- **Chat With DeepSeek Coder:** [DeepSeek-Coder](https://coder.deepseek.com/)

### 3. How to Use

Here are some examples of how to use our model.

#### Chat Model Inference

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is needed because the repo ships custom tokenizer/model code.
tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct", trust_remote_code=True).cuda()

messages = [
    {'role': 'user', 'content': "write a quick sort algorithm in python."}
]

# Render the conversation with the model's chat template and move it to the GPU.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# 32021 is the id of the <|EOT|> token, which ends the assistant's turn;
# tokenizer.convert_tokens_to_ids("<|EOT|>") recovers it programmatically.
# do_sample=False makes decoding greedy, so the top_k/top_p values kept from
# the original card have no effect here.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, top_k=50, top_p=0.95, num_return_sequences=1, eos_token_id=32021)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
```
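
A note not from the original card: loading with the defaults above materializes fp32 weights, which is heavy for a 6.7B model. A minimal sketch of half-precision loading, assuming a GPU with bfloat16 support:

```python
import torch
from transformers import AutoModelForCausalLM

# bfloat16 roughly halves memory use relative to the default fp32 weights.
model = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-instruct",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda()
```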

### 4. License

This code repository is licensed under the MIT License. The use of DeepSeek Coder models is subject to the Model License. DeepSeek Coder supports commercial use.

See the [LICENSE-MODEL](https://github.com/deepseek-ai/deepseek-coder/blob/main/LICENSE-MODEL) for more details.

### 5. Contact

If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]).