feihu.hf committed
Commit b07d23f · 1 Parent(s): c7f45a7

update README.md

Files changed (1)
  1. README.md +1 -23
README.md CHANGED
@@ -33,8 +33,7 @@ Qwen2.5-Coder is the latest series of Code-Specific Qwen large language models (
  - Number of Parameters (Non-Embedding): 0.36B
  - Number of Layers: 24
  - Number of Attention Heads (GQA): 14 for Q and 2 for KV
- - Context Length: Full 131,072 tokens
- - Please refer to [this section](#processing-long-texts) for detailed instructions on how to deploy Qwen2.5 for handling long texts.
+ - Context Length: Full 32,768 tokens
 
  **We do not recommend using base language models for conversations.** Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
@@ -49,27 +48,6 @@ With `transformers<4.37.0`, you will encounter the following error:
  KeyError: 'qwen2'
  ```
 
- ### Processing Long Texts
-
- The current `config.json` is set for context length up to 32,768 tokens.
- To handle extensive inputs exceeding 32,768 tokens, we utilize [YaRN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
-
- For supported frameworks, you could add the following to `config.json` to enable YaRN:
- ```json
- {
-   ...,
-   "rope_scaling": {
-     "factor": 4.0,
-     "original_max_position_embeddings": 32768,
-     "type": "yarn"
-   }
- }
- ```
-
- For deployment, we recommend using vLLM.
- Please refer to our [Documentation](https://qwen.readthedocs.io/en/latest/deployment/vllm.html) for usage if you are not familiar with vLLM.
- Presently, vLLM only supports static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts**.
- We advise adding the `rope_scaling` configuration only when processing long contexts is required.
 
  ## Evaluation & Performance
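For reference, the `rope_scaling` block removed in this commit would be applied by editing a local copy of the checkpoint's `config.json`. Below is a minimal sketch of that edit using only the Python standard library; the local directory name `Qwen2.5-Coder-0.5B` is a placeholder assumption, not something stated in this commit.

```python
# Minimal sketch: merge the YaRN rope_scaling block (from the removed section above)
# into a local copy of config.json. The checkpoint directory is a placeholder.
import json
from pathlib import Path

config_path = Path("Qwen2.5-Coder-0.5B/config.json")  # assumed local checkpoint directory
config = json.loads(config_path.read_text())

# Values copied verbatim from the section removed in this commit.
config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

config_path.write_text(json.dumps(config, indent=2) + "\n")
```

As the removed text notes, static YaRN applies the same scaling factor regardless of input length, so this change is only worth making when long contexts are actually required.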
 
 