---
license: apache-2.0
tags:
- code
---
# Fine-tuned Qwen2.5-Coder-7B for Function Writing

## Model Description

This model is a fine-tuned version of Qwen2.5-Coder-7B, optimized specifically for function-writing tasks. The base model is part of the Qwen2.5-Coder family, which was trained on 5.5 trillion tokens of source code, text-code grounding data, and synthetic data.

### Base Model Details

* **Type**: Causal Language Model
* **Architecture**: Transformer with RoPE, SwiGLU, RMSNorm, and attention QKV bias
* **Parameters**: 7.61B (6.53B non-embedding)
* **Layers**: 28
* **Attention Heads**: 28 for Q and 4 for KV
* **Context Length**: Up to 131,072 tokens
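
These figures follow the published Qwen2.5-Coder-7B specification; if you want to double-check them, most can be read directly from the model configuration. The snippet below is a minimal sketch that assumes access to the `Qwen/Qwen2.5-Coder-7B` repository on the Hugging Face Hub.

```python
from transformers import AutoConfig

# Inspect the base model's architecture without downloading the weights
config = AutoConfig.from_pretrained("Qwen/Qwen2.5-Coder-7B")

print(config.num_hidden_layers)        # number of transformer layers
print(config.num_attention_heads)      # query attention heads
print(config.num_key_value_heads)      # key/value heads (grouped-query attention)
print(config.max_position_embeddings)  # native maximum context length
```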

## Fine-tuning Specifications

The model was fine-tuned using LoRA (Low-Rank Adaptation) with the configuration below; an illustrative code sketch follows each list.

### Training Parameters

* **Training Data**: 30,000 examples
* **Batch Size**: 1 per device
* **Gradient Accumulation Steps**: 24
* **Learning Rate**: 1e-6
* **Number of Epochs**: 2
* **Warmup Ratio**: 0.05
* **Maximum Sequence Length**: 4,096 tokens
* **Weight Decay**: 0.01
* **Maximum Gradient Norm**: 0.5
* **Learning Rate Scheduler**: Cosine
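
The exact training script is not included here; as a rough sketch, the hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows. The output directory is a placeholder, and any option not listed above is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters above onto TrainingArguments
training_args = TrainingArguments(
    output_dir="qwen2.5-coder-7b-function-writing",  # placeholder path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=24,   # effective batch size of 24 per device
    learning_rate=1e-6,
    num_train_epochs=2,
    warmup_ratio=0.05,
    weight_decay=0.01,
    max_grad_norm=0.5,
    lr_scheduler_type="cosine",
    bf16=True,                        # BF16 mixed precision (see LoRA Configuration)
)
```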

### LoRA Configuration

* **Rank (r)**: 32
* **Alpha**: 32
* **Dropout**: 0.05
* **Target Modules**: q_proj, v_proj, o_proj, gate_proj, up_proj
* **Training Mode**: BF16 mixed precision
* **RS-LoRA**: Enabled
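
In PEFT terms, this corresponds approximately to the following `LoraConfig`; treat it as a sketch of the settings above rather than the exact training code.

```python
from peft import LoraConfig

# Approximate reconstruction of the LoRA setup described above
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "o_proj", "gate_proj", "up_proj"],
    use_rslora=True,          # rank-stabilized LoRA scaling
    task_type="CAUSAL_LM",
)
```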
46
+
47
+ ### Training Infrastructure
48
+
49
+ * **Quantization**: 4-bit quantization (NF4)
50
+ * **Attention Implementation**: Flash Attention 2
51
+ * **Memory Optimization**: Gradient checkpointing enabled
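
These settings correspond roughly to the following Transformers setup; this is an illustrative sketch that assumes `bitsandbytes` and `flash-attn` are installed.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization, as described above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-7B",
    quantization_config=bnb_config,
    attn_implementation="flash_attention_2",  # requires flash-attn
    device_map="auto",
)
model.gradient_checkpointing_enable()  # trade compute for memory during training
```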

## Usage

This model is optimized for function-writing tasks and can be loaded with the Hugging Face Transformers library. Here is a basic example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "path_to_your_model",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(
    "path_to_your_model",
    trust_remote_code=True
)

# Generate text
input_text = "Write a function that..."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
78
+
79
+ ## Limitations
80
+
81
+ * The model is specifically fine-tuned for function writing tasks and may not perform optimally for general code generation or other tasks
82
+ * Maximum context length during fine-tuning was limited to 4,096 tokens
83
+ * While the base model supports up to 128K tokens, using beyond 4,096 tokens may require additional validation

## License

This model inherits the Apache 2.0 license from its base model, Qwen2.5-Coder-7B.

## Citation

If you use this model, please cite the original Qwen2.5-Coder paper and acknowledge the fine-tuning work:

```bibtex
@article{hui2024qwen2,
  title={Qwen2.5-Coder Technical Report},
  author={Hui, Binyuan and Yang, Jian and Cui, Zeyu and Yang, Jiaxi and Liu, Dayiheng and Zhang, Lei and Liu, Tianyu and Zhang, Jiajun and Yu, Bowen and Dang, Kai and others},
  journal={arXiv preprint arXiv:2409.12186},
  year={2024}
}
```