# **Taurus-Opus-7B**

Taurus-Opus-7B is built upon the LLaMA (Large Language Model Meta AI) 7B architecture, optimized to provide advanced reasoning capabilities while maintaining efficiency. With 7 billion parameters, it strikes a balance between performance and computational resource requirements. The model has been fine-tuned with a focus on chain-of-thought (CoT) reasoning, leveraging specialized datasets to enhance its problem-solving abilities. Taurus-Opus-7B is designed for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to applications such as instruction following, text generation, and coding assistance.

# **Key Features and Improvements**
1. **Optimized Reasoning Capabilities**: The model shows significant improvements in context understanding, reasoning, and mathematical problem-solving through fine-tuning on long CoT datasets.

2. **Enhanced Instruction Following**: Taurus-Opus-7B excels at generating long, coherent outputs (up to 4K tokens), understanding structured data, and producing structured outputs such as JSON (see the sketch after the quickstart below).

3. **Lightweight Efficiency**: Its 7B parameter size makes it more resource-efficient than larger models while retaining high-quality performance on reasoning and content generation tasks.

4. **Long-Context Support**: Supports contexts of up to 64K tokens, enabling the model to handle large documents or extended conversations (a token-budget sketch follows this list).

5. **Multilingual Proficiency**: The model supports 20+ languages, including English, Spanish, French, German, Portuguese, Chinese, and Japanese, making it suitable for global applications.
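
As a quick way to stay within that 64K window, here is a minimal token-budget sketch. It assumes the `model` and `tokenizer` objects created in the quickstart below; the file name is hypothetical:

```python
# Count tokens in a long document before sending it to the model.
# max_position_embeddings is the usual LLaMA config field for the context
# limit; fall back to 64K if it is absent (an assumption, not documented here).
limit = getattr(model.config, "max_position_embeddings", 65536)

with open("long_report.txt") as f:  # hypothetical long input
    document = f.read()

n_tokens = len(tokenizer(document).input_ids)
if n_tokens <= limit:
    print(f"{n_tokens} of {limit} tokens used; input fits")
else:
    print(f"{n_tokens} tokens exceeds the {limit}-token context; chunk the input")
```
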
# **Quickstart with transformers**

Here’s a code snippet to load **Taurus-Opus-7B** using the `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-organization/Taurus-Opus-7B"

# Load the model with automatic dtype selection and device placement
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the importance of chain-of-thought reasoning in large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant with expertise in logical reasoning and problem-solving."},
    {"role": "user", "content": prompt}
]

# Apply the chat template and move the inputs to the model's device
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from the returned sequences
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
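
The feature list above mentions structured outputs such as JSON; here is a minimal sketch of prompting for one, reusing `model` and `tokenizer` from the quickstart. The system prompt and the `topic`/`summary` keys are illustrative assumptions, not a documented schema:

```python
import json

# Ask the model to answer strictly in JSON (illustrative schema).
json_messages = [
    {"role": "system", "content": "Answer only with valid JSON using the keys 'topic' and 'summary'."},
    {"role": "user", "content": "Summarize chain-of-thought reasoning in one sentence."}
]
json_text = tokenizer.apply_chat_template(json_messages, tokenize=False, add_generation_prompt=True)
json_inputs = tokenizer([json_text], return_tensors="pt").to(model.device)

out_ids = model.generate(**json_inputs, max_new_tokens=256)
out_ids = [out[len(inp):] for inp, out in zip(json_inputs.input_ids, out_ids)]
raw = tokenizer.batch_decode(out_ids, skip_special_tokens=True)[0]

# Prompting alone does not guarantee valid JSON, so parse defensively.
try:
    data = json.loads(raw)
except json.JSONDecodeError:
    data = {"topic": None, "summary": raw}  # fall back to the raw text
print(data)
```

When strict structure is required, constrained decoding or post-generation validation is more reliable than prompting alone.
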
# **Intended Use**
1. **Reasoning and Context Understanding**: Taurus-Opus-7B is tailored for complex reasoning tasks, contextual understanding, and solving problems that require logical deduction.

2. **Mathematical Problem-Solving**: Designed for advanced mathematical reasoning and calculations, making it valuable for education, research, and engineering tasks.

3. **Code Assistance**: Provides robust coding support, including writing, debugging, and optimizing code across multiple programming languages.

4. **Data Analysis**: Excels at analyzing structured data and generating structured outputs, such as the JSON sketch above, aiding automation workflows and data-driven insights.

5. **Multilingual Support**: Facilitates applications such as multilingual chatbots, content generation, and translation in 20+ languages (a short sketch follows this list).

6. **Extended Content Generation**: Suitable for generating detailed reports, articles, and instructional guides, handling outputs up to 4K tokens.
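
To illustrate the multilingual use case, here is a minimal sketch reusing `model` and `tokenizer` from the quickstart; the Spanish prompt is an arbitrary example:

```python
# Same chat-template pattern as the quickstart, with a Spanish prompt.
es_messages = [
    {"role": "system", "content": "Eres un asistente útil."},  # "You are a helpful assistant."
    {"role": "user", "content": "Explica brevemente el razonamiento en cadena de pensamiento."}
]
es_text = tokenizer.apply_chat_template(es_messages, tokenize=False, add_generation_prompt=True)
es_inputs = tokenizer([es_text], return_tensors="pt").to(model.device)

es_ids = model.generate(**es_inputs, max_new_tokens=256)
es_ids = [out[len(inp):] for inp, out in zip(es_inputs.input_ids, es_ids)]
print(tokenizer.batch_decode(es_ids, skip_special_tokens=True)[0])
```
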
# **Limitations**
1. **Hardware Requirements**: While more efficient than larger models, Taurus-Opus-7B still requires high-memory GPUs or TPUs for optimal performance.

2. **Language Quality Variations**: Output quality may vary across supported languages, especially for less commonly used ones.

3. **Creativity Limitations**: The model may generate repetitive or inconsistent results on creative or highly subjective tasks.

4. **Real-Time Knowledge Constraints**: The model lacks awareness of events or knowledge updates beyond its training data.

5. **Prompt Dependency**: Results depend heavily on the specificity and clarity of input prompts; well-structured queries yield the best results (see the example below).
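
To illustrate that last point, compare a vague prompt with a more structured rewrite; both strings are illustrative:

```python
# Vague: audience, scope, and output format are unspecified.
vague_prompt = "Tell me about sorting."

# Structured: names the task, audience, required content, and format.
structured_prompt = (
    "Explain how merge sort works to a first-year CS student. "
    "Cover the divide, conquer, and merge steps, state the O(n log n) "
    "time complexity, and end with a three-line pseudocode outline."
)
```

Feeding `structured_prompt` through the quickstart pipeline typically yields a far more focused answer than `vague_prompt`.
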