---
base_model: Spestly/Atlas-Pro-1.5B-Preview
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- llama-cpp
- gguf-my-repo
license: mit
language:
- en
- zh
- fr
- es
- pt
- de
- it
- ru
- ja
- ko
- vi
- th
- ar
- fa
- he
- tr
- cs
- pl
- hi
- bn
- ur
- id
- ms
- lo
- my
- ceb
- km
- tl
- nl
datasets:
- openai/gsm8k
- HuggingFaceH4/ultrachat_200k
library_name: transformers
---

# Triangle104/Atlas-Pro-1.5B-Preview-Q4_K_M-GGUF
This model was converted to GGUF format from [`Spestly/Atlas-Pro-1.5B-Preview`](https://huggingface.co/Spestly/Atlas-Pro-1.5B-Preview) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Spestly/Atlas-Pro-1.5B-Preview) for more details on the model.

---
## Model details

Atlas Pro (previously known as '🏆 Atlas-Experiment 0403 🧪' in AtlasUI) is an advanced language model (LLM) built on top of Atlas Flash. It's designed to provide exceptional performance for professional tasks like coding, mathematics, and scientific problem-solving. Atlas Pro builds on Atlas Flash by adding more fine-tuning and specialization, making it well suited to researchers and advanced users.

### Key Features

- Improved Problem-Solving: Handles tricky tasks in programming, math, and the sciences better than most models.
- Advanced Code Generation: Produces clean and efficient code, but may still miss edge cases occasionally.
- Domain Expertise: Focused on technical and scientific domains, but works well in general contexts too.
- Reasoning Improvement: In this version of Atlas, I have enhanced its reasoning via synthetic data from models such as Gemini 2.0 Flash Thinking.

### Intended Use Cases

Atlas Pro works best for:


- Technical Professionals: Helping developers, engineers, and scientists solve complex problems.
- Educational Assistance: Offering clear, step-by-step help for students and teachers.
- Research Support: Assisting in theoretical and applied science work.
- Enterprise Tools: Integrating into company workflows for smarter systems.

### NOTICE

Atlas Pro is built on Atlas Flash and improved to meet high standards. Here’s how it’s made:

- Base Model: Built upon Atlas Flash, which is already quite capable.
- Fine-Tuning Details:
  - Used datasets specific to programming, math, scientific challenges, and overall reasoning ability.
  - Refined its performance for professional scenarios.
- Performance Highlights: Beats benchmarks with high accuracy, though occasional tweaks might still improve outputs.

### Limitations

- Knowledge Cutoff: It doesn’t know about anything recent unless updated.
- Hardware Requirements: Needs high-end GPUs to run smoothly.
- Specialization Bias: While strong in its focus areas, its general chat capabilities may not match those of other models.
- Token Leakage: In some very rare cases (~1/167), Atlas Pro will experience some token leakage.

### Licensing

Atlas Pro is released under the MIT license, which prohibits harmful uses. Make sure to follow the terms of the license agreement.

### Acknowledgments

Created by Spestly as part of the Astral Model Family, Atlas Pro builds on the strong foundation of Atlas Flash. Special thanks to DeepSeek's R1 Qwen distills for helping make it happen.

### Usage

You can use Atlas Pro with this code snippet:


```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Atlas Pro model
model_name = "Spestly/Atlas-R1-Pro-1.5B-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Generate a response
prompt = "Write a Python function to calculate the Fibonacci sequence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=200)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
```
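
Since the base model is a Qwen2-family chat model, you may get better results by formatting the prompt with the tokenizer's chat template rather than passing raw text. A minimal sketch building on the snippet above (the exact template behaviour depends on the upstream tokenizer):

```python
# Optional: prompt via the tokenizer's chat template instead of a raw string.
# This reuses `tokenizer` and `model` from the snippet above.
messages = [
    {"role": "user", "content": "Write a Python function to calculate the Fibonacci sequence."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```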

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Atlas-Pro-1.5B-Preview-Q4_K_M-GGUF --hf-file atlas-pro-1.5b-preview-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Atlas-Pro-1.5B-Preview-Q4_K_M-GGUF --hf-file atlas-pro-1.5b-preview-q4_k_m.gguf -c 2048
```
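
Once `llama-server` is running, it exposes an OpenAI-compatible chat endpoint that you can call from any HTTP client. A minimal sketch in Python, assuming the default host and port (`localhost:8080`) and no API key:

```python
import requests

# Query llama-server's OpenAI-compatible chat endpoint
# (default host/port assumed; adjust if you started the server with different options).
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Write a Python function to calculate the Fibonacci sequence."}
        ],
        "max_tokens": 256,
    },
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```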

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Atlas-Pro-1.5B-Preview-Q4_K_M-GGUF --hf-file atlas-pro-1.5b-preview-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/Atlas-Pro-1.5B-Preview-Q4_K_M-GGUF --hf-file atlas-pro-1.5b-preview-q4_k_m.gguf -c 2048
```