onekq-ai's Collections
Ollama-ready Coding Models
Updated Oct 19, 2024
For inference. A CPU is enough for both quantization and inference.
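All of the models below ship as GGUF files, which Ollama can load directly from a local Modelfile. A minimal sketch, assuming you have already downloaded a quantized file; the filename and quantization level (`starcoder2-3b.Q4_K_M.gguf`) are illustrative, not taken from the collection:

```
# Modelfile — points Ollama at a local GGUF file (path is an assumption)
FROM ./starcoder2-3b.Q4_K_M.gguf

# Conservative sampling defaults for code generation
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
```

Registering and running it then needs no GPU: `ollama create starcoder2-3b -f Modelfile` followed by `ollama run starcoder2-3b` starts a CPU-only session.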
- QuantFactory/starcoder2-15b-GGUF — Text Generation · updated Sep 6, 2024 · 46 downloads · 2 likes
- bartowski/starcoder2-15b-instruct-GGUF — Text Generation · updated Mar 6, 2024 · 336 downloads · 2 likes
- bartowski/starcoder2-15b-instruct-v0.1-GGUF — Text Generation · updated Apr 30, 2024 · 187 downloads · 6 likes
- QuantFactory/starcoder2-7b-GGUF — Text Generation · updated Sep 5, 2024 · 98 downloads · 2 likes
- QuantFactory/starcoder2-3b-GGUF — Text Generation · updated Sep 5, 2024 · 478 downloads · 3 likes
- QuantFactory/starcoder2-3b-instruct-GGUF — Text Generation · updated Oct 18, 2024 · 52 downloads · 1 like
- QuantFactory/starcoder2-3b-instruct-v0.1-GGUF — updated Oct 18, 2024 · 142 downloads · 1 like
- QuantFactory/Qwen2.5-Coder-7B-GGUF — Text Generation · updated Sep 19, 2024 · 339 downloads · 2 likes
- Qwen/Qwen2.5-7B-Instruct-GGUF — Text Generation · updated Sep 20, 2024 · 15k downloads · 42 likes
- QuantFactory/Qwen2.5-Coder-1.5B-GGUF — Text Generation · updated Sep 19, 2024 · 329 downloads · 2 likes
- Qwen/Qwen2.5-1.5B-Instruct-GGUF — Text Generation · updated Sep 20, 2024 · 41.3k downloads · 25 likes
- bartowski/DeepSeek-Coder-V2-Lite-Instruct-GGUF — Text Generation · updated Jun 20, 2024 · 13.7k downloads · 92 likes
- QuantFactory/DeepSeek-Coder-V2-Lite-Base-GGUF — Text Generation · updated Jun 24, 2024 · 184 downloads · 1 like
- QuantFactory/starcoder2-7b-instruct-GGUF — Text Generation · updated Oct 18, 2024 · 169 downloads · 1 like