Active filters: minitron
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python_Spanish_English_16bit
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python_English_Asistant-16bit-v2
Agnuxo/Meta-Llama-3.1-8B-Instruct-Depth-Base-Instruct_CODE_Python_Spanish_English_lora_model
Agnuxo/Meta-Llama-3.1-8B-Instruct_CODE_Python_Spanish_English_16bit
Agnuxo/Meta-Llama-3.1-8B-Instruct_CODE_Python_English_Asistant-16bit-v2
Agnuxo/Meta-Llama-3.1-8B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit
Agnuxo/Meta-Llama-3.1-8B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agnuxo/Meta-Llama-3.1-8B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k
Agnuxo/Meta-Llama-3.1-8B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k
Agnuxo/Meta-Llama-3.1-8B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Agnuxo/Meta-Llama-3.1-8B-Instruct_CODE_Python_English_GGUF_16bit
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python-Spanish_English_GGUF_4bit
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python-GGUF_Spanish_English_8bit
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python_English_GGUF_16bit
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_32bit
mradermacher/Meta-Llama-3.1-8B-Instruct_CODE_Python_English_Asistant-16bit-v2-GGUF
mradermacher/Meta-Llama-3.1-8B-Instruct_CODE_Python_English_Asistant-16bit-v2-i1-GGUF
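As a minimal sketch of how one of the GGUF quantizations above might be run locally with llama-cpp-python: the repository id is taken from the list, but the exact .gguf filename inside it is an assumption, so a glob pattern is used and should be verified against the repository's file listing.

```python
# Minimal sketch (assumptions noted inline): load a GGUF quantization from the
# list above with llama-cpp-python and run a single chat completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Agnuxo/Meta-Llama-3.1-8B-Instruct_CODE_Python-Spanish_English_GGUF_4bit",
    filename="*.gguf",  # glob; assumes one GGUF file in the repo -- confirm the real filename
    n_ctx=4096,         # context window size
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```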