Active filters: int4
modelscope/Yi-1.5-9B-Chat-AWQ · Text Generation · 22 downloads
modelscope/Yi-1.5-34B-Chat-GPTQ · Text Generation · 35 downloads · 1 like
jojo1899/Phi-3-mini-128k-instruct-ov-int4 · Text Generation · 17 downloads
jojo1899/Llama-2-13b-chat-hf-ov-int4 · Text Generation · 229 downloads
jojo1899/Mistral-7B-Instruct-v0.2-ov-int4 · Text Generation · 232 downloads
model-scope/glm-4-9b-chat-GPTQ-Int4 · Text Generation · 51 downloads · 6 likes
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit · Text Generation · 210 downloads · 3 likes
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit · Text Generation · 32 downloads
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit · Text Generation · 334 downloads · 4 likes
ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit · Text Generation · 308 downloads · 1 like
neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 · Text Generation · 67k downloads · 23 likes
angeloc1/llama3dot1SimilarProcesses4 · Text Generation · 16 downloads
angeloc1/llama3dot1DifferentProcesses4 · Text Generation · 14 downloads
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit · Text Generation · 21 downloads · 2 likes
ModelCloud/EXAONE-3.0-7.8B-Instruct-gptq-4bit · 2 downloads · 3 likes
angeloc1/llama3dot1FoodDel4v05 · Text Generation · 16 downloads
ModelCloud/GRIN-MoE-gptq-4bit · 3 downloads · 6 likes
joshmiller656/Llama3.2-1B-AWQ-INT4 · 41 downloads
Advantech-EIOT/intel_llama-3.1-8b-instruct
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v1 · Text Generation · 149 downloads · 2 likes
jojo1899/llama-3_1-8b-instruct-ov-int4
tclf90/qwen2.5-72b-instruct-gptq-int4 · Text Generation · 208 downloads
jojo1899/Phi-3.5-mini-instruct-ov-int4
neuralmagic/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic · Text Generation · 4 downloads
neuralmagic/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-quantized.w4a16 · Text Generation · 12 downloads
nintwentydo/pixtral-12b-2409-W4A16-G128 · Image-Text-to-Text · 147 downloads
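Every checkpoint above stores its weights in 4-bit (int4) form, whether via AWQ, GPTQ, OpenVINO int4, or a w4a16 scheme. As a rough, library-agnostic sketch of the storage saving involved (not the exact packing layout any of these formats uses), here is how two unsigned 4-bit values can be packed into a single byte, halving the footprint relative to int8:

```python
def pack_int4(values):
    """Pack a list of unsigned 4-bit values (0..15) into bytes,
    two per byte: even index -> low nibble, odd index -> high nibble."""
    if len(values) % 2 != 0:
        raise ValueError("need an even number of values")
    return bytes(
        (values[i] & 0x0F) | ((values[i + 1] & 0x0F) << 4)
        for i in range(0, len(values), 2)
    )

def unpack_int4(packed):
    """Inverse of pack_int4: each byte yields two 4-bit values."""
    out = []
    for b in packed:
        out.append(b & 0x0F)         # low nibble first
        out.append((b >> 4) & 0x0F)  # then high nibble
    return out

weights = [1, 15, 7, 0, 9, 3]
packed = pack_int4(weights)
assert len(packed) == len(weights) // 2   # half the bytes of int8 storage
assert unpack_int4(packed) == weights     # lossless round trip
```

Real int4 formats add per-group scales and zero points on top of a packing like this; the nibble layout here is purely illustrative.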